
Computers?



7 minutes ago, efaardvark said:

We have not gone to 128 bits because there is a cost for doing so and we don't really need it.  There's a whole bunch of reasons for the cost.  At the hardware level we use parallel buses.  That means that every bit gets its own data line.  If two wires touch then the computer stops working.  If two wires even just get too close then there's crosstalk and your computer becomes unreliable.  Routing all those wires across the motherboard to connect the CPU, memory, PCI buses, etc. becomes a problem the wider the bus is.  An 8-bit bus is easy to design.  16 bits is also pretty easy.  32 bits starts to get troublesome.  64 bits is downright tricky.  This is also a problem for chip design internally, for similar reasons.

Ok, so why even go to 64 bits then?  Well, with 32 bits you can only count to 4 billion.  If you have a file that you want to reference a particular byte of data in then your file can only be 4 gigabytes in size.  If you have memory addresses that you want to reference then you can only have 4GBytes of memory and still be able to reference each byte individually.  Lots of people want to use files more than 4GB in size or have more than 4GB of memory in their computers.  Yes, there's tricks like using two 32-bit registers to hold a single 64-bit number, but now things like your math libraries and other code at the software level get complicated and slow.  Worst case they have to do twice as much work and run half as fast.  It is worth it to go to 64 bit buses and let the hardware do most of the work, even though it makes the hardware a bit harder to design and more expensive to build.

Going to 128 bits would make the hardware extremely hard to design and build, as well as expensive.  At the same time, going from 64 bits to 128 bits on address buses and integers doesn't buy you nearly the gains that going from 32 to 64 bits did.  With 64 bits you can reference data in files that are up to 18,446,744,073,709,551,616 bytes in size, or individually address over 18 million terabytes of memory.  Very few people have files that big or computers with that much memory.  (Yet.)  Maybe we'll get there some day, but for now it isn't worth the costs.
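A minimal C sketch of those limits (just an illustration of the point above; the two-register addition is the textbook version of the "two 32-bit registers for one 64-bit number" trick, not anything from a real math library):

/* Minimal sketch: the 32-bit limits described above, plus emulating a
 * 64-bit add with two 32-bit halves the way 32-bit software had to. */
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    /* A 32-bit unsigned counter tops out just under 4 GiB... */
    uint32_t max32 = UINT32_MAX;               /* 4,294,967,295 */
    printf("32-bit limit: %" PRIu32 " bytes (~4 GiB)\n", max32);

    /* ...so a 32-bit file offset cannot reach byte 5,000,000,000. */
    uint64_t big_offset = 5000000000ULL;       /* needs 64 bits */
    printf("64-bit offset: %" PRIu64 "\n", big_offset);

    /* Emulated 64-bit addition: add the low words, carry into the high words. */
    uint32_t a_lo = 0xFFFFFFFFu, a_hi = 0x00000001u;   /* a = 0x1FFFFFFFF */
    uint32_t b_lo = 0x00000001u, b_hi = 0x00000000u;   /* b = 1           */
    uint32_t sum_lo = a_lo + b_lo;
    uint32_t carry  = (sum_lo < a_lo);                 /* wrap => carry    */
    uint32_t sum_hi = a_hi + b_hi + carry;
    printf("emulated 64-bit sum: 0x%08" PRIX32 "%08" PRIX32 "\n",
           sum_hi, sum_lo);                            /* 0x0000000200000000 */
    return 0;
}

Every 64-bit operation done this way is two adds plus carry handling instead of one instruction, which is exactly the "twice as much work, half as fast" worst case mentioned above.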

I see, that makes sense. This is stuff I have been studying pretty heavily since I am studying computer science, so any info certainly helps me. It also makes sense that there are no truly tangible gains for the expense, even in enterprise data-center environments, where clustered computing works just as well if not better for the price, based on what you're saying.

So assuming the cost wasn't a factor, what possible gains could we see from 128-bit hardware if, say, the spec was worked out?


16 minutes ago, Geano said:

So assuming the cost wasn't a factor, what possible gains could we see from 128-bit hardware if, say, the spec was worked out?

There really isn't any at this point.  At least not in the general case.  Besides huge files and massive amounts of memory there's really no benefit.  Maybe data transfer rates could double for a given clock rate on a bus that was twice as wide.  But if all you want to do is move masses of data around without processing it in any way you probably want to use optics and serial buses, especially if you're going any distance greater than the dimensions of your average motherboard.

That said, there are already parts of even common PCs that have data buses of 128 bits or more in width.  Some gfx cards for instance have data buses that are much wider than 128 bits.  I think I read somewhere that the Xbox has a 384 bit bus between the GDDR memory on the card and the GPU itself.  GFX cards are all about parallelism for speed and the routing isn't too complicated because you're only going between two points.. memory and GPU.  Some gfx cards have internal bus widths of up to 4096 bits!
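A rough back-of-the-envelope sketch of the width-vs-bandwidth point in C (the 2 GT/s transfer rate below is a made-up placeholder for illustration, not any real product's spec):

/* Theoretical peak bandwidth = (bus width in bits / 8) * transfers per second.
 * The transfer rate here is a placeholder, not a real memory spec. */
#include <stdio.h>

static double peak_gbps(unsigned bus_bits, double transfers_per_sec)
{
    return (bus_bits / 8.0) * transfers_per_sec / 1e9;   /* GB/s */
}

int main(void)
{
    double rate = 2.0e9;   /* assume 2 GT/s effective, purely for illustration */
    printf("64-bit bus  @ 2 GT/s: %6.1f GB/s\n", peak_gbps(64,  rate));
    printf("128-bit bus @ 2 GT/s: %6.1f GB/s\n", peak_gbps(128, rate));  /* 2x the 64-bit figure */
    printf("384-bit bus @ 2 GT/s: %6.1f GB/s\n", peak_gbps(384, rate));
    return 0;
}

Double the width at the same clock and the peak doubles, which is why GPU memory buses get so wide: it's the cheapest lever when you only have to route between two chips.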


51 minutes ago, efaardvark said:

There really isn't any at this point.  At least not in the general case. ...

Damn, that's kinda crazy when you put it in that perspective. I guess it's a matter of time; we'll get there when we get there, like everything else. And if we don't, it's probably a company like Microsoft slowing down the industry with their capitalism, like you mentioned prior.


  • 2 weeks later...

Linus (re)builds the other Linus's desktop PC, a 3970X Threadripper system that I would also love to own.  Black, quiet, and lots and lots of cores.  Just the way I like it. :)

 


3 hours ago, efaardvark said:

Linus (re)builds the other Linus's desktop PC, a 3970X Threadripper system that I would also love to own.  Black, quiet, and lots and lots of cores.  Just the way I like it. :)

 

I just watched this the other day. It was a pretty good vid. I liked how he kept up the puns.


  • 2 weeks later...

J2C just released a blurb on building a console-killer - even though the consoles aren't out yet and pricing hasn't even been announced.  Guess it was a slow news day.

Anyway, it does kind of follow my own thoughts wrt "Big Navi", aka RDNA2.  J2C basically said that you really can't build a PC with the same specs as the (claimed) specs of the new consoles.  This is true because right now there are no RDNA2-based gfx cards available for the PC.  (RDNA2 is the AMD GPU technology that is in both the new Xbox and PS consoles.)  To get that kind of performance from a currently-selling PC gfx card you would need to go with something in the $1,000+ range, so you would basically wind up spending 3-5x what the new consoles from MSFT and Sony are expected to cost.  Pretty much a non-starter for most people.

I think that situation is only going to last until the peak console sales are over, however.  There's probably some don't-rain-on-my-parade deal between the console makers and AMD that says, in return for guaranteed unit sales and maybe a bit of up-front money to help with factory tooling, AMD will hold off on making RDNA2- or RDNA3-based discrete gfx cards that can duplicate the consoles' performance until after the consoles have made their money.  After that, though, I think we'll see some major action on the PC side as well.  By that time the "PC Master Race" crowd will be in a frenzy and I can't see AMD shareholders allowing AMD to miss that opportunity.  (Exact timing depends on things like COVID and how long the console makers can keep the buzz alive after release, so I'm not even going to speculate there.)


Just ran across this interesting breakdown of gaming on linux using data from ProtonDB.  Like the article says, if you combine the Manjaro and Arch counts (Manjaro being a close Arch derivative) then it looks like Arch is the way to go if you're interested in gaming on linux.  (But then wouldn't you have to add the Mint numbers to Ubuntu?  Or heck, add Mint to Ubuntu - since Mint is an Ubuntu derivative - and then add that number to Debian, since Ubuntu is based on Debian.  Or.....?)  At least that's the way it seemed back in October according to Proton data... which of course only counts the people playing "Windows-only" titles on linux via Proton.

In other words, who knows what this means?  :D  Still, interesting...
https://boilingsteam.com/we-dont-game-on-the-same-distros-no-more/


Just saw this over on PC Gamer..

'At the beginning of the session Avkarogullari demonstrates the x86 version of Dirt Rally recorded directly from an Apple Silicon Mac "running the unmodified x86 compiled binary translated using Rosetta." And it seems to be running beautifully, with all the geometry, shading, and particle effects you should expect.'

Note this is not only translating x86 code on the fly, it is also translating the GPU code, since this is also using an Apple-designed GPU as well as an Apple-designed (ARM-based) CPU.  This is what you can do with RISC vs CISC.  Rosetta is basically doing for x86's CISC ISA what Java's JIT compilers do with Java bytecode.  If Apple is ditching the entire x86 ecosystem (AMD GPU as well as Intel CPU) then a company their size can now go literally anywhere they want.  For better or for worse.  I'm really, really interested in seeing where Apple takes this... and how much of the rest of the computing world follows them there.
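A toy sketch of the translate-once-and-cache idea behind that kind of dynamic binary translation (purely conceptual; Rosetta's real internals are far more sophisticated and not public, and the "guest ops" here are invented for the example):

/* Toy model of dynamic binary translation: each "guest" instruction is
 * translated once into a native routine and cached, then the cached native
 * code is reused on later encounters.  Conceptual only. */
#include <stdio.h>

typedef void (*native_fn)(long *acc);

/* Stand-ins for translated native code fragments. */
static void native_inc(long *acc) { (*acc)++; }
static void native_dbl(long *acc) { (*acc) *= 2; }

enum guest_op { OP_INC = 0, OP_DBL = 1, OP_COUNT };

static native_fn cache[OP_COUNT];             /* translation cache */

static native_fn translate(enum guest_op op)  /* the "expensive" step, done once */
{
    printf("translating guest op %d\n", op);
    return (op == OP_INC) ? native_inc : native_dbl;
}

int main(void)
{
    enum guest_op program[] = { OP_INC, OP_DBL, OP_INC, OP_DBL, OP_DBL };
    long acc = 1;
    for (size_t i = 0; i < sizeof program / sizeof program[0]; i++) {
        enum guest_op op = program[i];
        if (!cache[op])                       /* translate only on first use */
            cache[op] = translate(op);
        cache[op](&acc);                      /* run the cached "native" code */
    }
    printf("result: %ld\n", acc);             /* ((1+1)*2+1)*2*2 = 20 */
    return 0;
}

Same principle as a JIT for Java bytecode: pay the translation cost once per code block, then run at (near) native speed afterwards.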


System76 now has additional CPUs available for their "Serval WS" laptops running (Ubuntu-based) Pop!_OS linux, with Ryzen 5, 7, and 9 options.  Not cheap tho.  The 6-core Ryzen 5 3600 starts at $1,299.  The Ryzen 9 3900 option is a $435 upgrade from there.  Still, even the 3600 will beat a 10th-gen i7 at multiprocessing.  You also get your choice of a 1660 Ti w/6GB or an 8 GB 2070 for GFX, and of course an NVMe M.2 SSD.  This laptop is better than a lot of desktop systems.

That said, personally I'm still waiting for the Zen3+RDNA2 APUs to come out from AMD.  I get the feeling we haven't seen anything yet, especially where laptops are concerned.


10 hours ago, efaardvark said:

System76 now has additional CPUs available for their "Serval WS" laptops running (Ubuntu-based) Pop!_OS linux, with Ryzen 5, 7, and 9 options. ...

I see someone sure wants to keep this topic alive, which is more than appreciated by me.

I have been doing copious amounts of research on Linux, as you know, since I have been asking you endlessly about it.

While we're on the subject, what distro is Gentoo based on?  From my understanding it is not granddaddy Debian.


1 hour ago, Geano said:

...what distro is Gentoo based on?  From my understanding it is not granddaddy Debian.

It is pretty much a from-scratch custom distro.  The usual way of forking a distro by copying its packages doesn’t work if you don’t have packages.  :)  Not sure where Enoch (the Gentoo precursor) got its original source code from.


7 hours ago, efaardvark said:

It is pretty much a from-scratch custom distro.  The usual way of forking a distro by copying its packages doesn’t work if you don’t have packages.  :)  Not sure where Enoch (the Gentoo precursor) got its original source code from.

Yah, it certainly seems like a fair bit of work, though a fun afternoon project to be sure. Have you, or would you, build a Gentoo kernel?


1 hour ago, Geano said:

Yah, it certainly seems like a fair bit of work, though a fun afternoon project to be sure. Have you, or would you, build a Gentoo kernel?

I’ve never run Gentoo specifically but before Linux I was running BSD Unix on a Motorola-based system.  BSD has a “ports” system where everything is compiled from source that is very similar to what Gentoo does.  In fact, Gentoo’s system is named “portage” because it is based on the BSD ports system.

The BSD system I was on was an old Commodore Amiga 3000 with a 25 MHz 68030 CPU, 6 megabytes (mega, with an M) of RAM and a 60 megabyte HD.  (And two floppy drives. :) )  Back in those days, when CPU clock rates were measured in the tens of megahertz, everybody was still running 32-bit code, gigabytes of system memory was a fantastic dream, and upgrading anything meant spending a lot of money, so it made sense to spend the time to wring every last bit of performance and efficiency out of whatever code you were running.  A kernel can be a few hundred kilobytes to a few megabytes in size and you didn’t want to have a bunch of unused code (even stubs) or oversized arrays taking up what little memory you had.  Early on there weren’t modern conveniences like dynamic driver/module loading either.  All your driver code, even printer drivers, had to be compiled into your kernel.

These days it doesn’t matter nearly so much.  I probably would not build my own kernel.  The kernel is pretty well tuned at this point anyway, with much more in the way of things like dynamic memory allocation instead of static tables.  Very few device drivers are actually compiled into the kernel anymore either.  Everything is a module that is loaded on demand.  Since EGCS was merged into/took over from gcc the compiled code is a lot better as well.  It still makes some sense to optimize where you can, but in these 64-bit days you’re a lot less likely to run into system-crashing resource shortages.  And if you do then it is also a lot easier to just buy more RAM or a faster CPU.
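A minimal sketch of what "everything is a module loaded on demand" looks like in practice: the standard hello-world loadable-module skeleton, built out-of-tree against your distro's kernel headers.

/* hello.c -- minimal loadable kernel module, the "load on demand" style
 * described above.  Standard hello-world skeleton; build with the usual
 * obj-m Makefile against your installed kernel headers. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

static int __init hello_init(void)
{
    pr_info("hello: module loaded\n");    /* shows up in dmesg */
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Minimal example of a dynamically loadable driver module");

You load and unload it at runtime with insmod/rmmod (or modprobe), no kernel rebuild or reboot needed, which is exactly why compiling your own monolithic kernel buys you much less than it did in the Amiga/early-BSD days.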


2 hours ago, efaardvark said:

I’ve never run Gentoo specifically but before Linux I was running BSD Unix on a Motorola-based system. ...

Now that is some interesting stuff. I recently watched some videos on building a kernel from the ground up in Gentoo and it looked pretty fun to me; with enough experience it is something I could totally see myself doing.

So apart from Ubuntu, what Linux distros have you used?


3 hours ago, Geano said:

...So apart from Ubuntu, what Linux distros have you used?

My first Linux distro was something called Yggdrasil, mostly because it was the first one I found that had everything on a bootable cd.  (This was back in the dial-up modem days of the early ‘Net so not having to download everything was a big plus.)  After that I actually bought a system with RedHat pre-installed, but RH didn’t support that too well unless you had a support contract with them so I pretty quickly upgraded that to SuSE because SuSE had X.org on the disc.  X.org was good at keeping up-to-date with changes to the gfx drivers, even for motherboard chipsets with integrated gpus like the one on that mb.

I kind of liked SuSE so I stuck with it for several years.  (Gotta appreciate German-engineered open-source.)  In fact, until early this year my internet access was through an old mini-itx pc (with passively-cooled laptop cpu+gpu and an early 40G ssd so no moving parts) running SuSE and acting as a firewall/gateway.

I also had another SuSE box that I was running for a while as a household server.  That was actually an old surplussed server motherboard, with dual Athlon CPUs, a RAID card, and even ECC memory.  I thought that I would turn that into my “everything” server, but it was too big (full tower case), took too much power, and ran too hot to really be practical as a home server.  Eventually I replaced the Athlon beast with my current toaster-sized Synology NAS.  The NAS runs a custom Linux-based software package that Synology calls “DSM”.  That’s now my everything server running things like file, print, and backup services.  It is also where I run my Plex server.  At 55 Watts peak even while transcoding video and 14 while idle it is much more practical for that sort of thing.

On my desktop system(s) over the years I’ve run NetBSD, OpenBSD, the Yggdrasil Linux I mentioned, RedHat, SuSE, and Debian (briefly).  On my old AMD FX system I had Manjaro.  That was fine but when I upgraded to my current Ryzen 7 system a couple years ago I decided to try Ubuntu.  (Currently 20.04LTS.)  I’m not happy with some of the tricks Canonical is pulling but Ubuntu itself isn’t quite annoying enough to uninstall now that it is there.


4 hours ago, efaardvark said:

My first Linux distro was something called Yggdrasil, mostly because it was the first one I found that had everything on a bootable cd. ...

From my research a lot of people are saying pretty bad things about Canonical lately. What exactly are they doing that is so bad? I have seen some rather un-Linux-like closed-source software pushing, plus their forced use of the Snap Store by default. Though from what you have said, plus some things I have looked into since, it would seem most of the closed-source support is for hardware and porting common Windows software. Perhaps there are some political or business reasons I am presently unaware of; can you go into more detail on those, if any?


9 hours ago, Geano said:

Perhaps there are some political or business reasons I am presently unaware of; can you go into more detail on those, if any?

I think the business part of it is the main thing.  Canonical seems to be more open to... shall we say, experimental monetization opportunities that don't fit well with the rest of the open-source philosophy and therefore annoy the open-source faithful.

Even when there's no explicit money angle they still try to control things.  Snaps are a good example of that.  If they would just let everyone who wants to run their own "snap store" do so, then there wouldn't be nearly the resistance, but the way they've set things up you have to use the Ubuntu servers to distribute snap bundles.  See this blog for Mint Linux's view of snaps from a non-Canonical POV.  They're getting better in the specific case of snaps, but they started with the server side completely closed off.  It is like if Canonical started saying that only Ubuntu servers could be used to distribute Ubuntu's .deb files.  One of the great things about open-source and Linux is the ability to "fork the code" and go in a different direction if someone tries to force you down a path that you don't want to take.  The way Canonical initially tried to set up snap distribution there was no way to "fork the code".  If you wanted to use snaps at all then you had to do things Canonical's way.  One gets the sense that they're a wanna-be Microsoft or Apple, which is exactly what most of the people who are now using Linux are trying to get away from in the first place.

There's other, technical problems with snaps too.  Snaps bundle -all- dependencies together.  That makes them huge, relative to .deb or .rpm files.  A snap bundle can easily take up 10x the disk space of the same app delivered as a traditional .deb.  Because it is all self-contained there's also no way to integrate a snap app into the system in terms of things like desktop themes or file accesses.  If I have Discord and Minecraft both installed as snaps (the default for Ubuntu) then Discord won't pick up my desktop (gnome) themes and I won't be able to drag a file from the minecraft .screenshots directory into a discord window to create a message.  Snaps also put things in strange places.  It took me about 10 minutes to figure out where Ubuntu's Minecraft snap puts the .screenshots directory.  This after being forced to re-install MC on Ubuntu 20.04 because installing the snap broke my existing minecraft install that had been working perfectly!  Installing optifine to the correct minecraft directory was also a bit of a PITA, as I've mentioned elsewhere.  Finally, there is no way to turn off automatic updates on a snap.  This last alone makes it a non-starter in a lot of corporate environments where there is a requirement to certify a system as a whole.  Other attempts like flatpak or appimage to solve the same distribution problem as snaps don't have these sorts of issues.  (Though they do have others.)

I'm just using snaps as an example here.  This isn't the first time Canonical has done this sort of thing.  They can talk the talk but when it comes to walking the walk they don't seem to "get it".  They tend to start "closed" and only open things up when enough people complain.  The default should be "open".

 


  • 4 weeks later...

"This is a mouse.  These are windows."  :D  Man this takes me back.  Commodore's Amiga 1000 was first computer I ever bought for myself.  In 1985 there was no MSFT Windows(tm).  Windows 1.0 wasn't released for another year or so.  DOS was all they had, on 8-bit processors no less.  Color graphics?  Sound?  Multitasking?  On a computer?  Are you crazy?  Monochrome text and speaker beeps from one program at a time.  That's all the competition had.

 

 


1 hour ago, efaardvark said:

"This is a mouse.  These are windows."  :D  Man this takes me back.  

 

 

Probably not the right place to talk about guitar stuff and pedals. But the video you posted reminds me of most of the pedal tutorials on Death By Audio.

https://deathbyaudio.com/collections/fuzz


4 hours ago, efaardvark said:

"This is a mouse.  These are windows."  :D  Man this takes me back.  Commodore's Amiga 1000 was first computer I ever bought for myself.  In 1985 there was no MSFT Windows(tm).  Windows 1.0 wasn't released for another year or so.  DOS was all they had, on 8-bit processors no less.  Color graphics?  Sound?  Multitasking?  On a computer?  Are you crazy?  Monochrome text and speaker beeps from one program at a time.  That's all the competition had.

 

 

Now that is a piece of art for the ages, to be sure.

Oldie but a goodie :)


The "ASUS PB50 Mini PC" might be my next computer addition.  It is more of a business computer rather than a gaming etc. system for home users but it can be configured with up to a 8-core Ryzen 7 processor + Vega 10 gfx so it has enough horsepower to do useful things.  It is also small enough to be tucked away someplace out of the way in an entertainment center or on a corner of a cluttered desk.

I'm thinking of putting it in the den where my mom has her desk and TV.  I put Stardew Valley on her iPad a few weeks ago.  She likes the game but she's been saying that she wants it on her laptop so she can play it on the big screen (TV).  We have an adaptor/cable for the iPad but it is awkward to use and I usually have to set it up for her.  Her laptop works, and I could put Steam & SDV on it, but it struggles to mirror the desktop at HD resolutions.  Same situation for things like netflix.  The laptop is old, she's almost as annoyed by MacOS as I am, and she never uses it anywhere but at her desk anyway.  I'm thinking that one of these ASUS systems might simplify things for both her and me.

In terms of processing power and memory it would definitely be a step up from both her iPad and laptop.  As the family's admin staff :) I'd be able to put linux on it and admin it for her better than I can Apple's MacOS.  I could put Steam on it, hook it to the TV, and put a wireless keyboard on it so she could use it for things like streaming netflix or Plex, surfing the 'Net, or playing SDV on the big screen from the "comfy chair".  It supports up to 3 displays so I could just plug it into both her desk/computer monitor and the TV and leave it.  That way setup would be simply putting the application window on whatever screen she wants & I wouldn't have to help her with cabling it up every time.

I think it is worth a try.  If it doesn't work for that application then I'll use it to replace the old SFF system that I've been using as an Internet firewall/gateway.  That system isn't exactly broken and it worked fine back when we were on our 8 megabit DSL but it can't quite keep up with the new cable system's bandwidth.  It is about 10 years old at this point and I've been thinking it is time to replace it anyway but with the virus and work I just haven't gotten a round tuit yet.

 


  • 2 weeks later...

I need to purge more often.  Cleaning out some bookshelf space this weekend I found an ancient (in computer years) copy of "Caldera linux", copyright 1998, with the CD still in its sleeve attached to the back of the book.  :D  The CD showed signs of use and Wikipedia had some interesting things to say about Caldera, but I don't recall ever having a system that I'd installed this particular version of linux on.

[Attachments: two scanned pages of the Caldera linux book and a photo]

