It doesn't take much knowledge of history to see that a Raspberry Pi 5, with its pile of new features, is a computer that rivals infrastructure machines that are probably still in service. Hell, even an RPi 4 may fit that description.
I've been using RPis for about 10 years, and they've grown from a neat little embedded Linux platform into a full-blown computer that is perfectly usable as a general-purpose machine. I have an anti-static bag filled with just about every Pi board from the beginning. (Which troubles me a bit about the "disposability" of tech: they all still "work," but have no tangible value.)

I have an RPi 5 running ZFS, PostgreSQL, a DLNA server, and a full development stack that compiles just about any code I have lying around. These things have 8 GB of RAM, 4 CPU cores, draw about 15 watts, and cost less than a video card. My desktop is considerably bigger, but its core speed is only 2 or 3 times that of the ARM.

Virtualization sounds great, but a lot of people don't realize that CPU scheduling and timing don't give you a 1:1 relationship between a physical core and a virtual core. You may have 16 cores, but that does not translate into four effective 4-core VMs: the hypervisor has to time-slice vCPUs onto physical cores, so contention and steal time eat into each guest's throughput.

This is one of the reasons containers perform better. Containers are basically chroot jails with kernel-enforced security and namespaces (warning: oversimplification). Their processes are scheduled like every other process on your system. (See the sketch below.)

So: would a stack of RPi 5s, controlled by some sort of Docker look-alike, perform better than a huge VMware server? Would it perform better than a large Kubernetes cluster? Would they be more secure because they are physically separated?

Thoughts?
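To make the "chroot jail plus namespaces" point concrete, here is a minimal sketch in C, assuming Linux, glibc, and root privileges (the hostname "container" is just an illustration). It puts a child process into its own UTS and PID namespaces via clone(2); real runtimes like Docker layer cgroups, seccomp, capabilities, and overlay filesystems on top of the same primitives.

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)

/* Child runs in new UTS + PID namespaces: it can change the
 * hostname without touching the host's, and it sees itself as
 * PID 1, just like the init process inside a container. */
static int child(void *arg)
{
    (void)arg;
    if (sethostname("container", strlen("container")) == -1)
        perror("sethostname");
    printf("child: pid=%ld in its own PID namespace\n", (long)getpid());
    return 0;
}

int main(void)
{
    char *stack = malloc(STACK_SIZE);
    if (!stack) { perror("malloc"); return 1; }

    /* CLONE_NEWUTS and CLONE_NEWPID isolate hostname and PID
     * numbering; cgroups (not shown) would bound CPU and RAM.
     * The stack grows downward, so pass its top to clone(). */
    pid_t pid = clone(child, stack + STACK_SIZE,
                      CLONE_NEWUTS | CLONE_NEWPID | SIGCHLD, NULL);
    if (pid == -1) { perror("clone"); return 1; }

    waitpid(pid, NULL, 0);
    free(stack);
    return 0;
}

Note that the child here is an ordinary process on the host's run queue; the kernel schedules it alongside everything else, which is exactly why containers avoid the vCPU time-slicing tax that VMs pay.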