Rich Freeman wrote:
> On Fri, Aug 26, 2022 at 7:26 AM Dale <rdalek1...@gmail.com> wrote:
>> I looked into the Raspberry Pi and the newest version, about $150 now,
>> doesn't even have SATA ports.
> The Pi4 is definitely a step up from the previous versions in terms of
> IO, but it is still pretty limited.  It has USB3 and gigabit, and they
> don't share a USB host or anything like that, so you should get close
> to full performance out of both.  The CPU is of course pretty limited,
> as is RAM.  Biggest benefit is the super-low power consumption, and
> that is something I take seriously as for a lot of cheap hardware that
> runs 24x7 the power cost rapidly exceeds the purchase price.  I see
> people buying old servers for $100 or whatever and those things will
> often go through $100 worth of electricity in a few months.
>
> How many hard drives are you talking about?  There are two general
> routes to go for something like this.  The simplest and most
> traditional way is a NAS box of some kind, with RAID.  The issue with
> these approaches is that you're limited by the number of hard drives
> you can run off of one host, and of course if anything other than a
> drive fails you're offline.  The other approach is a distributed
> filesystem.  That ramps up the learning curve quite a bit, but for
> something like media where IOPS doesn't matter it eliminates the need
> to try to cram a dozen hard drives into one host.  Ceph can also do
> IOPS but you're talking 10GbE + NVMe and big bucks, and that is how
> modern server farms would do it.
>
> I'll describe the traditional route since I suspect that is where
> you're going to end up.  If you only had 2-4 drives total you could
> probably get away with a Pi4 and USB3 drives, but if you want
> encryption or anything CPU-intensive you're probably going to
> bottleneck on the CPU.  It would be fine if you're more concerned with
> capacity than speed.
>
> For more drives than that, or just to be more robust, then any
> standard amd64 build will be fine.  Obviously a motherboard with lots
> of SATA ports will help here.  However, that almost always is a
> bottleneck on consumer gear, and the typical solution to that for SATA
> is a host bus adapter.  They're expensive new, but cheap on ebay (I've
> had them fail though, which is probably why companies tend to sell
> them while they're still working).  They also use a ton of power -
> I've measured them using upwards of 60W - they're designed for servers
> where nobody seems to care.  A typical HBA can provide 8-32 SATA
> ports, via mini-SAS breakout cables (one mini-SAS port can provide 4
> SATA ports).  HBAs tend to use a lot of PCIe lanes - you don't
> necessarily need all of them if you only have a few drives and they're
> spinning disks, but it is probably easiest if you get a CPU with
> integrated graphics and use the 16x slot for the HBA.  That, or get a
> motherboard with two large slots (the second slot usually isn't 16x,
> and even 4-8x slots aren't all that common on consumer motherboards).
>
> For software I'd use mdadm plus LVM.  ZFS or btrfs are your other
> options, and those can run on bare metal, but btrfs is immature and
> ZFS cannot be reshaped the way mdadm can, so there are tradeoffs.  If
> you want to use your existing drives and don't have a backup to
> restore or want to do it live, then the easiest option there is to add
> one drive to the system to expand capacity.  Put mdadm on that drive
> as a degraded raid1 or whatever, then put LVM on top, and migrate data
> from an existing disk live over to the new one, freeing up one or more
> existing drives.  Then put mdadm on those and LVM and migrate more
> data onto them, and so on, until everything is running on top of
> mdadm.  Of course you need to plan how you want the array to look and
> have enough drives that you get the desired level of redundancy.  You
> can start with degraded arrays (which is no worse than what you have
> now), then when enough drives are freed up they can be added as pairs
> to fill it out.
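>
> Roughly, the first round might look something like this (the device
> names and the VG name below are just placeholders, and it assumes the
> existing drives are already LVM PVs in one volume group):
>
>   # new drive shows up as /dev/sdX (placeholder); create a degraded
>   # two-device raid1 with only that drive present for now
>   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX missing
>
>   # make the new array an LVM PV and add it to the existing VG
>   pvcreate /dev/md0
>   vgextend myvg /dev/md0
>
>   # migrate extents off an old drive live, then drop it from the VG
>   pvmove /dev/sdY /dev/md0
>   vgreduce myvg /dev/sdY
>   pvremove /dev/sdY
>
>   # later, once /dev/sdY is freed up, complete the mirror
>   mdadm /dev/md0 --add /dev/sdY
>
> Repeat with the next freed-up drive and the next array until everything
> sits on top of mdadm.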
>
> If you want to go the distributed storage route then CephFS is the
> canonical solution at this point but it is RAM-hungry so it tends to
> be expensive.  It is also complex, but there are ansible playbooks and
> so on to manage that (though playbooks with 100+ plays in them make me
> nervous).  For something simpler MooseFS or LizardFS are probably
> where I'd start.  I'm running LizardFS but they've been on the edge of
> death for years upstream and MooseFS licensing is apparently better
> now, so I'd probably look at that first.  I did a talk on lizardfs
> recently: https://www.youtube.com/watch?v=dbMRcVrdsQs
>


This is some good info.  It will likely start off with just a few hard
drives but that will grow over time.  I also plan to keep a large drive
as a spare, in case one starts having issues and needs replacing
quickly.  I'd really like to be using RAID, at least the two-copies
kind, but that may take time, plus I've got to learn how to do the
thing. ;-)

I may use NAS software.  I've read about and seen people run that from a
USB stick or something.  They say once it is booted up, it doesn't need
to access the USB stick much.  I guess it is set to load into memory at
boot up, like some rescue media does.  I think that uses ZFS for the
file system, which is a little like LVM.  The bad thing about using that,
though, is that I may not be able to just move the drives over to the
new NAS since it may not handle LVM well.  I don't know much about that
yet.  I may end up having to buy drives and just rsync everything over.
One good thing, I have a gigabit-capable router.  It even has fast wifi
if I ever need that.  Plus, I'll have extra drives depending on how I
work this.
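
From what I've read, the copy itself would be something like this once
the new box is on the network (the paths and the hostname here are just
made-up examples):

  rsync -aHAX --info=progress2 /home/media/ newbox:/mnt/media/

I'll have to study the man page before I trust it with anything, though.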

My first thing is to buy a case to organize all this.  I looked at a
Fractal Design Node 804 case.  It is a NAS case.  I think it can handle
a wide variety of mobos too.  It can also hold a lot of drives.  Eight
if I recall.  If I put the OS on a USB stick or something, that is a lot
of bays for data drives.  I might be able to add a drive cage if needed.
It's a fair sized case and certainly a good start.

It will take me a while to build all this.  Thing is, since I'll use it
like a backup tool, I need to be able to put it in a safe place.  I wish
I could get a fire safe to stick it into.  The biggest dimension of the
Node 804 is 16" I think, and it has plenty of cooling as well.  I wish I
had another Cooler Master HAF-932 like I have now, but that almost
certainly won't fit in a fire safe.  Dang thing is huge.  ROFL

Lots to think on. 

Dale

:-)  :-) 
