On Friday, November 9, 2018 3:29:52 AM CET Rich Freeman wrote:
> On Thu, Nov 8, 2018 at 8:16 PM Dale <rdalek1...@gmail.com> wrote:
> > I'm trying to come up with a
> > plan that allows me to grow easier and without having to worry about
> > running out of motherboard based ports.
> 
> So, this is an issue I've been changing my mind on over the years.
> There are a few common approaches:
> 
> * Find ways to cram a lot of drives on one host
> * Use a patchwork of NAS devices or improvised hosts sharing over
> samba/nfs/etc and end up with a mess of mount points.
> * Use a distributed FS
> 
> Right now I'm mainly using the first approach, and I'm trying to move
> to the last.  The middle option has never appealed to me.

I'm actually in the middle, but have a single large NAS.

> So, to do more of what you're doing in the most efficient way
> possible, I recommend finding used LSI HBA cards.  These have mini-SAS
> ports on them, and one of these can be attached to a breakout cable
> that gets you 4 SATA ports.  I just picked up two of these for $20
> each on ebay (used) and they have 4 mini-SAS ports each, which is
> capacity for 16 SATA drives per card.  Typically these have 4x or
> larger PCIe interfaces, so you'll need a large slot, or one with a
> cutout.  You'd have to do the math but I suspect that if the card+MB
> supports PCIe 3.0 you're not losing much if you cram it into a smaller
> slot.  If most of the drives are idle most of the time then that also
> demands less bandwidth.  16 fully busy hard drives obviously can put
> out a lot of data if reading sequentially.

I also recommend LSI HBA cards; they work really well and are well
supported by Linux.
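
For what it's worth, a quick sanity check that such a card is detected and
which kernel driver grabbed it (a rough sketch; the driver is mpt2sas or
mpt3sas depending on the generation of the chip):

  lspci -nn | grep -i -e lsi -e sas     # find the HBA on the PCIe bus
  dmesg | grep -i mpt                   # confirm the mpt2sas/mpt3sas driver bound to it
  lsblk -o NAME,SIZE,MODEL,SERIAL       # attached drives show up as plain SATA/SAS disks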

> You can of course get more consumer-oriented SATA cards, but you're
> lucky to get 2-4 SATA ports on a card that runs you $30.  The mini-SAS
> HBAs get you a LOT more drives per PCIe slot, and your PCIe slots are
> your main limiting factor assuming you have power and case space.
>
> Oh, and those HBA cards need to be flashed into "IT" mode - they're
> often sold this way, but if they support RAID you want to flash the IT
> firmware that just makes them into a bunch of standalone SATA slots.
> This is usually a PITA that involves DOS or whatever, but I have
> noticed some of the software needed in the Gentoo repo.

Even with the RAID firmware, they can be configured for JBOD.
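
For anyone who does want the IT firmware anyway, the procedure on a
SAS2008-based card looks roughly like this (a sketch only; the .bin/.rom
file names here are the ones from LSI's 9211-8i IT package and will differ
for other models, and erasing the flash is at your own risk):

  sas2flash -listall                    # note controller number and current firmware
  sas2flash -o -e 6 -c 0                # erase the existing IR/RAID flash on controller 0
  sas2flash -o -f 2118it.bin -b mptsas2.rom -c 0   # write IT firmware plus the boot ROM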

> If you go that route it is just like having a ton of SATA ports in
> your system - they just show up as sda...sdz and so on (no idea where
> it goes after that).

I tested this once and ended up getting sdaa, sdab, ...
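
Which is also a good reason not to refer to the disks by their sdX names in
fstab or when building an array; with that many drives the letters can
shuffle around between boots. The serial-number based links are stable
(the path below is just an example):

  ls -l /dev/disk/by-id/ | grep -v -- -part   # one stable symlink per whole disk
  # e.g. /dev/disk/by-id/ata-WDC_WD80EFAX-68KNBN0_VA0000000 -> ../../sdq
  # use these names instead of /dev/sdX when creating the array or pool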

> Software-wise you just keep doing what you're
> already doing (though you should be seriously considering
> mdadm/zfs/btrfs/whatever at that point).

I would suggest ZFS or BTRFS over mdadm. They give you more flexibility and
are a logical follow-up to LVM.
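
To make that concrete, a minimal sketch of both options (pool name, mount
point and disk paths are placeholders; raidz2 and btrfs raid1 are just
example redundancy profiles):

  # ZFS: one command creates the array and the filesystem on top of it
  zpool create tank raidz2 \
      /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
      /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
  zfs create tank/media

  # BTRFS: profiles are chosen at mkfs time and can be converted later
  mkfs.btrfs -d raid1 -m raid1 \
      /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
  mount /dev/disk/by-id/ata-DISK1 /mnt/storage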

> That is the more traditional route.
> 
> Now let me talk about distributed filesystems, which is the more
> scalable approach.  I'm getting tired of being limited by SATA ports,
> and cases, and such.  I'm also frustrated with some of zfs's
> inflexibility around removing drives.

IMHO, ZFS is nice for large storage devices, not so much for regular
desktops. This is why I am hoping BTRFS will solve its resilver issues.
(I haven't kept up; is this still not working?)
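
The add/remove flexibility Rich mentions is the part BTRFS already does
well, for what it's worth; roughly (with /mnt/storage as a placeholder
mount point):

  btrfs device add /dev/sdX /mnt/storage      # grow the filesystem with another disk
  btrfs balance start /mnt/storage            # redistribute data over the new layout
  btrfs device remove /dev/sdY /mnt/storage   # shrink: data is migrated off the disk first
  btrfs scrub start /mnt/storage              # verify checksums across all devices afterwards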

--
Joost


