On Mon, Jun 3, 2024 at 11:17 AM Dale <rdalek1...@gmail.com> wrote:
>
> When you say HBA.  Is this what you mean?
>
> https://www.ebay.com/itm/125486868824
>

Yes.  Typically they have mini-SAS interfaces, and you can get a
breakout cable that will attach one of those to 4x SATA ports.

Some things to keep in mind when shopping for HBAs:
1. Check for Linux compatibility.  Not every card has great support.
2. Flashing the firmware may require Windows, and a flash may be
needed to switch a card between RAID mode and IT mode.  IT mode is
what you almost certainly want, and RAID mode is what most enterprise
admins tend to have them flashed as.  IT mode basically exposes every
attached drive as a standalone drive, while RAID mode only exposes a
limited number of virtual interfaces and the card bundles the disks
into arrays (and if the card dies, good luck ever reading those disks
again until you reformat them).  A quick way to see the difference
from Linux is sketched just after this list.
3. Be aware they often use a ton of power.
4. Take note of internal vs external ports.  You can get either.
They need different cables, and if your disks are inside the case,
having the ports on the outside isn't technically a show-stopper, but
it isn't exactly convenient.
5. Take note of the interface speed and size.  The card you linked
is (I think) an 8x PCIe v2 card.  PCIe will auto-negotiate down, so
if you plug that card into your v4 4x slot it will run at v2 4x,
which is about 2GB/s of bandwidth.  That's half of what the card is
capable of, but probably not a big issue.  If you want to plug 16
enterprise SSDs into it then you'll definitely hit the PCIe
bottleneck, but if you plug 16 consumer 7200RPM HDDs into it you're
only going to hit 2GB/s under fairly ideal circumstances, and with
fewer HDDs you can't hit it at all.  If you pay more you'll get a
newer PCIe revision, which means more bandwidth for a given number of
lanes.  The rough arithmetic is sketched just after this list.
6. Check for hardware compatibility too.  Cards from 1st parties
like Dell/etc might be fussy about wanting to be in a Dell server,
with weird firmware interactions with the motherboard.  A 3rd-party
card like an LSI is probably less of an issue here, but check.
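
To make the IT-vs-RAID distinction in point 2 concrete, here's a
rough Python sketch (just my own illustration, reading standard Linux
sysfs paths, not output from any HBA tool) that lists block devices
and their reported model strings.  Behind a card in IT mode you'd
expect each physical disk to show up on its own; in RAID mode you'd
only see the virtual disk(s) the card exposes.

#!/usr/bin/env python3
# Rough sketch: list block devices and their reported model strings.
# Behind an HBA in IT mode each physical disk should appear here
# individually (sda, sdb, ...); in RAID mode you'd only see the
# card's virtual disk(s).  Not every device populates a model file.
import os

for dev in sorted(os.listdir("/sys/block")):
    model_path = f"/sys/block/{dev}/device/model"
    try:
        with open(model_path) as f:
            model = f.read().strip()
    except OSError:
        model = "(no model info)"
    print(f"{dev}: {model}")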

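For point 5, a very rough back-of-the-envelope comparison.  The
per-lane numbers are approximate usable bandwidth after encoding
overhead, and the 200MB/s HDD figure is just an assumption for a
decent consumer 7200RPM drive, so treat it as illustrative only.

#!/usr/bin/env python3
# Back-of-the-envelope: aggregate HDD throughput vs PCIe link bandwidth.
PCIE_LANE_MBPS = {2: 500, 3: 985, 4: 1969}  # approx usable MB/s per lane

def link_bandwidth(gen, lanes):
    return PCIE_LANE_MBPS[gen] * lanes  # MB/s

hdd_seq = 200      # MB/s sequential, generous for a consumer HDD (assumption)
n_drives = 16

link = link_bandwidth(2, 4)   # the 8x v2 card dropped into a 4x slot
demand = hdd_seq * n_drives
print(f"PCIe v2 4x link: ~{link} MB/s")
print(f"{n_drives} HDDs flat out: ~{demand} MB/s")
print("bottlenecked" if demand > link else "link has headroom")
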
Honestly, part of why I went the distributed filesystem route (Ceph
these days) is to avoid dealing with this sort of nonsense.  Granted,
now I'm looking to use more NVMe, and if you want high-capacity NVMe
that tends to mean U.2, plus dealing with bifurcation and PCIe
switches, and just a different sort of nonsense...

-- 
Rich
