On Sun, May 3, 2020 at 6:50 PM hitachi303
<gentoo-u...@konstantinhansen.de> wrote:
>
> The only person I know who is running a really huge raid (I guess 2000+
> drives) is comfortable with some spare drives. His raid did fail and can
> fail. Data will be lost. Everything important has to be stored at a
> secondary location. But they are using the raid to store data for some
> days or weeks while a server is calculating stuff. If the raid fails they
> have to restart the program for the calculation.

So, if you have thousands of drives, you really shouldn't be using a
conventional RAID solution.  Now, if you're just using "RAID" to refer
to any technology that stores data redundantly, that is one thing.
However, if you want to stick 2000 drives into a single host using
something like mdadm/zfs, or heaven forbid a bazillion LSI HBAs with
some kind of hacked-up solution for PCIe port replication plus SATA
port multipliers/etc, you're probably doing it wrong.  (Really, even
with mdadm/zfs you'd still need some terribly non-optimal way of
attaching all those drives to a single host.)

At that scale you really should be using a distributed filesystem.  Or
you could use some application-level solution that accomplishes the
same thing on top of a bunch of more modest hosts running zfs/etc (the
Backblaze approach, at least in the past).

The most mainstream FOSS solution at this scale is Ceph.  It achieves
redundancy at the host level.  That is, if you have it set up to
tolerate two failures, you can take two random hosts in the cluster
and smash their motherboards with a hammer in the middle of operation,
and the cluster will keep working and quickly restore its redundancy.
Each host can have multiple drives, and losing any or all of the
drives within a single host counts as a single failure.  You can even
do clever stuff like tell it which hosts are attached to which circuit
breakers, and then you could lose every host on a single power circuit
at once and it would be fine.
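
To make the failure-domain idea concrete, here's a toy sketch in plain
Python.  To be clear, this is NOT Ceph's actual CRUSH algorithm, and the
host/circuit names are invented; it just shows what "no two copies behind
the same breaker" means in practice:

    # Toy illustration of failure-domain-aware placement (not Ceph's
    # real CRUSH algorithm): pick replica hosts so that no two copies
    # share a failure domain (here, a power circuit).
    import random

    # Hypothetical topology: host -> power circuit it's plugged into.
    TOPOLOGY = {
        "node01": "circuit-a",
        "node02": "circuit-a",
        "node03": "circuit-b",
        "node04": "circuit-b",
        "node05": "circuit-c",
        "node06": "circuit-c",
    }

    def place_replicas(topology, num_replicas, seed=None):
        """Choose num_replicas hosts, each on a distinct power circuit."""
        rng = random.Random(seed)
        hosts = list(topology)
        rng.shuffle(hosts)
        chosen, used_circuits = [], set()
        for host in hosts:
            circuit = topology[host]
            if circuit in used_circuits:
                continue  # another copy already sits behind this breaker
            chosen.append(host)
            used_circuits.add(circuit)
            if len(chosen) == num_replicas:
                return chosen
        raise ValueError("not enough independent failure domains")

    if __name__ == "__main__":
        # Three copies, each on a different circuit: tripping any one
        # breaker takes out at most one copy of the data.
        print(place_replicas(TOPOLOGY, 3, seed=42))

The real version of this in Ceph is the CRUSH map: you describe the
topology as buckets (osd/host/rack/pdu/etc) and the placement rule
spreads replicas across whatever failure-domain type you pick.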

This also has the benefit of covering you when one of your flaky
drives causes weird bus issues that affect other drives, or one host
crashes, and so on.  The redundancy is entirely at the host level, so
you're protected against a much wider range of failure modes.

This sort of solution also performs much better, since data requests
aren't bottlenecked by the CPU, NIC, or HBA of any single host.  The
software is obviously more complex, but the hardware can be simpler:
if you want to expand storage you just buy more servers and plug them
into the LAN, versus trying to figure out how to cram an extra dozen
hard drives into a single host with all kinds of port multiplier
games.  You can also do maintenance and reboot an entire host while
the cluster stays online, as long as you aren't messing with too many
of them at once.

I've gone in this general direction because I was tired of having to
deal with massive cases, being limited to motherboards with 6 SATA
ports, adding LSI HBAs that need an x8 slot and often conflict with
using an NVMe drive, and so on.

-- 
Rich
