On 7/16/20 9:52 AM, Ali via cctalk wrote:
> I have never used a SW RAID solution (except for a RAID 0 on Win2K3 for the boot drive)

Are you sure that was RAID 0 (zero), /striping/? I've never heard of /software/ RAID 0 (striping) for the /boot/ drive in Windows. I would expect that to be RAID 1, or something other than the drive with NTLDR on it. I also suspect that the drive with %SystemRoot% on it would need to be more conducive to loading driver and software RAID support files very early in the boot process.

> and have used HW controllers in my more recent systems (I am partial to the Areca controllers - cheap but effective with a good feature mix).

I've completely lost track of hardware RAID controllers. I'm now more interested in IT-mode (initiator target) HBAs to use with ZFS-based software RAID.

> What I find problematic with RAID (especially RAID 6) is that with the larger drives in use today, build (or, more importantly, rebuild/recovery) times are extremely long. Long enough that, statistically, you could have a second drive failure during that time.

That's one of the reasons that ZFS supports up to three drives' worth of parity in addition to the data space: RAID-Z1 / Z2 / Z3.
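The "second drive failure during the rebuild" risk can be put in rough numbers. Every figure below is an illustrative assumption on my part (not a measurement from anyone's hardware): a 2% annual failure rate per drive, a 3-day rebuild window, and 7 surviving drives in the array.

```shell
# Rough odds of a second drive failing during a long rebuild window.
# All inputs are illustrative assumptions: 2% AFR per drive, a 3-day
# rebuild, 7 surviving drives.
awk 'BEGIN {
  afr = 0.02; days = 3; drives = 7
  p_one = afr * days / 365              # per-drive failure chance in the window
  p_any = 1 - (1 - p_one) ^ drives      # chance at least one survivor fails
  printf "P(second failure during rebuild) = %.6f\n", p_any
}'
```

Roughly one rebuild in a thousand, per array, with those inputs; scale the AFR or the rebuild window up and it gets uncomfortable quickly.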

I think we are quickly getting to the point, if not past it, where a /single/ RAID array can't safely hold the entirety of the necessary storage. Instead, I see multiple smaller RAID arrays aggregated together at a higher layer.

I've seen this done by striping / JBODing / LVMing / etc. multiple discrete RAID arrays together in the OS.

ZFS natively does this by striping (RAID 0) across multiple underlying RAID sets (of whatever RAID level you want).
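As a sketch of what that looks like in practice (pool and device names are hypothetical), a single zpool create builds the stripe across two double-parity RAID sets; no separate striping / LVM layer is needed:

```shell
# Hypothetical pool and device names. ZFS stripes writes across all
# top-level vdevs automatically, so this is effectively RAID 0 over
# two raidz2 (double-parity) sets of six drives each.
zpool create tank \
  raidz2 sda sdb sdc sdd sde sdf \
  raidz2 sdg sdh sdi sdj sdk sdl
```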

This is an article (for the layman), written in 2010, predicting that RAID 6 would stop being usable by 2019: www.zdnet.com/article/why-raid-6-stops-working-in-2019/. I found the math in it interesting and the conclusions pretty true to my experience.
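The heart of that style of argument fits in one line. The 12 TB array size and the 1-per-10^14-bits unrecoverable read error (URE) rate below are my illustrative assumptions (the latter is a common consumer-drive spec), not figures taken from the article:

```shell
# Expected unrecoverable read errors (UREs) while reading an entire
# array during a rebuild. Assumptions (illustrative): 12 TB of data
# to read, consumer-class URE rate of 1 per 1e14 bits.
awk 'BEGIN {
  bits = 12 * 10^12 * 8     # 12 TB expressed in bits
  rate = 1 / 10^14          # one unrecoverable error per 1e14 bits read
  printf "expected UREs during rebuild: %.2f\n", bits * rate
}'
```

When that expectation approaches 1, a RAID 5 rebuild is more likely than not to hit a read error with no remaining redundancy to repair it, which is why the argument then moves on to RAID 6.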

> I am wondering if SW RAID is faster in rebuild times by now (using the full power of multi-core processors) vs. a dedicated HW controller (even one with dual cores).

I think that the CPU overhead / computation time is now largely insignificant. To me, one of the biggest issues is the sheer amount of data that needs to be read from and written to multiple drives. Even at full interface speed, some drives can take a LONG time to transfer all of their data. What's worse, the sustained I/O speed to platters of spinning rust is significantly slower than the interface speed.
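To put a number on "a LONG time", here's a back-of-the-envelope sketch; the 10 TB capacity and 200 MB/s sustained rate are assumed figures, and this ignores seeks and the inner-track slowdown entirely:

```shell
# Best-case time to read one drive end to end.
# Assumptions (illustrative): 10 TB drive, 200 MB/s sustained.
capacity=$((10 * 1000 * 1000 * 1000 * 1000))   # bytes
speed=$((200 * 1000 * 1000))                   # bytes per second
echo "$((capacity / speed / 3600)) hours"      # whole hours, best case
```

So a full-drive rebuild is half a day of continuous I/O per drive even under ideal conditions, before the array serves a single foreground request.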

This is where some intelligence in the RAID implementation is really nice. There is very little need to rebuild the as-yet-unused area of a big RAID array. ZFS shines here in that it only (re)builds the area that actually has data on it. Only have a few hundred GB on that multi-TB RAID array consisting of multiple 1 TB drives? Fine. It only needs to check those few hundred GB. It's actually quite fast.
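For example (pool and device names hypothetical), replacing a failed disk kicks off a resilver that walks ZFS's block tree, so only allocated blocks get copied:

```shell
# Hypothetical pool and device names. "zpool replace" starts a
# resilver; because ZFS resilvers from its block tree, only allocated
# blocks are copied, not the new drive's whole raw capacity.
zpool replace tank sdc sdm
zpool status tank    # reports resilver progress over allocated data
```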



--
Grant. . . .
unix || die
