On 11/13/22 13:02, hw wrote:
On Fri, 2022-11-11 at 07:55 -0500, Dan Ritter wrote:
hw wrote:
On Thu, 2022-11-10 at 20:32 -0500, Dan Ritter wrote:
Linux-Fan wrote:


[...]
* RAID 5 and 6 restoration incurs additional stress on the other
   disks in the RAID which makes it more likely that one of them
   will fail. The advantage of RAID 6 is that it can then recover
   from that...

Disks are always being stressed when used, and they're stressed just as well when RAID arrays other than level 5 or 6 are being rebuilt.  And is there evidence that disks fail *because* RAID arrays are being rebuilt, or would they have failed anyway when stressed?

Does it matter? The observed fact is that some notable
proportion of RAID 5/6 rebuilds fail because another drive in
that group has failed.

Fortunately, I haven't observed that.  And why would only RAID 5 or 6 be
affected and not RAID 1 or other levels?


Any RAID level can suffer additional disk failures while recovering from a disk failure. I saw this exact scenario on my SOHO server in August 2022. The machine has a stripe of two mirrors of two HDDs each (i.e., the ZFS equivalent of RAID 10). One disk was dying, so I replaced it. While the replacement disk was resilvering, a disk in the other mirror started dying. I let the first resilver finish, then replaced the second disk. Thankfully, no more disks failed. I got lucky.
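For what it's worth, the exposure difference between the levels falls out of simple probability. A rough sketch (the per-disk failure probability p is an assumed illustrative number, not measured data from anyone's array): in a 6-disk RAID 5, after one failure, losing *any* of the 5 surviving disks during the rebuild is fatal; in a 6-disk RAID 10, only the dead disk's mirror partner is fatal.

```python
def p_fatal_second_failure(fatal_disks, p):
    """Probability that at least one disk whose loss would be fatal
    fails during the rebuild window, assuming each disk fails
    independently with probability p over that window."""
    return 1 - (1 - p) ** fatal_disks

# Assumed per-disk failure probability during one rebuild window.
p = 0.02

# RAID 5, 6 disks: after one failure, ANY of the 5 survivors is fatal.
raid5 = p_fatal_second_failure(5, p)

# RAID 10, 6 disks: only the failed disk's mirror partner is fatal;
# the other 4 disks can still fail without killing the array.
raid10 = p_fatal_second_failure(1, p)

print(f"RAID 5 : {raid5:.3f}")   # ~0.096
print(f"RAID 10: {raid10:.3f}")  # 0.020
```

So with the same disks and the same stress, a parity array is roughly five times as likely to lose data during the rebuild here, simply because more of the surviving disks are single points of failure. It can still happen at any level, as my resilver story shows, just less often.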


David
