I wasn't referring to write corruption but to the chance of losing two drives at once.  If a drive fails in your R1, it still needs to be replaced and rebuilt before the second drive also fails.

A two-drive failure is also the main worry in R5 systems, but the chance of two drives failing is larger on R5 because there are more drives that could fail.

Once the first drive fails in a R1, you only lose data if the one remaining drive also fails.
Once the first drive fails in a R5, you lose data if any one of the N-1 remaining drives fails.
If the reliability of any single drive is .99 over a given time frame, the reliability of N of them together is (.99)^N.  So the surviving drive in a mirror is .99 reliable, while the set of 3 remaining drives in a 4-drive R5 is (.99)^3, or about .97.
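To make that concrete, here is a rough back-of-the-envelope sketch in Python. It assumes a 4-drive R5, independent drive failures, and a per-drive reliability r over the rebuild window; the function name and numbers are just for illustration, and it ignores rebuild stress and correlated failures, which matter in practice.

# Sketch: probability the array survives the rebuild window after
# the first drive has already failed, assuming each remaining drive
# independently keeps working with reliability r.

def survival_after_first_failure(r, total_drives):
    """All remaining drives must survive until the rebuild finishes."""
    remaining = total_drives - 1
    return r ** remaining

r = 0.99
print("RAID 1 (2 drives):", survival_after_first_failure(r, 2))  # 0.99
print("RAID 5 (4 drives):", survival_after_first_failure(r, 4))  # ~0.9703

So with these assumptions the degraded mirror is about .99 reliable and the degraded 4-drive R5 about .97, matching the figures above.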

On 3/23/06, BillC <[EMAIL PROTECTED]> wrote:

jimdibb Wrote:
>
> For R5 vs. R1, R5 is cheaper and the real reliability is only slightly
> worse than mirrors (gets worse with wider RAID groups, but also gets
> cheaper).  R1 needs back-ups just as much as R5 does.
>
In the general case I'll agree that R1 needs backups as much as R5. In
the case of a music library, the chance of a write corruption that
propagates across the mirrors is pretty low, since almost all access is
read-only. In this case I think R1 is functionally pretty close to a
backup. But as you said, R1 has only a slight reliability advantage and
a cost disadvantage compared to R5 - which is why I'm using R5 in the
first place.

I guess I'll just keep on with my R5 array, with the physical media
that I've ripped as my "back-up".


--
BillC
------------------------------------------------------------------------
BillC's Profile: http://forums.slimdevices.com/member.php?userid=1235
View this thread: http://forums.slimdevices.com/showthread.php?t=22379

_______________________________________________
Discuss mailing list
Discuss@lists.slimdevices.com
http://lists.slimdevices.com/lists/listinfo/discuss
