Thiemo Nagel wrote:
For errors occurring at the level of hard-disk blocks (signature: most
bytes of the block have D errors, all with the same z), the probability of
multi-disc corruption going undetected is ((n-1)/256)**512.  This might
pose a problem in the limiting case of n=255; for practical
applications, however, this probability is negligible, as it drops off
exponentially with decreasing n.
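
For a rough sense of scale, a quick sketch (plain Python, only evaluating the
formula quoted above; the values of n are picked arbitrarily for illustration):

    # Probability that multi-disc corruption mimics a clean single-disc
    # signature, assuming 512 independent, uniformly random bytes per block.
    for n in (4, 8, 64, 255):
        p = ((n - 1) / 256.0) ** 512
        print("n = %3d  ->  p = %.3g" % (n, p))

For n=255 this comes out around 2%; for smaller n it rapidly becomes
astronomically small.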

That assumes a fully random data distribution, which is almost certainly a
false assumption.

Agreed.  This means that the formula only specifies a lower limit on the
probability.  However, is there an argument for why a pathological case
would be probable, i.e. why the probability would be likely to *vastly*
deviate from the theoretical limit?  And if there is, would that argument
not also apply to other RAID-6 operations (like "check")?  And would it help
to use different Galois field generators at different positions within a
sector instead of a single uniform generator?
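
For reference, here is a minimal sketch of the per-byte locate step that
produces the "same z" signature mentioned above (GF(256) with the usual
RAID-6 polynomial 0x11d and generator 2; the helper names and layout are
illustrative only, not the kernel's code):

    POLY = 0x11d          # x^8 + x^4 + x^3 + x^2 + 1

    def gf_mul(a, b):
        """Multiply two bytes in GF(2^8) modulo POLY."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a = ((a << 1) ^ POLY) if (a & 0x80) else (a << 1)
            b >>= 1
        return r

    # log/antilog tables for the generator g = 2
    EXP, LOG = [0] * 256, [0] * 256
    x = 1
    for i in range(255):
        EXP[i], LOG[x] = x, i
        x = gf_mul(x, 2)

    def locate(data, p_stored, q_stored):
        """data[i] is disc i's byte at one offset; return the implied error location."""
        p = q = 0
        for i, d in enumerate(data):
            p ^= d
            q ^= gf_mul(EXP[i], d)         # Q = sum over i of g^i * D_i
        dp, dq = p ^ p_stored, q ^ q_stored
        if not dp and not dq:
            return None                    # this byte looks clean
        if not dp or not dq:
            return 'P' if not dq else 'Q'  # looks like a parity-only error
        inv_dp = EXP[(255 - LOG[dp]) % 255]
        return LOG[gf_mul(dq, inv_dp)]     # z = log_g(dq / dp)

With corruption confined to a single data disc z, every affected byte offset
returns the same z; with corruption on several discs, the per-offset results
disagree, except with the per-byte probability discussed above.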


What you call "pathological" cases are very common in real-world data. It is not at all unusual to find sectors filled entirely with a constant (usually zero, but not always), in which case your **512 becomes **1.
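
To put rough numbers on that collapse (a sketch only; the 10-disc array is
hypothetical, and the assumption is simply that a constant-filled sector
behaves like one independent byte rather than 512):

    n = 10
    per_byte = (n - 1) / 256.0     # ~3.5%
    print(per_byte ** 512)         # random fill: vanishingly small
    print(per_byte ** 1)           # constant fill: ~3.5%, no longer negligible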

That doesn't mean it isn't worthwhile, but don't try to claim it is anything other than opportunistic.

        -hpa