On Thu, Oct 25, 2012 at 7:35 AM, Jim Klimov <jimkli...@cos.ru> wrote:

>
> If scrubbing works the way we "logically" expect it to, it
> should enforce validation of such combinations for each read
> of each copy of a block, in order to ensure that parity sectors
> are intact and can be used for data recovery if a plain sector
> fails. Likely, raidzN scrubs should show as compute-intensive
> tasks compared to similar mirror scrubs.
>

It should only be about as compute-intensive as writes: the scrub can read
the userdata and parity sectors, verify that the userdata checksum matches
(reconstructing and doing a fresh write in the rare case it does not), then
recalculate the parity sectors from the verified user sectors and compare
them to the parity sectors it actually read.  The only time it would need
the combinatorial approach is on a checksum mismatch, when it has to
rebuild the data in the presence of bit rot.
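
For what it's worth, here is a minimal sketch of that per-stripe check. It
is not the actual ZFS code (vdev_raidz.c is far more involved); it assumes
a hypothetical raidz1-style stripe with plain XOR parity, a toy checksum
standing in for fletcher4/sha256, and made-up names (scrub_stripe,
gen_parity, NDATA, SECTOR). The point is just the shape of the argument:
the common path is one checksum plus one parity recompute, and anything
combinatorial is only reached on a checksum mismatch.

    /*
     * Hypothetical sketch of the scrub check described above, for a
     * single-parity stripe with XOR parity and a toy checksum.
     * NOT the real ZFS code; names and layout are made up.
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define NDATA   4       /* data sectors per stripe (assumption) */
    #define SECTOR  512     /* sector size in bytes (assumption)    */

    /* Stand-in for the block checksum stored in the parent blkptr. */
    static uint64_t checksum(uint8_t data[NDATA][SECTOR])
    {
            uint64_t sum = 0;
            for (int d = 0; d < NDATA; d++)
                    for (int i = 0; i < SECTOR; i++)
                            sum = sum * 31 + data[d][i];
            return sum;
    }

    /* Recompute XOR parity from the data sectors. */
    static void gen_parity(uint8_t data[NDATA][SECTOR], uint8_t parity[SECTOR])
    {
            memset(parity, 0, SECTOR);
            for (int d = 0; d < NDATA; d++)
                    for (int i = 0; i < SECTOR; i++)
                            parity[i] ^= data[d][i];
    }

    /*
     * Scrub one stripe: 0 = consistent, 1 = data bad (would need the
     * combinatorial reconstruction path), 2 = parity sector stale.
     */
    static int scrub_stripe(uint8_t data[NDATA][SECTOR],
        const uint8_t parity_on_disk[SECTOR], uint64_t expected_cksum)
    {
            uint8_t parity_calc[SECTOR];

            if (checksum(data) != expected_cksum) {
                    /*
                     * Only here would the expensive path run: try each
                     * combination of sectors assumed bad until the
                     * checksum matches, then rewrite the damaged copy.
                     */
                    return (1);
            }

            /* Data verified; re-derive parity and compare to what was read. */
            gen_parity(data, parity_calc);
            if (memcmp(parity_calc, parity_on_disk, SECTOR) != 0)
                    return (2);     /* rewrite the stale parity sector */

            return (0);
    }

    int main(void)
    {
            uint8_t data[NDATA][SECTOR] = {{0}};
            uint8_t parity[SECTOR];

            data[0][0] = 0xab;              /* some payload */
            gen_parity(data, parity);
            uint64_t cksum = checksum(data);

            printf("clean stripe  -> %d\n", scrub_stripe(data, parity, cksum));
            parity[0] ^= 0xff;              /* simulate a rotted parity sector */
            printf("rotted parity -> %d\n", scrub_stripe(data, parity, cksum));
            return (0);
    }

Note that even in the "rotted parity" case this costs one checksum, one
parity recompute and a memcmp, with no reconstruction attempts at all.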

> I thought about it, wondered and posted the question, and went
> on to my other work. I did not (yet) research the code to find out
> first-hand, partly because gurus might know the answer and reply
> faster than I dig into it ;)
>

I recently wondered this as well and am glad you asked; I hope someone can
answer definitively.

Tim
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
