Robert Milkowski wrote:
Bob Friesenhahn wrote:
On Mon, 28 Sep 2009, Richard Elling wrote:

Scrub could be faster, but you can try
    tar cf - . > /dev/null

If you think about it, validating checksums requires reading the data.
So you simply need to read the data.

This should work, but it does not verify the redundant metadata. For example, a duplicate metadata copy might be corrupt, yet the problem would go undetected because that copy did not happen to be read.


Not only that: it also won't read all copies of the data if the pool is configured with redundancy. Scrubbing the pool will. And that's the main reason for scrub: to detect and repair checksum errors (if any) while a redundant copy is still intact.


Also, doing a tar means reading from the ARC and/or L2ARC if the data is cached, which won't verify that the data is actually intact on disk. Scrub bypasses the cache and always goes to the physical disks.
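To make the contrast concrete, here is a minimal sketch of the two approaches. The pool name "tank" is an assumption for illustration; the `zpool` invocation is guarded so the snippet can run on systems without ZFS installed:

```shell
#!/bin/sh
# Walking the filesystem forces ZFS to verify checksums on the blocks it
# reads -- but only for the one copy actually chosen, and reads may be
# served from the ARC/L2ARC rather than the physical disks:
tar cf - . > /dev/null

# A scrub reads every allocated block -- all redundant copies, data and
# metadata -- from the physical disks, and repairs from a good copy.
# (Pool name "tank" is hypothetical; substitute your own.)
if command -v zpool >/dev/null 2>&1; then
    zpool scrub tank
    zpool status -v tank    # scrub progress, plus any checksum/repair counts
fi
```

`zpool status` is also where scrub results land: a nonzero CKSUM column, or a "repaired" byte count, indicates the scrub found and fixed damage that a plain read-walk might never have touched.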

--
Robert Milkowski
http://milek.blogspot.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
