On Sep 28, 2009, at 3:42 PM, Albert Chin wrote:

> On Mon, Sep 28, 2009 at 12:09:03PM -0500, Bob Friesenhahn wrote:
>> On Mon, 28 Sep 2009, Richard Elling wrote:
>>
>>> Scrub could be faster, but you can try
>>>         tar cf - . > /dev/null
>>>
>>> If you think about it, validating checksums requires reading the data.
>>> So you simply need to read the data.
>>
>> This should work but it does not verify the redundant metadata.  For
>> example, the duplicate metadata copy might be corrupt but the problem
>> is not detected since it did not happen to be used.
>
> Too bad we cannot scrub a dataset/object.

Can you provide a use case? I don't see why scrub couldn't start and
stop at specific txgs, for instance. That won't necessarily get you to a
specific file, though.
 -- richard
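
A rough sketch of the read-everything approach suggested above; the
mountpoint /tank/fs and pool name tank are placeholders, not from the
thread:

    #!/bin/sh
    # Reading every file forces ZFS to verify the checksum of whichever
    # copy of each block it actually reads.  /tank/fs is hypothetical.
    cd /tank/fs && tar cf - . > /dev/null

    # Any checksum errors hit while reading are reported here:
    zpool status -v tank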
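
Since reading through the filesystem only checks the copies ZFS happens
to fetch, a whole-pool scrub remains the way to verify all redundant
copies, ditto-block metadata included. A minimal sketch, again with the
hypothetical pool name tank:

    #!/bin/sh
    # Scrub walks every allocated block in the pool and verifies every
    # copy, then records errors per device.
    zpool scrub tank
    zpool status -v tank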

