On 2012-04-24 19:14, Tim Cook wrote:
Personally, unless the dataset is huge and you're using raidz3, I'd be
scrubbing once a week.  Even if it's raidz3, just do a window on Sundays
or something so that you make it through the whole dataset at least
once a month.

+1 I guess
Among other considerations, if the scrub does find irreparable errors,
you might still have recent-enough backups or other sources of the data,
so the situation won't be as fatal as when you only look for errors once
a year ;)
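
For example, something like this is enough to see whether the last
scrub turned up anything unrecoverable (the pool name "tank" is just a
placeholder for your own):

  # run a scrub, then check its outcome later; "status -v" also
  # lists any files with unrecoverable errors
  zpool scrub tank
  zpool status -v tank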

There's no reason NOT to scrub that I can think of other than the
overhead - which shouldn't matter if you're doing it during off hours.
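
For what it's worth, scheduling the scrub off-hours is a one-liner in
root's crontab, something like (the pool name and the timing here are
just examples):

  # scrub pool "tank" at 02:00 every Sunday
  0 2 * * 0 /usr/sbin/zpool scrub tank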

"I heard a rumor" that HDDs can detect reading flaky sectors
(i.e. detect a bit-rot error and recover thanks to ECC), and
in this case they would automatically remap the revocered
sector. So reading the disks in (logical) locations where
your data is known to be may be a good thing to prolong its
available life.

This of course kind of relies on disk reliability - i.e. the drive
should be rated for 24/7 operation and be within its warranted age
(the mechanics should be within acceptable wear). No guarantees with
other drives, although I don't think weekly scrubs would be fatal.
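
If you want to see whether a drive has actually been remapping sectors,
the SMART counters are the place to look - a sketch, assuming
smartmontools is installed (the device path is a placeholder, adjust
for your OS):

  # reallocated/pending sector counts from the SMART attribute table
  smartctl -A /dev/rdsk/c0t0d0 | egrep 'Reallocated|Pending'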

If only ZFS could queue scrubbing reads more linearly... ;)

//Jim

