On Mon, 2009-06-22 at 06:06 -0700, Richard Elling wrote:
> Nevertheless, in my lab testing, I was not able to create a random-enough
> workload to not be write limited on the reconstructing drive.  Anecdotal
> evidence shows that some systems are limited by the random reads.

Systems I've run whose reconstruction is limited by random reads have a
combination of:
 - regular time-based snapshots
 - daily cron jobs which walk the filesystem, accessing all directories
and updating all directory atimes in the process (a rough sketch of the
sort of job I mean is below).
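
For illustration only (not our actual script, and /tank is just a
placeholder path), the nightly walk amounts to something like this in
Python; simply listing each directory is enough to bump its atime when
atime updates are enabled:

#!/usr/bin/env python
# Visit every directory under ROOT.  os.walk() calls listdir() on each
# directory, and that read is what updates the directory's atime.
import os

ROOT = "/tank"        # placeholder dataset root

ndirs = 0
for dirpath, dirnames, filenames in os.walk(ROOT):
    ndirs += 1
print("directories visited:", ndirs)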

Because the directory dnodes are randomly distributed through the dnode
file, each block of the dnode file likely contains at least one
directory dnode, and as a result each of the tree walk jobs causes the
entire dnode file to diverge from the previous day's snapshot.
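
Back-of-the-envelope, taking the default 512-byte dnodes packed 32 to a
16K dnode-file block (the directory fractions below are made-up
examples): even a small fraction of directories makes it likely that
every block holds at least one, so an atime-only walk dirties close to
the whole dnode file.

# P(a dnode-file block contains at least one directory dnode) when a
# fraction f of all objects are directories and placement is random.
def p_block_has_dir(f, dnodes_per_block=32):
    return 1.0 - (1.0 - f) ** dnodes_per_block

for f in (0.01, 0.05, 0.10):
    print("f=%.2f  P=%.3f" % (f, p_block_has_dir(f)))
# f=0.01  P=0.275
# f=0.05  P=0.806
# f=0.10  P=0.966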

If the underlying filesystems are mostly static and there are dozens of
snapshots, a pool traverse spends most of its time reading the dnode
files and finding block pointers to older blocks which it knows it has
already seen.
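
Loosely, the pruning works like the sketch below (simplified, with
invented names, not the real traverse/dsl_scan code): the reads that do
get issued land on the freshly rewritten metadata such as the dnode
files, while most of the pointers found there were born before the
cutoff txg and are skipped without any further I/O.

from collections import namedtuple

# Minimal stand-in for a block pointer: birth txg plus child pointers.
BP = namedtuple("BP", "birth_txg children")

def traverse(bp, min_txg, reads):
    # Prune: anything born at or before the cutoff was already covered
    # by an older snapshot, so no read is issued for it or its subtree.
    if bp.birth_txg <= min_txg:
        return
    reads.append(bp)                  # the read I/O happens here
    for child in bp.children:
        traverse(child, min_txg, reads)

# A freshly written dnode-file block (txg 200) whose pointers mostly
# reference old blocks (txg 50): one new child, everything else pruned.
old = [BP(birth_txg=50, children=()) for _ in range(31)]
root = BP(birth_txg=200, children=old + [BP(birth_txg=200, children=())])
reads = []
traverse(root, min_txg=100, reads=reads)
print("blocks read:", len(reads))     # 2 of 33 blocks actually read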