Stuart Anderson writes:
 > 
 > On Jun 21, 2009, at 10:21 PM, Nicholas Lee wrote:
 > 
 > > On Mon, Jun 22, 2009 at 4:24 PM, Stuart Anderson
 > > <ander...@ligo.caltech.edu> wrote:
 > >
 > > However, it is a bit disconcerting to have to run with reduced data
 > > protection for an entire week. While I am certainly not going back to
 > > UFS, it seems like it should be at least theoretically possible to do
 > > this several orders of magnitude faster, e.g., what if every block on
 > > the replacement disk had its RAIDZ2 data recomputed from the degraded
 > >
 > > Maybe this is also saying that, for large disk sets, a single RAIDZ2
 > > provides a false sense of security.
 > 
 > This configuration is with 3 large RAIDZ2 devices, but I have more
 > recently been building thumper/thor systems with a larger number of
 > smaller RAIDZ2's.
 > 
 > Thanks.
 > 

170M small files reconstructed in 1 week over 3 raid-z
groups works out to roughly 93 files/sec per raid-z group. That is
not too far from expectations for 7.2K RPM drives (were they?).
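
For a sanity check, that back-of-the-envelope in Python (the 170M
files, 1 week and 3 groups are the figures quoted above; the rest is
plain arithmetic):

    # Per-group resilver rate, from the numbers quoted above.
    files = 170e6               # small files reconstructed
    seconds = 7 * 86400         # one week
    groups = 3                  # raid-z groups in the pool
    print(files / seconds / groups)   # ~93.7 files/sec per group

At roughly 100 random I/Os per second, that is about the seek budget
of a single 7.2K RPM spindle.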

I don't see orders-of-magnitude improvements on this; however,
this CR (integrated in snv_109) might give the workload a boost:

        6801507 ZFS read aggregation should not mind the gap

This will enable more read aggregation to occur during a
resilver. We could also contemplate enabling the vdev
prefetch code for data during a resilver. 
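
To illustrate the idea behind that CR (a conceptual sketch only, not
the actual ZFS vdev-queue code): instead of aggregating only strictly
contiguous reads, nearby reads separated by a small gap get merged
into one larger I/O; the gap bytes are read and thrown away, which
is cheaper than paying an extra seek:

    # Sketch of gap-tolerant read aggregation. Hypothetical code,
    # not ZFS source; reads are (offset, size) pairs on one vdev.
    def aggregate(reads, max_gap):
        merged = []
        for off, size in sorted(reads):
            if merged:
                prev_off, prev_size = merged[-1]
                if off - (prev_off + prev_size) <= max_gap:
                    # Extend the previous I/O across the gap.
                    end = max(prev_off + prev_size, off + size)
                    merged[-1] = (prev_off, end - prev_off)
                    continue
            merged.append((off, size))
        return merged

    # Two 8K reads split by a 4K hole become one 20K read:
    print(aggregate([(0, 8192), (12288, 8192)], max_gap=16384))
    # -> [(0, 20480)]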

Otherwise, limiting the number of small objects per raid-z group,
as you're doing now, seems wise to me.
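
To first order (ignoring the pool-wide metadata walk), the resilver
window scales with the number of objects in the one degraded group,
so at the ~94 files/sec per-group rate computed above (the group
counts below are hypothetical):

    # Hypothetical: the same 170M files spread over more, smaller groups.
    rate = 94                          # files/sec per degraded group
    for groups in (3, 6, 12):
        days = 170e6 / groups / rate / 86400
        print(groups, round(days, 1))  # 3 -> 7.0, 6 -> 3.5, 12 -> 1.7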

-r


 > --
 > Stuart Anderson  ander...@ligo.caltech.edu
 > http://www.ligo.caltech.edu/~anderson
 > 

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
