Scott Lawson writes:
 > Also you may wish to look at the output of 'iostat -xnce 1' as well.
 > 
 > You can post those to the list if you have a specific problem.
 > 
 > You want to be looking for error counts increasing, and specifically at
 > 'asvc_t' for the per-disk service times. A higher asvc_t number can help
 > to isolate poorly performing individual disks.
 > 
 > 
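
The asvc_t check suggested above is easy to script. A minimal sketch,
assuming Solaris-style 'iostat -xn' output (the device names, numbers,
and the 30 ms threshold below are made up for illustration):

```shell
# Flag disks whose average service time (asvc_t, field 8) exceeds
# 30 ms.  On a live system, pipe `iostat -xn 1` into the awk filter
# instead of this inlined sample output.
awk 'NR > 2 && $8+0 > 30 { print $11, $8 " ms" }' <<'EOF'
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
  120.0   15.0 1500.0  200.0  0.0  1.2    0.0    4.8   0  35 c0t0d0
  118.0   14.0 1480.0  190.0  0.0  9.8    0.0   82.3   0  99 c0t1d0
EOF
```

Here c0t1d0 stands out with an 82 ms service time while its sibling
sits under 5 ms.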

I blast the pool with dd and look for drives that are
*always* active while others in the same group have
completed their transaction group and get no more activity.
Within a group, drives should be getting the same amount of
data per five-second interval (zfs_txg_synctime), and the
ones that are always active are the ones slowing you down.
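
The dd blast can be sketched like this; the target path below is a
stand-in for a file on the pool under test:

```shell
# Sequential-write blast.  While this runs, watch 'iostat -xn 1' (or
# 'zpool iostat -v 1') in another terminal: drives that stay busy
# after their vdev siblings go idle are the slow ones.
target="${TMPDIR:-/tmp}/zfs_ddtest"   # stand-in path; point it at the pool's mountpoint
dd if=/dev/zero of="$target" bs=1M count=16 2>/dev/null
ls -l "$target"
rm -f "$target"
```

(A real test would write far more than 16 MB, so the pool pushes
through several transaction groups while you watch.)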

If whole groups are unbalanced, that's a sign that they have
different amounts of free space; expect to be gated by the
speed of the group that needs to catch up.
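
Free-space imbalance between groups is visible in 'zpool list -v'.
A sketch against sample output (pool and vdev names and sizes below
are hypothetical):

```shell
# Print free space and capacity per top-level vdev.  On a live system,
# pipe 'zpool list -v' into the filter instead of this inlined sample.
awk '$1 ~ /^(raidz|mirror)/ { print $1, "free:", $4, "cap:", $5 }' <<'EOF'
tank        10.9T  6.2T  4.7T  56%  ONLINE
  raidz1-0  5.44T  4.1T  1.3T  75%  ONLINE
  raidz1-1  5.44T  2.1T  3.4T  38%  ONLINE
EOF
```

In this made-up sample, raidz1-1 has far more free space, so it
receives the bulk of new writes and the pool is gated by its speed
until the groups even out.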

-r

 > 
 > Scott Meilicke wrote:
 > > You can try:
 > >
 > > zpool iostat -v pool_name 1
 > >
 > > This will show you IO on each vdev at one second intervals. Perhaps you 
 > > will see different IO behavior on any suspect drive.
 > >
 > > -Scott
 > >   
 > 
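
To make that per-vdev view concrete, here is a sketch that picks out
the leaf-disk rows of 'zpool iostat -v' output (the sample block is
illustrative; names and numbers are made up):

```shell
# Leaf disks show '-' in the alloc column; compare their write
# bandwidth (field 7).  A drive far behind its vdev siblings is suspect.
awk '$2 == "-" { print $1, "write:", $7 }' <<'EOF'
tank        6.2T  4.7T    120    340  14.9M  41.2M
  raidz1-0  4.1T  1.3T     60    170   7.5M  20.6M
    c0t0d0      -     -     20     57   2.5M   6.9M
    c0t1d0      -     -     20     56   2.5M   1.1M
EOF
```

In this made-up sample, c0t1d0 is writing far less than c0t0d0, which
would make it the drive to examine first.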
 > 

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
