On Jan 12, 2010, at 2:54 PM, Ed Spencer wrote:

> We have a zpool made of four 512 GB iSCSI LUNs located on a network appliance.
> We are seeing poor read performance from the zfs pool. 
> The release of solaris we are using is:
> Solaris 10 10/09 s10s_u8wos_08a SPARC
> 
> The server itself is a T2000
> 
> I was wondering how we can tell if the zfs_vdev_max_pending setting is 
> impeding read performance of the zfs pool? (The pool consists of lots of 
> small files).

zfs_vdev_max_pending is the queue depth for each vdev. ZFS will issue
up to 35 concurrent I/Os to each vdev [1]. You can check the average queue
depth by watching the actv and wait columns in iostat -x output.
If actv + wait is well below 35, then don't worry about it.
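
For example, you could watch it with something like the following
(the 10-second interval is arbitrary; -z hides idle devices):

    # iostat -xnz 10

and look at the wait and actv columns for the iSCSI LUN devices.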

[1] Changed in build 127 to a range of 4-10 with a self-adjusting algorithm.

> And if it is impeding read performance, how do we go about finding a new 
> value for this parameter?

This can be changed on the fly, so it is reasonable to experiment.
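
For example, you could read the current value and try another one on a
live system with mdb (a sketch; 10 here is just an example value, not a
recommendation):

    # echo zfs_vdev_max_pending/D | mdb -k       (read current value)
    # echo zfs_vdev_max_pending/W0t10 | mdb -kw  (set it to decimal 10)

To make a value persist across reboots, add a line like this to /etc/system:

    set zfs:zfs_vdev_max_pending = 10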

> 
> Of course I may misunderstand this parameter entirely and would be quite 
> happy for a proper explanation!

This is rarely the cause of slow response for fast storage, such as a
NetApp filer.
 -- richard


