>>>> A 4-disk raidz group issues 128k/3 = ~42.6k I/O to each individual data
>>>> disk. If 35 concurrent 128k I/Os are enough to saturate a disk (vdev),
>>>> then 35*3 = 105 concurrent 42k I/Os will be required to saturate the
>>>> same disk.
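
To put that arithmetic in concrete terms, here is a minimal Python sketch of the split being described: one 128k logical block striped across the 3 data disks of a 4-disk raidz1 group, and the concurrency needed to keep each disk as busy as before. The "35 concurrent I/Os saturate a disk" figure is taken from the message above, not measured here.

# Minimal sketch of the raidz column arithmetic quoted above.
# The disk counts and the 35-I/O saturation figure come from the thread;
# they are illustrative, not measured.

RECORD_SIZE_KB = 128          # size of one logical ZFS block
RAIDZ_DISKS = 4               # disks in the raidz1 group
DATA_DISKS = RAIDZ_DISKS - 1  # raidz1: one disk's worth of parity

def per_disk_io_kb(record_kb, data_disks):
    """Size of the column each data disk services for one logical read."""
    return record_kb / data_disks

def concurrency_needed(saturating_ios, data_disks):
    """Concurrent logical reads needed so each disk still sees
    'saturating_ios' requests of its (smaller) column size."""
    return saturating_ios * data_disks

if __name__ == "__main__":
    col = per_disk_io_kb(RECORD_SIZE_KB, DATA_DISKS)
    print("each data disk services ~%.1f KB per 128 KB read" % col)
    print("concurrent 128 KB reads needed:", concurrency_needed(35, DATA_DISKS))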
>>>
>>> ZFS doesn't know anything about disk saturation. It will send up to
>>> vq_max_pending I/O requests per vdev (usually a vdev is a disk), and
>>> it will try to keep vq_max_pending I/O requests queued to the vdev.
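
As a rough illustration of that behaviour (not ZFS source code), the following Python sketch models a per-vdev queue that keeps at most vq_max_pending requests outstanding at the device and holds the rest in a software queue; the limit of 35 is the value being discussed in this thread, everything else is a simplification.

# Toy model of the per-vdev pending queue described above (illustrative only).
from collections import deque

class VdevQueue:
    def __init__(self, vq_max_pending=35):
        self.vq_max_pending = vq_max_pending
        self.waiting = deque()   # I/Os queued in software, not yet issued
        self.pending = set()     # I/Os currently outstanding at the device

    def submit(self, io_id):
        """Issue immediately if under the limit, otherwise queue."""
        if len(self.pending) < self.vq_max_pending:
            self.pending.add(io_id)
        else:
            self.waiting.append(io_id)

    def complete(self, io_id):
        """On completion, top the device back up from the software queue."""
        self.pending.discard(io_id)
        if self.waiting and len(self.pending) < self.vq_max_pending:
            self.pending.add(self.waiting.popleft())

if __name__ == "__main__":
    q = VdevQueue()
    for i in range(105):         # the 105 concurrent reads from the example
        q.submit(i)
    print(len(q.pending), "outstanding,", len(q.waiting), "waiting")
    # -> 35 outstanding, 70 waiting: "avg pending I/Os" pins at the limit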
>>
>> I can see the "avg pending I/Os" hitting my vq_max_pending limit, so
>> raising the limit would be a good thing. I think it's due to the many
>> 42k read I/Os going to each individual disk in the 4-disk raidz group.
>
> You're dealing with a queue here. iostat's average pending I/Os
> represents the queue depth. Some devices can't handle a large queue.
> In any case, queuing theory applies.
>
> Note that for reads, the disk will likely have a track cache, so it is
> not a good assumption that a read I/O will require a media access.
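
Since queuing theory was mentioned: Little's law ties the "avg pending I/Os" iostat reports to request rate and response time. A minimal sketch; the numbers below are made up purely for illustration.

# Little's law: average outstanding I/Os = arrival rate * response time.
def littles_law_depth(iops, response_ms):
    """Average number of I/Os outstanding at the device."""
    return iops * (response_ms / 1000.0)

if __name__ == "__main__":
    # e.g. 300 requests/s at 12 ms each keeps ~3.6 I/Os pending on average;
    # latency has to climb a long way before the queue pins at 35.
    print(littles_law_depth(300, 12.0))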
My workload issues around 5000 MB of read I/O, and iopattern says around 55%
of the I/O is random in nature.
I don't know how much prefetching through the track cache is going to help
here. Probably I can try disabling the vdev cache by setting
'zfs_vdev_cache_max' to 1.

Thanks
Manoj Nayak
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
