Manoj Nayak writes:
 > Hi All.
 > 
 > The ZFS documentation says ZFS schedules its I/O in such a way that it
 > manages to saturate a single disk's bandwidth using enough concurrent
 > 128K I/Os. The number of concurrent I/Os is decided by vq_max_pending.
 > The default value for vq_max_pending is 35.
 > 
 > We have created a 4-disk raid-z group inside a ZFS pool on a Thumper.
 > The ZFS record size is set to 128K. When we read/write a 128K record,
 > it issues a 128K/3 I/O to each of the 3 data disks in the 4-disk
 > raid-z group.
 > 
 > We need to saturate the bandwidth of all three data disks in the
 > raid-z group. Is it required to set vq_max_pending to 35*3 = 105?
 > 

Nope.

vq_max_pending is a per-vdev limit, so each of the three data disks in
the raid-z group already gets its own 35-deep queue. Once a disk
controller is working on 35 requests, we don't expect to get any more
out of it by queueing more requests, and we might even confuse the
firmware and get less.

Now, for an array controller and a vdev fronting a large number of
disks, 35 might be too low a number to allow full throughput. Rather
than tuning 35 up, we suggest splitting devices into smaller LUNs,
since each LUN is given its own 35-deep queue.
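If you do decide to adjust it, the knob behind vq_max_pending is the
global tunable zfs_vdev_max_pending. A minimal sketch, assuming a
Solaris/OpenSolaris build of that era (check the ZFS Evil Tuning Guide
for your release; the value 10 here is just an example):

In /etc/system (persistent, takes effect at the next boot):

    set zfs:zfs_vdev_max_pending = 10

Or on the running kernel with mdb (0t marks the value as decimal):

    echo zfs_vdev_max_pending/W0t10 | mdb -kw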

Tuning vq_max_pending down helps read and synchronous write
(ZIL) latency. Today the preferred way to help ZIL latency
is to use a Separate Intent Log.
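For example, a dedicated log device is attached with zpool add (the
pool name "tank" and device c4t0d0 are hypothetical):

    zpool add tank log c4t0d0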

-r


 > Thanks
 > Manoj Nayak
