> On Tue, 30 Jun 2009, Bob Friesenhahn wrote:
>
> Note that this issue does not apply at all to NFS service, database
> service, or any other usage which does synchronous writes.

I see read starvation with NFS too. I was running Iometer on a Windows VM,
connecting to an NFS mount on an OpenSolaris 2008.11 physical box. Iometer
parameters: 65% read, 60% random, 8 KB blocks, 32 outstanding I/O requests,
1 worker, 1 target.
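For reference, output in the format below is what a one-second zpool iostat
loop against the pool produces; the exact invocation is assumed, with the pool
name taken from the table:

# zpool iostat data01 1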

NFS Testing
                   capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data01      59.6G  20.4T     46     24   757K  3.09M
data01      59.6G  20.4T     39     24   593K  3.09M
data01      59.6G  20.4T     45     25   687K  3.22M
data01      59.6G  20.4T     45     23   683K  2.97M
data01      59.6G  20.4T     33     23   492K  2.97M
data01      59.6G  20.4T     16     41   214K  1.71M
data01      59.6G  20.4T      3  2.36K  53.4K  30.4M
data01      59.6G  20.4T      1  2.23K  20.3K  29.2M
data01      59.6G  20.4T      0  2.24K  30.2K  28.9M
data01      59.6G  20.4T      0  1.93K  30.2K  25.1M
data01      59.6G  20.4T      0  2.22K      0  28.4M
data01      59.7G  20.4T     21    295   317K  4.48M
data01      59.7G  20.4T     32     12   495K  1.61M
data01      59.7G  20.4T     35     25   515K  3.22M
data01      59.7G  20.4T     36     11   522K  1.49M
data01      59.7G  20.4T     33     24   508K  3.09M
data01      59.7G  20.4T     35     23   536K  2.97M
data01      59.7G  20.4T     32     23   483K  2.97M
data01      59.7G  20.4T     37     37   538K  4.70M

While writes are being committed to the ZIL all the time, the accumulated data
still gets flushed out to the pool periodically, and during those flushes reads
are starved. Maybe this doesn't happen in the 'real world'?
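
(For what it's worth, the txg flush interval and the per-vdev I/O queue depth
are kernel tunables; zfs_txg_timeout and zfs_vdev_max_pending are assumed here
to be the relevant names on this release, so verify them before relying on
this. They can be read from the live kernel with mdb:

# echo "zfs_txg_timeout/D" | mdb -k
# echo "zfs_vdev_max_pending/D" | mdb -k

Lowering zfs_vdev_max_pending, e.g.
"echo 'zfs_vdev_max_pending/W 0t10' | mdb -kw", is a sketch of one way to keep
a burst of queued writes from starving reads, not a tested fix.)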

-Scott