On Wed, 16 Jul 2008, Matthew Huang wrote:
> compared to their existing systems. However, the output of the I/O stress 
> test with iozone shows mixed results, as follows:
>
>   * The read performance sharply degrades (almost down to 1/20, i.e.
>     from 2,000,000 down to 100,000) when the file sizes are larger
>     than 256KBytes.

This issue is almost certainly client-side rather than server-side. 
The 256KByte threshold is likely the NFS buffer cache size (or possibly 
the overall filesystem cache size) on the client. To know for sure, run 
iozone directly on the server as well.
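
For example, run the same test against a local path on the server and 
then against the NFS mount on the client, and compare the two (paths 
and sizes below are only placeholders):

    # -i 0 = write/rewrite, -i 1 = read/re-read
    # On the server, against a local ZFS filesystem:
    iozone -i 0 -i 1 -s 512m -r 128k -f /tank/iozone.tmp

    # On the client, against the NFS mount of the same filesystem:
    iozone -i 0 -i 1 -s 512m -r 128k -f /mnt/nfs/iozone.tmp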

If tests directly on the server don't show a slowdown at the 256KByte 
threshold, then the abrupt slowdown is due to client caching combined 
with inadequate network transfer performance or excessive network 
latency.  If sequential read performance is important to you, then you 
should investigate NFS client tuning parameters (mount parameters) 
related to the amount of sequential read-ahead performed by the 
client.  If clients request an unnecessary amount of read-ahead, then 
network performance could suffer due to transferring data which is 
never used.  When using NFSv3 or later, TCP tuning parameters can be a 
factor as well.
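
On a Solaris client the knobs would be something along these lines; 
the parameter names are the usual ones, but the values are purely 
illustrative, so treat this as a sketch rather than a recommendation:

    # NFS mount options: force v3 over TCP and set the transfer sizes
    mount -F nfs -o vers=3,proto=tcp,rsize=32768,wsize=32768 \
        server:/tank /mnt/nfs

    # NFSv3 client read-ahead count (nfs4_nra for v4); set in
    # /etc/system, takes effect after a reboot, and can be raised
    # or lowered:
    set nfs:nfs3_nra = 4

    # TCP window sizes, which matter on higher-latency links:
    ndd -set /dev/tcp tcp_recv_hiwat 400000
    ndd -set /dev/tcp tcp_xmit_hiwat 400000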

You can expect ZFS read performance on the server to slow down once the 
ZFS ARC size becomes significant compared to the amount of installed 
memory on the server.  For re-reads, if the file is larger than the ARC 
can grow, then ZFS needs to go to disk rather than serve the data from 
its cache.
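
You can watch this directly on the server; for instance (assuming the 
standard arcstats kstat names):

    # Current ARC size and its target maximum, in bytes:
    kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max

    # Installed memory, for comparison:
    prtconf | grep Memory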

Do an ftp transfer from the server to the client.  A well-tuned NFS 
mount should be at least as fast as that.
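
For instance, on the client (the file name is just a placeholder; the 
transfer rate ftp reports is a rough upper bound on what NFS reads can 
achieve):

    ftp server
    ftp> bin
    ftp> get bigfile /dev/null
    ftp> bye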

Bob
