I have a ZFS-based NFS server (Solaris 10 U4 on x86) where I am seeing
a weird performance degradation as the number of simultaneous sequential
reads increases.

 Setup:
        NFS client -> Solaris NFS server -> iSCSI target machine

 There are 12 physical disks on the iSCSI target machine. Each of them
is sliced up into 11 parts and the parts exported as individual LUNs to
the Solaris server. The Solaris server uses each LUN as a separate ZFS
pool (giving 132 pools in total) and exports them all to the NFS client.

(The NFS client and the iSCSI target machine are both running Linux.
The Solaris NFS server has 4 GB of RAM.)
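 For concreteness, a minimal sketch of the layout; the disk, LUN, and pool
names below are made up, since the real device paths don't matter for the
question:

# Sketch of the layout (hypothetical names): 12 physical disks, each
# exported as 11 iSCSI LUNs, each LUN used as its own single-device
# ZFS pool on the Solaris server.
NUM_DISKS = 12
SLICES_PER_DISK = 11

pools = {}
for disk in range(NUM_DISKS):
    for part in range(SLICES_PER_DISK):
        lun = "disk%02d-part%02d" % (disk, part)    # one LUN per disk slice
        pools["pool-d%02d-p%02d" % (disk, part)] = lun

print(len(pools))    # 12 * 11 = 132 pools, all NFS-exported to the client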

 When the NFS client starts a sequential read against one filesystem
from each physical disk, the iSCSI target machine and the NFS client
both use the full network bandwidth and each individual read gets
1/12th of it (about 9.something MBytes/sec). Starting a second set of
sequential reads against each disk (to a different pool) behaves the
same, as does starting a third set.

 However, when I add a fourth set of reads, things change: while the
NFS server continues to read from the iSCSI target at full speed, the
data rate to the NFS client drops significantly. By the time I hit
9 reads per physical disk, the NFS client is getting a *total* of 8
MBytes/sec.  In other words, it seems that ZFS on the NFS server is
somehow discarding most of what it reads from the iSCSI disks, although
I can't see any sign of this in 'vmstat' output on Solaris.

 Also, this may not be just an NFS issue; in limited testing with local
I/O on the Solaris machine, I seem to be seeing the same effect at roughly
the same magnitude.

(The testing is limited because it is harder to accurately measure the
aggregate data rate I'm getting and harder to run that many simultaneous
reads; if I run too many of them, the Solaris machine locks up from
overload.)
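
 (For illustration only, here is a rough sketch of the sort of aggregate
measurement I mean: run one sequential reader per file and report the
combined rate. The file paths are placeholders and this is simplified
rather than exactly what I ran.)

# Minimal sketch (hypothetical paths): start one sequential reader per
# file named on the command line and report the aggregate read rate.
import sys, time, threading

BLOCKSIZE = 128 * 1024          # read in 128 KB chunks
results = []                    # bytes read by each stream

def reader(path):
    count = 0
    f = open(path, 'rb')
    while True:
        data = f.read(BLOCKSIZE)
        if not data:
            break
        count += len(data)
    f.close()
    results.append(count)

paths = sys.argv[1:]            # e.g. one large file from each pool under test
start = time.time()
threads = [threading.Thread(target=reader, args=(p,)) for p in paths]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
print("%d streams, %.1f MB/sec aggregate" %
      (len(paths), sum(results) / elapsed / (1024 * 1024)))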

 Does anyone have any ideas about what might be going on here, and how I
might tune things on the Solaris machine so that it performs
better in this situation (ideally without harming performance under
smaller loads)? Would partitioning the physical disks on Solaris instead
of splitting them up on the iSCSI target make a significant difference?

 Thanks in advance.

        - cks