Hi,

I'm trying to measure the performance of the NFS client and server in
Solaris 10. I have two machines, both running Solaris 10u2. One
machine is an x86 V20z and the other is an UltraSPARC Sun Fire V240.
The systems are connected over gigabit Ethernet with an HP ProCurve
switch in between (gigabit the whole way).

Both have storage arrays attached. The V240 has a 6140 array with a
ZFS pool on it; I have measured local read and write performance at
over 200MB/s. I've disabled the ZIL (zil_disable) on this pool.

The V20z has a generic SATA array, and I've measured sustained write
performance above 100MB/s on it.
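For what it's worth, the local numbers came from plain sequential dd
runs along these lines (the path and sizes here are illustrative, not
my real ones; for a meaningful result the file should be larger than
RAM so the ARC/page cache doesn't serve the reads):

```shell
# Sequential write test: stream zeros to a file on the pool.
# (Scaled-down placeholder size -- use a file bigger than RAM in practice.)
dd if=/dev/zero of=/tmp/ddtest bs=128k count=256

# Sequential read test: stream the same file back to /dev/null.
dd if=/tmp/ddtest of=/dev/null bs=128k

# Clean up the test file.
rm /tmp/ddtest
```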

I've created an 8GB file on the 6140 and I'm now trying to read and
write it from the V20z over NFS. I've tried combinations of NFSv3 and
NFSv4, but I get really poor performance. Reading the file with the
following command:

$ dd if=/mnt/bigfile of=/dev/null bs=128k

gives me a read speed of around 35-40MB/s, which is disappointing.
But now for the weird part:

$ dd if=/dev/zero of=/mnt/bigfile.1 bs=128k

from the V20z to the V240 gives me a write speed of 80MB/s. So my
problem is that over NFS I get a read speed of 40MB/s, yet with the
same options I get a write speed of 80MB/s. I'm having a really hard
time understanding this, and I was hoping someone here could shed
some light in this very dark tunnel.
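In case it helps, one experiment I can describe is remounting with
explicit transfer sizes and then confirming what the client actually
negotiated (the server name and export path below are placeholders,
not my real ones):

```shell
# Hypothetical remount with NFSv3 and explicit rsize/wsize:
mount -F nfs -o vers=3,rsize=32768,wsize=32768 v240:/pool/fs /mnt

# Show the options the client actually negotiated for this mount
# (protocol version, rsize/wsize, timeouts):
nfsstat -m /mnt
```

These are just the standard Solaris mount_nfs options; I don't know
yet whether they make any difference here.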

What could cause read speed to be so much slower than write?

cheers,
Nickus


-- 
Have a look at my blog for sysadmins!
http://aspiringsysadmin.com
