On Mon, Aug 23, 2010 at 12:53 AM, Nikolai Schupbach <[email protected]> wrote:
> We are doing some performance testing on a new system. We have an
> OpenSolaris NFS server sharing a folder on a ZFS filesystem and a FreeBSD
> 8.1 NFS client. The machines are directly connected using 10GbE (no switch
> in between).
>
> Below are the performance figures we attained when doing simple 10GB dd
> write (dd if=/dev/zero of=/mnt/file.tmp bs=1M count=10240) and read (dd
> of=/dev/null if=/mnt/file.tmp) tests over NFS from the FreeBSD client using
> various mount options.
>
> We performed these tests numerous times, and all results are roughly the
> same for each test. We have tuned kern.ipc.maxsockbuf,
> net.inet.tcp.recvspace, and net.inet.tcp.sendspace. This didn't result in
> any significant differences in the test results. Both the NFS client and
> server NICs have the MTU set to 9000; this improves performance noticeably.
>
> Currently it appears that sticking with the stable NFSv3 code yields the
> best results. Both NFSv3 and NFSv4 with the newnfs code have disappointing
> performance. We installed Linux on the client machine as a test, and
> unfortunately Linux has the best performance by far.
>
> Are there any other options we can use to improve the performance of NFSv4
> for large sequential writes and reads?

You could try a UDP mount; it should help a little. Have you measured your NIC's performance to see whether that is the bottleneck? Perhaps the driver is subpar.

-- 
Adam Vande More

_______________________________________________
[email protected] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[email protected]"
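[For reference, the tuning steps discussed in the thread can be sketched roughly as below. The buffer sizes, rsize/wsize values, interface name (ix0), and export path are illustrative assumptions, not values confirmed by the original posters.]

```shell
# FreeBSD client: enlarge socket buffers for 10GbE (example values only)
sysctl kern.ipc.maxsockbuf=16777216
sysctl net.inet.tcp.recvspace=4194304
sysctl net.inet.tcp.sendspace=4194304

# Jumbo frames, as mentioned in the post (ix0 is a placeholder NIC name;
# the server-side interface must be set to MTU 9000 as well)
ifconfig ix0 mtu 9000

# NFSv3 mount over TCP with large transfer sizes (rsize/wsize assumed)
mount_nfs -o nfsv3,tcp,rsize=65536,wsize=65536 server:/tank/share /mnt

# The sequential write and read tests from the original post
dd if=/dev/zero of=/mnt/file.tmp bs=1M count=10240
dd of=/dev/null if=/mnt/file.tmp
```

Adam's suggestion of a UDP mount would swap the tcp option for udp in the mount_nfs line; with jumbo frames this avoids some TCP overhead, at the cost of NFS-level retransmits on a lossy link.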
