Hi,

As promised, here's the parameter tuning I have on the box (does anyone see
anything wrong?)

/boot/loader.conf

kern.hz="100"
vm.kmem_size_max="1536M"
vm.kmem_size="1536M"
vfs.zfs.prefetch_disable="1"
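
After a reboot, the values can be read back to confirm the loader actually
picked them up:

  sysctl kern.hz vm.kmem_size vm.kmem_size_max vfs.zfs.prefetch_disable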

/etc/sysctl.conf

kern.ipc.maxsockbuf=16777216
kern.ipc.nmbclusters=32768
kern.ipc.somaxconn=8192
kern.maxfiles=65536
kern.maxfilesperproc=32768
kern.maxvnodes=600000
net.inet.tcp.delayed_ack=0
net.inet.tcp.inflight.enable=0
net.inet.tcp.path_mtu_discovery=0
net.inet.tcp.recvbuf_auto=1
net.inet.tcp.recvbuf_inc=16384
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.recvspace=65536
net.inet.tcp.rfc1323=1
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.sendbuf_inc=8192
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.sendspace=65536
net.inet.udp.maxdgram=57344
net.inet.udp.recvspace=65536
net.local.stream.recvspace=65536
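
Entries in /etc/sysctl.conf are applied at boot; to apply them to a running
system without rebooting, either set each one by hand or (assuming a stock rc
setup) rerun the sysctl rc script, which rereads the file:

  sysctl net.inet.tcp.delayed_ack=0   # one knob at a time
  /etc/rc.d/sysctl start              # or reapply all of /etc/sysctl.conf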





________________________________
From: Paul Patterson <pathia...@yahoo.com>
To: freebsd-performance@freebsd.org
Sent: Thursday, December 18, 2008 8:04:37 PM
Subject: ZFS, NFS and Network tuning

Hi,

I just set up my first machine with ZFS.  (First off, ZFS is nothing short of
amazing.)  I'm running FreeBSD 7.1-RC1 as an NFS server with ZFS striped across
two volumes (just testing throughput for now).  Anyhow, I was benching this
box: 4GB of RAM, the volume on 2x146GB 10K RPM SAS drives, in an HP ProLiant
DL360 with dual Gb interfaces. (device bce)

Now, I believe that I have tuned this box to the hilt with all the parameters 
that I can think of (it's at work right now so I'll cut and paste all the 
sysctls and loader.conf parameters for ZFS and networking) and it still seems 
to have some type of bottleneck.

I have two Debian Linux clients that I use to bench with.  I run a script that
writes to the NFS mount and, after about 30 minutes, starts to delete the
initial data, following behind: writing and deleting at the same time.
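
For reference, the loop is shaped roughly like this (a sketch only; the file
names, sizes, and how far the deleter trails the writer are illustrative, not
my exact script):

  #!/bin/sh
  # write a stream of small (<30KB) files; once enough have
  # accumulated, delete the oldest while continuing to write
  i=0
  while :; do
      dd if=/dev/zero of=/mnt/nfs/f$i bs=1k count=30 2>/dev/null
      j=$((i - 100000))                    # trail well behind the writer
      [ $j -ge 0 ] && rm -f /mnt/nfs/f$j   # delete-behind once warmed up
      i=$((i + 1))
  done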

Here's what's happening.  The "other" machine is a NetApp.  It's got 1GB of RAM
and runs RAID-DP with 2 parity drives and 6 data drives, all 750GB 7200 RPM
SATA drives, with dual Gb interfaces.

The benchmark script manages to write lots of little files (all less than 30KB)
at a rate of 11,000 per minute; however, after 30 minutes, when it starts
deleting, write throughput drops to 9,500 per minute and deletion runs at 6,000
per minute.  If I turn on the second node, I get 17,000 writes per minute
combined, with about 11,000 deletions per minute combined.  Either way,
deletion can't keep up with writing, so the volume will fill up in time.  Not
good.

Now, on to my pet project. :-)  The FreeBSD/ZFS server is only able to maintain
about 3,500 writes per minute, but it also deletes at the same rate!  (I would
expect deletion to be at least as fast as writing.)  The drives are only 20-35%
busy while this is going on, putting down only about 4-5 MB/sec each.  So, with
1Gb Ethernet at roughly 92MB/sec usable maximum (is that about right?), there's
something wrong somewhere.  I'm assuming it's the network.  (I'll post all the
tunings tomorrow.)
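
To rule the raw network in or out, a plain TCP throughput test between one
client and the server would help; e.g. with iperf (hostname is a placeholder):

  iperf -s                    # on the FreeBSD server
  iperf -c nfs-server -t 30   # on a Debian client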

Thinking something was wrong, I mounted only one client to each server (the
clients are identical, with the same configuration as the FreeBSD box).  I ran
a simple stream of:  dd if=/dev/zero of=/mnt/nfs bs=1m count=1000.  The FreeBSD
box wins?!  It cranked the drives up to 45-50 MB/sec each and balanced them
perfectly on transactions/sec, KB/sec, etc. in systat -vm.  (Woohoo!)  The
NetApp's CPU was at over 35-40% constantly.  (It does that while benching, too.)

I'll post the NetApp findings tomorrow, as I don't have them with me right now.

As for the client mounting, it was with the options:  
nfsvers=3,rsize=32768,wsize=32768,hard,intr,async,noatime
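
That is, from a Debian client the full mount command looks something like this
(server name and export path are placeholders):

  mount -t nfs -o nfsvers=3,rsize=32768,wsize=32768,hard,intr,async,noatime \
      nfs-server:/tank /mnt/nfs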

I'm trying to figure out why, when running this benchmark, the NetApp with WAFL
can nearly triple the throughput of the FreeBSD/ZFS box.

Also, something strange happens when I try to mount the disk from the FreeBSD
server versus the NetApp: the FreeBSD mount will sometimes hit an RPC timeout,
while mounting the NetApp is instantaneous.
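
To narrow down where the mount stalls, it may help to check from a client that
the server's RPC services answer promptly (hostname is a placeholder):

  rpcinfo -p nfs-server     # portmapper, mountd and nfs should all be listed
  showmount -e nfs-server   # the export list should come back immediately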

That's the beginning.  If anyone has a list of things to check, I'll run
through it tomorrow.  I'd like to see the little machine that could kick the
NetApp's butt.  (No offense to NetApp. :-) )

Thank you for reading,

Paul


      