On Thu, Sep 13, 2012 at 12:14:00PM -0700, Marion Hakanson wrote:
> [email protected] said:
> > I'm initiating a copy of the tree from the client by doing a good 'ol:
> > # cd /mnt/path
> > # tar cf - . | (cd /path/dst ; tar xvf -)
> > I'm only getting around 200MB/sec (~1.6Gbps). Does this sound like an
> > expected ceiling?
>
> Please accept my apologies if this seems too obvious, or if you've
> already been down this road:
>
> Do your R720's have any kind of NVRAM cache in effect for the ZFS
> pools? E.g. an H800 RAID controller, or a separate ZFS log device
> (a.k.a. "dedicated ZIL" device)?
>
> Without one of the above, the "tar xvf" workload over NFS is going
> to be hit by the synchronous write delays (and subsequent ZFS cache
> flushes) for each and every directory creation, etc. The performance
> penalty can be huge, especially when un-tar-ing lots of tiny files,
> directories, and subdirectories.
>
> You might test if this is what's going on by temporarily disabling
> the ZIL on your NFS servers. I believe "zfs set sync=disabled" will
> do the trick when applied to the dataset(s) involved.
>
> And/or, use "zilstat" to watch ZIL traffic before/during/after your tests.
>
> Regards,
>
> Marion
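(For anyone following along, the test Marion suggests would presumably look
something like the sketch below; "tank/export" is only a placeholder for
whichever dataset backs the NFS export, and the exact zilstat invocation
depends on the version of the script and where it is installed:

  # zfs get sync tank/export              # note the current setting
  # zfs set sync=disabled tank/export     # relax sync semantics for the test only
  (re-run the tar copy over NFS and compare throughput)
  # zfs set sync=standard tank/export     # restore the default when done

and, on the server in a second terminal while the copy runs:

  # zilstat 1                             # watch per-interval ZIL activity

Keep in mind that sync=disabled means acknowledged-but-unflushed writes can
be lost if the server crashes, so it should only be used as a diagnostic.)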
Thanks, Marion. This could be a contributing factor, and it may be worth
testing, but I will mention that each of these R720's is connected to a
zpool made up of a couple hundred SAS spindles, and each R720 has ~192GB of
memory. So even if we don't have an external log device per se, I feel
fairly confident that the RAM-based ZIL is handling synchronous writes
efficiently.

In addition, I wasn't sure how effective an external log device would be at
absorbing huge streams of sequential writes. It seems like it would just
fill up and you'd eventually be constrained by your spindles anyway.

Thanks again!

Ray
_______________________________________________
nfs-discuss mailing list
[email protected]
