In my own experiments with my own equivalent of mbuffer, it's well worth giving the receiving side a buffer sized to hold the amount of data in one transaction commit, which lets ZFS be banging out one tx group to disk whilst the network brings the next one across. That works out at roughly the link speed in bytes/second x 5 (the tx group commit interval being 5 seconds), plus a bit more for good measure, say 250-300Mbytes for a gigabit link. It seems to matter most when the disks and the network link have similar maximum theoretical bandwidths (100Mbytes/sec is roughly what you might expect from both gigabit ethernet and reasonable disks), and it becomes less important as the gap between them widens. Without the buffer, you tend to see the network run flat out for 5 seconds, then the receiving disks run flat out for 5 seconds, alternating back and forth; with the buffer, both keep streaming at full gigabit speed without a break.
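
For concreteness, a minimal sketch of the receive side using mbuffer itself (the port number, dataset name, and 128k block size are placeholders of mine; -m sets the total buffer size and -I listens on a TCP port):

    # receiving host: ~300MB buffer for a gigabit link, per the sizing above
    mbuffer -s 128k -m 300M -I 9090 | zfs receive tank/backup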

I have not seen any benefit from buffering on the sending side, although I'd still be inclined to include a small one.
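
The matching send side might then look like this, again only a sketch (the snapshot and hostname are placeholders, and the 16M buffer is just the small one mentioned above; -s must match the receiver's block size):

    # sending host: small buffer, streaming to the receiver's listening port
    zfs send tank/fs@snap | mbuffer -s 128k -m 16M -O receiver:9090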

YMMV...


Palmer, Trey wrote:
We have found mbuffer to be the fastest solution.   Our rates for large 
transfers on 10GbE are:

280MB/s    mbuffer
220MB/s    rsh
180MB/s    HPN-ssh unencrypted
 60MB/s    standard ssh

The tradeoff is that mbuffer is a little more complicated to script; rsh is, well, you know; and HPN-ssh requires rebuilding ssh and (probably) maintaining a second copy of it.
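
For reference, the "standard ssh" baseline at the bottom of that table is presumably the usual one-liner (dataset and host names here are placeholders):

    zfs send tank/fs@snap | ssh receiver zfs receive tank/backup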

--
Andrew Gabriel
