On Wed, Oct 31, 2012 at 8:44 PM, Richard Elling <[email protected]> wrote:
> > > On the target system I am seeing writes up to
> > > 160 MB/s with frequent zpool iostat probes. When iostat probes are up to
> > > 5s+, there is a steady stream of 62 MB/s.
> >
> > I believe this *may* mean that your networking buffer receives data
> > into memory (ZFS cache) at 62 MB/s, then every 5s the dirty cache
> > is sent to disks during TXG commit at whatever speed it can burst
> > (160 MB/s in your case).
>
> More likely: a straight pipe send | receive is a blocking configuration. This
> is why most people who go for high-speed send | receive use a buffer,
> such as mbuffer, to smooth out the performance. Check the archives;
> this has been rehashed hundreds of times on these aliases.

Thank you very much for rehashing it again. I stuck

    | mbuffer -b 8192 -m 256M -q 2> /dev/null |

in the middle of my send | ssh recv pipe (some preliminary testing seemed to indicate it wanted an 8192-byte block size for pipes, and when run from cron it produces an odd warning message) and was rewarded with this over gigabit Ethernet:

    received 8.63GB stream in 89 seconds (99.3MB/sec)

Previously I was getting 70 MB/s or less, even after switching to the arcfour128 ssh cipher. My only gripe is that mbuffer doesn't have a manpage on OpenIndiana.

Tim
_______________________________________________
OpenIndiana-discuss mailing list
[email protected]
http://openindiana.org/mailman/listinfo/openindiana-discuss
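[Editor's note: for readers wanting the full pipeline, a sketch of the kind of command Tim describes is below. The pool, dataset, snapshot, and host names are hypothetical placeholders; only the mbuffer invocation and the overall send | mbuffer | ssh recv shape come from the thread.]

    # Hypothetical end-to-end example of the pipeline discussed above.
    # "tank/data@snap1", "backuphost", and "backup/data" are made-up names;
    # substitute your own. mbuffer sits between the blocking send and the
    # ssh transport, absorbing TXG-commit bursts so the network stays busy.
    zfs send tank/data@snap1 \
      | mbuffer -b 8192 -m 256M -q 2> /dev/null \
      | ssh -c arcfour128 backuphost zfs receive backup/data

Here -b 8192 sets mbuffer's block size (the value Tim found pipes wanted), -m 256M sizes the in-memory buffer, and -q suppresses the progress display, which matters when the job runs from cron.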
