On Wed, May 4, 2011 at 9:04 PM, Brandon High <bh...@freaks.com> wrote:

> On Wed, May 4, 2011 at 2:25 PM, Giovanni Tirloni <gtirl...@sysdroid.com>
> wrote:
> >   The problem we've started seeing is that a zfs send -i is taking hours
> > to send a very small amount of data (e.g. 20GB in 6 hours), while a full
> > zfs send transfers everything faster than the incremental (40-70MB/s).
> > Sometimes we just give up on sending the incremental and send a full
> > altogether.
>
> Does the send complete faster if you just pipe to /dev/null? I've
> observed that if recv stalls, it'll pause the send, and the two go
> back and forth stepping on each other's toes. Unfortunately, send and
> recv tend to pause with each individual snapshot they are working on.
>
> Putting something like mbuffer
> (http://www.maier-komor.de/mbuffer.html) in the middle can help smooth
> it out and speed things up tremendously. It prevents the send from
> pausing when the recv stalls, and allows the recv to continue working
> when the send is stalled. You will have to fiddle with the buffer size
> and other options to tune it for your use.
>


We've done various tests piping it to /dev/null and then transferring the
files to the destination. What seems to stall is the recv, because it doesn't
complete (through mbuffer, ssh, locally, etc.). The zfs send always completes
at the same rate.
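
Concretely, the tests look something like this (the dataset, snapshot and
file names below are just placeholders, not our actual ones):

  # time the incremental send with the network and recv out of the picture
  time zfs send -i tank/data@snap1 tank/data@snap2 > /dev/null

  # or save the stream to a file so it can be copied to the destination
  zfs send -i tank/data@snap1 tank/data@snap2 > /tmp/incr-snap1-snap2.zfs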

Mbuffer is being used but doesn't seem to help. When things start to stall,
the in/out buffers quickly fill up and nothing gets sent, probably because
the mbuffer on the other side can't receive any more data until zfs recv
gives it some room to breathe.
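
The pipeline itself is the usual mbuffer pair on each end of the wire,
roughly like this (hostnames, dataset names, buffer sizes and the port are
placeholders, not our exact invocation):

  # receiving side: listen on a TCP port and feed the stream into zfs recv
  mbuffer -s 128k -m 1G -I 9090 | zfs recv -F tank/backup

  # sending side: push the incremental stream to the receiver through mbuffer
  zfs send -i tank/data@snap1 tank/data@snap2 | mbuffer -s 128k -m 1G -O receiver:9090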

What I find curious is that it only happens with incrementals. Full sends
go as fast as possible (monitored with mbuffer). I was just wondering if
other people have seen this, whether there is a bug (b111 is quite old), etc.
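
In case anyone wants to reproduce the comparison, it boils down to something
like this (again with placeholder names), using stream files already copied
to the destination box:

  # receiving a full stream into a fresh dataset runs at full speed for us (40-70MB/s)
  time zfs recv tank/scratch < /tmp/full-snap2.zfs

  # the matching incremental into the existing copy crawls (e.g. 20GB in 6 hours)
  time zfs recv -F tank/backup < /tmp/incr-snap1-snap2.zfs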

-- 
Giovanni Tirloni