On Mon, Feb 28, 2011 at 10:38 PM, Moazam Raja <moa...@gmail.com> wrote:
> We've noticed that on systems with just a handful of filesystems, ZFS
> send (recursive) is quite quick, but on our 1800+ fs box, it's
> horribly slow.

When doing an incremental send, the system has to identify which blocks
have changed, and that can take some time. If not much data has changed,
that delay can be longer than the send itself.

I've noticed that there's a small delay when starting the send of a new
snapshot and again when starting its receive. Putting something like
mbuffer in the path helps to smooth things out. It won't help in the
example you've cited below, but it will help in real world use.

> The other odd thing I've noticed is that during the 'zfs send' to
> /dev/null, zpool iostat shows we're actually *writing* to the zpool at
> the rate of 4MB-8MB/s, but reading almost nothing. How can this be the
> case?

The writing seems odd, but the lack of reads doesn't. Most or all of
the data being sent may already be in the ARC or L2ARC, in which case
the pool's disks don't need to be read at all.
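You can check whether the ARC is absorbing the reads by watching the
arcstats kstats while the send runs (Solaris/OpenSolaris kstat names;
the exact instance may differ on your build):

```shell
# A high hits-to-misses ratio during the send means the stream is
# being served from cache rather than from the pool's disks.
kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses
```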

> 1.) Does ZFS get immensely slow once we have thousands of filesystems?

No. Incremental sends might take longer, as I mentioned above.

> 2.) Why do we see 4MB-8MB/s of *writes* to the filesystem when we do a
> 'zfs send' to /dev/null ?

Is anything else using the filesystems in the pool?
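One way to narrow it down is to watch per-device and per-filesystem
activity while the send runs (pool and mountpoint names below are
placeholders for yours):

```shell
# Per-vdev breakdown: shows which devices the writes land on.
zpool iostat -v tank 5

# Per-filesystem activity: shows which dataset is doing the writing.
fsstat /tank/somefs 5
```

If the writes show up on a dataset unrelated to the send, something
else on the box is responsible for them.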

-B

-- 
Brandon High : bh...@freaks.com
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss