Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-05 Thread Brandon High
On Thu, May 5, 2011 at 11:17 AM, Giovanni Tirloni  wrote:
> What I find curious is that it only happens with incrementals. Full
> sends go as fast as possible (monitored with mbuffer). I was just
> wondering if other people have seen it, whether there is a bug (b111
> is quite old), etc.

I missed that you were using b111 earlier. That's probably a large
part of the problem. There were a lot of performance and reliability
improvements between b111 and b134, and there have been more between
b134 and b148 (OI) or b151 (S11 Express).

Updating the host you're receiving on to something more recent may fix
the performance problem you're seeing.

Fragmentation shouldn't be too great an issue if the pool you're
writing to is relatively empty. There were changes made to zpool
metaslab allocation post-b111 that might improve performance for pools
between 70% and 96% full. This could also be why the full sends
perform better than incremental sends.
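If you want to check where the destination pool sits, a plain zpool
list shows its capacity; anything consistently above ~70% full is
where the older allocator could start to hurt (the pool name below is
just an example):

  zpool list backup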

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-05 Thread Paul Kraus
On Thu, May 5, 2011 at 2:17 PM, Giovanni Tirloni  wrote:

> What I find curious is that it only happens with incrementals. Full
> sends go as fast as possible (monitored with mbuffer). I was just
> wondering if other people have seen it, whether there is a bug (b111
> is quite old), etc.

I have been using zfs send / recv via ssh over a WAN connection to
replicate about 20 TB of data: one initial full send followed by an
incremental every 4 hours. This has been going on for over a year and
I have not had any reliability issues. I started on Solaris 10U6, then
10U8, and now 10U9.

I did run into a bug early on where, if the ssh connection failed, the
zfs recv would hang, but that was fixed ages ago.
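
For reference, the pipeline itself is nothing fancy. A sketch of what
each 4-hour pass looks like (dataset, snapshot, and host names here
are illustrative, not our production names):

  # take the new snapshot, then send the delta since the previous one
  zfs snapshot tank/data@2011-05-05-1200
  zfs send -i tank/data@2011-05-05-0800 tank/data@2011-05-05-1200 | \
      ssh backuphost zfs recv -F backup/data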

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players


Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-05 Thread Giovanni Tirloni
On Wed, May 4, 2011 at 9:04 PM, Brandon High  wrote:

> On Wed, May 4, 2011 at 2:25 PM, Giovanni Tirloni  wrote:
> >   The problem we've started seeing is that a zfs send -i is taking
> > hours to send a very small amount of data (e.g. 20GB in 6 hours)
> > while a full zfs send transfers everything faster than the
> > incremental (40-70MB/s). Sometimes we just give up on sending the
> > incremental and send a full altogether.
>
> Does the send complete faster if you just pipe to /dev/null? I've
> observed that if recv stalls, it'll pause the send, and the two go
> back and forth stepping on each other's toes. Unfortunately, send and
> recv tend to pause with each individual snapshot they are working on.
>
> Putting something like mbuffer
> (http://www.maier-komor.de/mbuffer.html) in the middle can help smooth
> it out and speed things up tremendously. It prevents the send from
> pausing when the recv stalls, and allows the recv to continue working
> when the send is stalled. You will have to fiddle with the buffer size
> and other options to tune it for your use.
>


We've done various tests piping it to /dev/null and then transferring
the files to the destination. What seems to stall is the recv, because
it doesn't complete (through mbuffer, ssh, locally, etc.). The zfs send
always completes at the same rate.
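
(For anyone who wants to repeat the isolation test, a sketch of timing
the send alone, with mbuffer just buffering and reporting throughput;
dataset and snapshot names are placeholders:)

  # time the incremental send with no receiver in the picture
  time zfs send -i tank/xen@snap1 tank/xen@snap2 | \
      mbuffer -s 128k -m 1G > /dev/null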

Mbuffer is being used but doesn't seem to help. When things start to
stall, the in/out buffers quickly fill up and nothing gets sent,
probably because the mbuffer on the other side can't accept any more
data until the zfs recv gives it some room to breathe.

What I find curious is that it only happens with incrementals. Full
sends go as fast as possible (monitored with mbuffer). I was just
wondering if other people have seen it, whether there is a bug (b111 is
quite old), etc.

-- 
Giovanni Tirloni


Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-04 Thread Randy Jones

On 05/03/11 22:45, Rich Teer wrote:

> True, but the SB1000 only supports 2GB of RAM IIRC!  I'll soon be

Actually, you can get up to 16GB of RAM in a SB1000 (or SB2000). The
4GB DIMMs are most likely not too common, but the 1GB and 2GB DIMMs
seem to be. At one time Dataram, and maybe Kingston, made 4GB DIMMs
for the SB1000 and SB2000. And don't forget you can also put the
1.2GHz processors in it.

Even with all that, it is still not even close to the speed of the
U20 M2 you mention below. At least as a workstation...


> migrating this machine's duties to an Ultra 20 M2.  A faster CPU
> and 4 GB should make a noticeable improvement (not to mention, on
> board USB 2.0 ports).

You can also get up to 8GB of RAM in the U20 M2.

> Thanks for your ideas!




--
Randy Jones
E-Mail: ra...@jones.tri.net


Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-04 Thread Brandon High
On Wed, May 4, 2011 at 2:25 PM, Giovanni Tirloni  wrote:
>   The problem we've started seeing is that a zfs send -i is taking
> hours to send a very small amount of data (e.g. 20GB in 6 hours) while
> a full zfs send transfers everything faster than the incremental
> (40-70MB/s). Sometimes we just give up on sending the incremental and
> send a full altogether.

Does the send complete faster if you just pipe to /dev/null? I've
observed that if recv stalls, it'll pause the send, and the two go
back and forth stepping on each other's toes. Unfortunately, send and
recv tend to pause with each individual snapshot they are working on.

Putting something like mbuffer
(http://www.maier-komor.de/mbuffer.html) in the middle can help smooth
it out and speed things up tremendously. It prevents the send from
pausing when the recv stalls, and allows the recv to continue working
when the send is stalled. You will have to fiddle with the buffer size
and other options to tune it for your use.
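
A minimal sketch of the usual arrangement (host names, pool names, and
buffer sizes here are examples you'd need to tune): start mbuffer
listening on the receiving box, then point the sender at it.

  # receiving host: listen on a TCP port and feed zfs recv
  mbuffer -I 9090 -s 128k -m 1G | zfs recv -F backup/data

  # sending host: push the incremental stream into the remote mbuffer
  zfs send -i tank/data@snap1 tank/data@snap2 | \
      mbuffer -O backuphost:9090 -s 128k -m 1G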

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-04 Thread Giovanni Tirloni
On Tue, May 3, 2011 at 11:42 PM, Peter Jeremy
<peter.jer...@alcatel-lucent.com> wrote:

> - Is the source pool heavily fragmented with lots of small files?
>

Peter,

  We have some servers holding Xen VMs, and the setup was created to
have a default VM from which the others are cloned, so the space
savings are quite good.

  The problem we've started seeing is that a zfs send -i is taking
hours to send a very small amount of data (e.g. 20GB in 6 hours) while
a full zfs send transfers everything faster than the incremental
(40-70MB/s). Sometimes we just give up on sending the incremental and
send a full altogether.

  I'm wondering if it has to do with fragmentation too. Has anyone
experienced this? This is OpenSolaris b111. As a data point, we also
have servers holding VMware VMs (not cloned) and there is no problem
there. Does anyone know what's special about Xen's cloned VMs? Sparse
files, maybe?

Thanks,

-- 
Giovanni Tirloni


Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-04 Thread Rich Teer
On Wed, 4 May 2011, Edward Ned Harvey wrote:

> 4G is also lightweight, unless you're not doing much of anything.  No dedup,
> no L2ARC, just simple pushing bits around.  No services running...  Just ssh

Yep, that's right.  This is a repurposed workstation for use in my home network.

> I don't understand why so many people are building systems with insufficient
> ram these days.  I don't put less than 8G into a personal laptop anymore...

The Ultra 20 only supports 4 GB of RAM, and I've installed that much.  It
can't hold any more!  I have to make do with the resources I have here,
with my next to $0 budget...

-- 
Rich Teer, Publisher
Vinylphile Magazine

www.vinylphilemag.com


Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-04 Thread Rich Teer
On Wed, 4 May 2011, Edward Ned Harvey wrote:

> I suspect you're using a junky 1G slow-as-dirt usb thumb drive.

Nope--unless an IOMega Prestige Desktop Hard Drive (containing a
Hitachi 7,200 RPM hard drive with 32MB of cache) counts as a
slow-as-dirt USB thumb drive!

-- 
Rich Teer, Publisher
Vinylphile Magazine

www.vinylphilemag.com


Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-04 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Rich Teer
> 
> Not such a silly question.  :-)  The USB1 port was indeed the source of
> much of the bottleneck.  The same 50 MB file system took only 8 seconds
> to copy when I plugged the drive into a USB 2.0 card I had in the machine!

That works out to about 50 Mbit/sec.


> An 80 GB file system took 2 hours with the USB 2 port in use, with

That works out to about 88 Mbit/sec.


> True, but the SB1000 only supports 2GB of RAM IIRC!  I'll soon be
> migrating this machine's duties to an Ultra 20 M2.  A faster CPU
> and 4 GB should make a noticeable improvement (not to mention, on
> board USB 2.0 ports).

4G is also lightweight, unless you're not doing much of anything.  No dedup,
no L2ARC, just simple pushing bits around.  No services running...  Just ssh

I don't understand why so many people are building systems with insufficient
ram these days.  I don't put less than 8G into a personal laptop anymore...



Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-04 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Rich Teer
> 
> Also related to this is a performance question.  My initial test involved
> copying a 50 MB zfs file system to a new disk, which took 2.5 minutes
> to complete.  That strikes me as being a bit high for a mere 50 MB;
> 
> The source pool is on a pair of 146 GB 10K RPM disks on separate
> busses in a D1000 (split bus arrangement) and the destination pool
> is on an IOMega 1 GB USB attached disk.

Even the fastest USB3 thumb drive is slower than the slowest cheapest hard
drive.  Whatever specs are published on the supposed speed of the flash
drive, don't believe them.  I'm not saying they lie - just that they publish
the fastest conceivable speed under ideal situations, which are totally
unrealistic and meaningless in the real world.

I suspect you're using a junky 1G slow-as-dirt usb thumb drive.



Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-04 Thread David Dyer-Bennet

On Tue, May 3, 2011 19:39, Rich Teer wrote:

> I'm playing around with nearline backups using zfs send | zfs recv.
> A full backup made this way takes quite a lot of time, so I was
> wondering: after the initial copy, would using an incremental send
> (zfs send -i) make the process much quicker, because only the stuff
> that had changed between the previous snapshot and the current one
> would be copied?  Is my understanding of incremental zfs send correct?

Yes, that works.  In my setup, a full backup takes 6 hours (about 800GB
of data to an external USB 2 drive); an incremental takes maybe 20
minutes, even if I've added several gigabytes of images.

> Also related to this is a performance question.  My initial test
> involved copying a 50 MB zfs file system to a new disk, which took 2.5
> minutes to complete.  That strikes me as being a bit high for a mere
> 50 MB; are my expectations realistic, or is it just because of my very
> budget-conscious setup?  If so, where's the bottleneck?

In addition to issues others have mentioned, an incremental send
follows the order the blocks were written in rather than disk order,
so the resulting read pattern can sometimes be very seek-heavy.
-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-03 Thread Rich Teer
On Wed, 4 May 2011, Peter Jeremy wrote:

> Possibilities I can think of:
> - Do you have lots of snapshots?  There's an overhead of a second or so
>   for each snapshot to be sent.
> - Is the source pool heavily fragmented with lots of small files?

Nope, and I don't think so.

> Hopefully a silly question but does the SB1000 support USB2?  All of
> the Sun hardware I've dealt with only has USB1 ports.

Not such a silly question.  :-)  The USB1 port was indeed the source of
much of the bottleneck.  The same 50 MB file system took only 8 seconds
to copy when I plugged the drive into a USB 2.0 card I had in the machine!

An 80 GB file system took 2 hours with the USB 2 port in use, with
compression off.  I'm trying it again right now with compression
turned on in the receiving pool.  Should be interesting...
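
(Enabling it is just a property set on the receiving pool; the pool
name below is an example, and note that only newly written blocks get
compressed, not data that is already there:)

  zfs set compression=on backup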

> And, BTW, 2GB RAM is very light on for ZFS (though I note you only
> have a very small amount of data).

True, but the SB1000 only supports 2GB of RAM IIRC!  I'll soon be
migrating this machine's duties to an Ultra 20 M2.  A faster CPU
and 4 GB should make a noticeable improvement (not to mention,
onboard USB 2.0 ports).

Thanks for your ideas!

-- 
Rich Teer, Publisher
Vinylphile Magazine

www.vinylphilemag.com


Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-03 Thread Peter Jeremy
On 2011-May-04 08:39:39 +0800, Rich Teer  wrote:
>Also related to this is a performance question.  My initial test involved
>copying a 50 MB zfs file system to a new disk, which took 2.5 minutes
>to complete.  That strikes me as being a bit high for a mere 50 MB;
>are my expectations realistic or is it just because of my very budget
>conscious set up?  If so, where's the bottleneck?

Possibilities I can think of:
- Do you have lots of snapshots?  There's an overhead of a second or so
  for each snapshot to be sent.
- Is the source pool heavily fragmented with lots of small files?

>The source pool is on a pair of 146 GB 10K RPM disks on separate
>busses in a D1000 (split bus arrangement) and the destination pool
>is on an IOMega 1 GB USB attached disk.  The machine to which both
>pools are connected is a Sun Blade 1000 with a pair of 900 MHz US-III
>CPUs and 2 GB of RAM.

Hopefully a silly question, but does the SB1000 support USB2?  All of
the Sun hardware I've dealt with only has USB1 ports.

And, BTW, 2GB of RAM is very light for ZFS (though I note you only
have a very small amount of data).

-- 
Peter Jeremy




Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-03 Thread Eric D. Mudama

On Tue, May  3 at 17:39, Rich Teer wrote:

> Hi all,
>
> I'm playing around with nearline backups using zfs send | zfs recv.
> A full backup made this way takes quite a lot of time, so I was
> wondering: after the initial copy, would using an incremental send
> (zfs send -i) make the process much quicker, because only the stuff
> that had changed between the previous snapshot and the current one
> would be copied?  Is my understanding of incremental zfs send correct?


Your understanding is correct.  We use -I, not -i, since it can send
multiple snapshots with a single command.  Only the changed data is
sent with an incremental 'zfs send'.
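
A sketch of the difference, with made-up names: -i sends the delta
between exactly two snapshots, while -I also replicates every
intermediate snapshot between them.

  # -i: one delta; @tue, @wed, @thu do not appear on the target
  zfs send -i tank/data@mon tank/data@fri | zfs recv -F backup/data

  # -I: same endpoints, but the intermediate snapshots are sent too
  zfs send -I tank/data@mon tank/data@fri | zfs recv -F backup/data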


> Also related to this is a performance question.  My initial test
> involved copying a 50 MB zfs file system to a new disk, which took 2.5
> minutes to complete.  That strikes me as being a bit high for a mere
> 50 MB; are my expectations realistic, or is it just because of my very
> budget-conscious setup?  If so, where's the bottleneck?


Our setup does a send/recv at roughly 40MB/s over ssh on a 1Gbit/s
Ethernet connection.  There are ways to make this faster by not using
an encrypted transport, but the setup is a bit more advanced than just
an ssh 'zfs recv' command line.
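
One common approach, sketched with made-up names (note there is no
encryption or access control here, so it's only for trusted networks):
replace ssh with a raw TCP pipe such as netcat.  Option syntax varies
between netcat variants.

  # receiving host: listen and receive
  nc -l 9090 | zfs recv -F backup/data

  # sending host: stream the snapshots over plain TCP
  zfs send -I tank/data@old tank/data@new | nc backuphost 9090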


> The source pool is on a pair of 146 GB 10K RPM disks on separate
> busses in a D1000 (split bus arrangement) and the destination pool
> is on an IOMega 1 GB USB attached disk.  The machine to which both
> pools are connected is a Sun Blade 1000 with a pair of 900 MHz US-III
> CPUs and 2 GB of RAM.  The HBA is Sun's dual differential UltraSCSI
> PCI card.  The machine was relatively quiescent apart from doing the
> local zfs send | zfs recv.


I'm guessing that the USB bus and/or the USB disk is part of your
bottleneck.  UltraSCSI should be plenty fast and your CPU should be
fine too.

--eric


--
Eric D. Mudama
edmud...@bounceswoosh.org
