Re: [zfs-discuss] zfs send speed

2009-08-21 Thread Ian Collins

Joseph L. Casale wrote:

I have my own application that uses large circular buffers and a socket
connection between hosts.  The buffers keep data flowing during ZFS
writes and the direct connection cuts out ssh.



Application, as in not script (something you can share)?
  


Not yet!

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send speed

2009-08-20 Thread Joseph L. Casale
>I have my own application that uses large circular buffers and a socket
>connection between hosts.  The buffers keep data flowing during ZFS
>writes and the direct connection cuts out ssh.

Application, as in not script (something you can share)?

:)
jlc
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send speed

2009-08-20 Thread Ian Collins

Joseph L. Casale wrote:

With Solaris 10U7 I see about 35MB/sec between Thumpers using a direct
socket connection rather than ssh for full sends and 7-12MB/sec for
incrementals, depending on the data set.



Ian,
What's the syntax you use for this procedure?
  
I have my own application that uses large circular buffers and a socket 
connection between hosts.  The buffers keep data flowing during ZFS 
writes and the direct connection cuts out ssh.
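
A rough approximation with stock tools (a sketch only -- mbuffer here
stands in for my application, and the host name, port, buffer size, and
dataset names are placeholders):

  # receiving host: listen on a TCP port with a large in-memory buffer
  mbuffer -I 3001 -m 1G | zfs recv -d tank

  # sending host: push the stream through a matching buffer
  zfs send tank/fs@snap | mbuffer -m 1G -O recvhost:3001

The buffers absorb the stalls while zfs recv flushes its writes, which
is the main win over a bare pipe.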


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send speed

2009-08-20 Thread Joseph L. Casale
>With Solaris 10U7 I see about 35MB/sec between Thumpers using a direct
>socket connection rather than ssh for full sends and 7-12MB/sec for
>incrementals, depending on the data set.

Ian,
What's the syntax you use for this procedure?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send speed

2009-08-20 Thread Ian Collins

Paul Kraus wrote:

There are about 3.3 million files / directories in the 'dataset',
files range in size from 1 KB to 100 KB.

pkr...@nyc-sted1:/IDR-test/ppk> time sudo zfs send
IDR-test/dataset@1250616026 >/dev/null

real    91m19.024s
user    0m0.022s
sys     11m51.422s
pkr...@nyc-sted1:/IDR-test/ppk>

Which translates to a little over 18 MB/sec. and 600 files/sec. That
would mean almost 16 hours per TB. Better, but not much better than
NBU.

I do not think the SE-3511 is limiting us, as I have seen much higher
throughput on them when resilvering one or more mirrors.

Any thoughts as to why I am not getting better throughput ?
  
With Solaris 10U7 I see about 35MB/sec between Thumpers using a direct 
socket connection rather than ssh for full sends and 7-12MB/sec for 
incrementals, depending on the data set.


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send speed

2009-08-19 Thread Richard Elling

On Aug 18, 2009, at 1:16 PM, Paul Kraus wrote:

Is the speed of a 'zfs send' dependent on file size / number of  
files ?


Not directly. It is dependent on the amount of changes per unit time.


   We have a system with some large datasets (3.3 TB and about 35
million files) and conventional backups take a long time (using
Netbackup 6.5 a FULL takes between two and three days, differential
incrementals, even with very few files changing, take between 15 and
20 hours). We already use snapshots for day to day restores, but we
need the 'real' backups for DR.


This is quite common.


   I have been testing zfs send throughput and have not been
getting promising results. Note that this is NOT OpenSolaris, but
Solaris 10U6 (10/08) with the IDR for the snapshot interrupts resilver
bug.


You will need to do this in parallel.
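
As a sketch (pool and dataset names are placeholders, and a @backup
snapshot is assumed to exist on every child dataset):

  # one send per child file system, all running concurrently
  for fs in $(zfs list -H -o name -r tank/dept | sed 1d); do
      zfs send "$fs@backup" > "/backup/$(echo "$fs" | tr / _).zfs" &
  done
  wait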

We have had some discussions about a possible white paper on this
topic, but as yet there is no funding.  So, it will remain in the
world of professional services for the time being.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send speed

2009-08-19 Thread Paul Kraus
Thank you for all your replies, I'm collecting my responses in one
message below:

On Tue, Aug 18, 2009 at 7:43 PM, Nicolas
Williams wrote:
> On Tue, Aug 18, 2009 at 04:22:19PM -0400, Paul Kraus wrote:
>>        We have a system with some large datasets (3.3 TB and about 35
>> million files) and conventional backups take a long time (using
>> Netbackup 6.5 a FULL takes between two and three days, differential
>> incrementals, even with very few files changing, take between 15 and
>> 20 hours). We already use snapshots for day to day restores, but we
>> need the 'real' backups for DR.
>
> zfs send will be very fast for "differential incrementals ... with very
> few files changing" since zfs send is a block-level diff based on the
> differences between the selected snapshots.  Where a traditional backup
> tool would have to traverse the entire filesystem (modulo pruning based
> on ctime/mtime), zfs send simply traverses a list of changed blocks
> that's kept up by ZFS as you make changes in the first place.

Our testing indicates that for incremental zfs send the speed is very
good, and seems to be bandwidth limited and not limited by file count.
For example, while testing incremental sends I got the following
results:

~450,000 files sent, ~8.3 GB sent @ 690 files/sec. and 13 MB/sec.
~900,000 files sent, ~13 GB sent @ 890 files/sec. and 13 MB/sec.
~450,000 files sent, ~ 4.6 GB sent @ 1,800 files/sec. and 19 MB/sec.

Full zfs sends produced:

~2.5 million files, ~87 GB @ 500 files/sec. and 18 MB/sec.
~3.4 million files, ~ 100 GB @ 600 files/sec. and 19 MB/sec.

> For a *full* backup zfs send and traditional backup tools will have
> similar results as both will be I/O bound and both will have more or
> less the same number of I/Os to do.

The zfs send FULLS are in close agreement with what we are seeing with
a FULL NBU backup.

> Caveat: zfs send formats are not guaranteed to be backwards
> compatible, therefore zfs send is not suitable for long-term backups.

Yup, we only need them for 5 weeks, and when we upgrade the
server (and ZFS version) we would need to do a new set of fulls.

On Tue, Aug 18, 2009 at 8:54 PM,  Mattias Pantzare  wrote:

> Conventional backups can be faster than that! I have not used
> netbackup but you should be able to configure netbackup to run several
> backup streams in parallel. You may have to point netbackup to subdirs
> instead of the file system root.

We have over 180 filesystems on the production server right
now, and we are really trying to avoid any manual customization of the
backup policy. In a previous incarnation this data lived on a Mac OS X
server in one FS (only about 4 TB total at that point), full backups
took so long that we manually configured three NBU policies with many
individual directories ... it was a nightmare as new data (and
directories) were added.

On Tue, Aug 18, 2009 at 10:33 PM, Mike Gerdts  wrote:

> This was discussed in another thread as well.
>
> http://opensolaris.org/jive/thread.jspa?threadID=109751&tstart=0

Thanks for that pointer. I had missed that thread in my
search; I just hadn't hit the right keywords. This thread got me
thinking about our data layout. Currently the data is broken up by both
department and project. Each department gets a zpool and each project
within the department a dataset/zfs. Departments range in size from
one mirrored pair of LUNs (512 GB) to 11 mirrored pairs of LUNs (5.5
TB). Projects range from a few KB to 3.3 TB (and 33 million files).
The data is all relatively small, images of documents, but there are
many, many of them.

Is there any throughput penalty for the dataset being part of
a bigger zpool ? In other words, am I more likely to get better FULL
throughput if I move the data to a dedicated zpool instead of a child
dataset ? We *can* change our model to assign each project a separate
zpool, but that would be wasteful of space. Perhaps move a given
project to its own zpool when it grows to a certain size (>1 TB
maybe). But, if there would not be any performance advantage, it's not
worth the effort.

I had assumed that a full zfs send would just stream the
underlying zfs structure and not really deal with individual files,
but if the dataset is part of a shared zpool then I guess it has to
look at the files' metadata to determine if a given file is part of
that dataset.

P.S. We are planning to move the back-end storage to JBOD (probably
J4400), but that is not where we are today, and we can't count on that
happening soon.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Designer, "The Pajama Game" @ Schenectady Light Opera Company
( http://www.sloctheater.org/ )
-> Technical Advisor, Lunacon 2010 (http://www.lunacon.org/)
-> Technical Advisor, RPI Players
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] zfs send speed

2009-08-18 Thread Mike Gerdts
On Tue, Aug 18, 2009 at 7:54 PM, Mattias Pantzare wrote:
> On Tue, Aug 18, 2009 at 22:22, Paul Kraus wrote:
>> Posted from the wrong address the first time, sorry.
>>
>> Is the speed of a 'zfs send' dependent on file size / number of files ?
>>
>>        We have a system with some large datasets (3.3 TB and about 35
>> million files) and conventional backups take a long time (using
>> Netbackup 6.5 a FULL takes between two and three days, differential
>> incrementals, even with very few files changing, take between 15 and
>> 20 hours). We already use snapshots for day to day restores, but we
>> need the 'real' backups for DR.
>
> Conventional backups can be faster than that! I have not used
> netbackup but you should be able to configure netbackup to run several
> backup streams in parallel. You may have to point netbackup to subdirs
> instead of the file system root.

This was discussed in another thread as well.

http://opensolaris.org/jive/thread.jspa?threadID=109751&tstart=0

In particular...

http://opensolaris.org/jive/thread.jspa?threadID=109751&tstart=0#405121
http://opensolaris.org/jive/thread.jspa?threadID=109751&tstart=0#404589
http://opensolaris.org/jive/thread.jspa?threadID=109751&tstart=0#405835
http://opensolaris.org/jive/thread.jspa?threadID=109751&tstart=0#405308

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send speed

2009-08-18 Thread Mattias Pantzare
On Tue, Aug 18, 2009 at 22:22, Paul Kraus wrote:
> Posted from the wrong address the first time, sorry.
>
> Is the speed of a 'zfs send' dependent on file size / number of files ?
>
>        We have a system with some large datasets (3.3 TB and about 35
> million files) and conventional backups take a long time (using
> Netbackup 6.5 a FULL takes between two and three days, differential
> incrementals, even with very few files changing, take between 15 and
> 20 hours). We already use snapshots for day to day restores, but we
> need the 'real' backups for DR.

Conventional backups can be faster than that! I have not used
netbackup but you should be able to configure netbackup to run several
backup streams in parallel. You may have to point netbackup to subdirs
instead of the file system root.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs send speed

2009-08-18 Thread Paul Kraus
Is the speed of a 'zfs send' dependent on file size / number of files ?

We have a system with some large datasets (3.3 TB and about 35
million files) and conventional backups take a long time (using
Netbackup 6.5 a FULL takes between two and three days, differential
incrementals, even with very few files changing, take between 15 and
20 hours). We already use snapshots for day to day restores, but we
need the 'real' backups for DR.

I have been testing zfs send throughput and have not been
getting promising results. Note that this is NOT OpenSolaris, but
Solaris 10U6 (10/08) with the IDR for the snapshot interrupts resilver
bug.

Server: V480, 4 CPU, 16 GB RAM (test server, production is an M4000)
Storage: two SE-3511, each with one 512 GB LUN presented

Simple mirror layout:

pkr...@nyc-sted1:/IDR-test/ppk> zpool status
  pool: IDR-test
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Jul  1 16:54:58 2009
config:

        NAME                                       STATE     READ WRITE CKSUM
        IDR-test                                   ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c6t600C0FF00927852FB91AD308d0  ONLINE       0     0     0
            c6t600C0FF00922614781B19008d0  ONLINE       0     0     0

errors: No known data errors
pkr...@nyc-sted1:/IDR-test/ppk>

pkr...@nyc-sted1:/IDR-test/ppk> zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
IDR-test                      101G   399G  24.3M  /IDR-test
IDR-test@1250597527          96.8M      -   101M  -
IDR-test@1250604834          20.1M      -  24.3M  -
IDR-test@1250605236            16K      -  24.3M  -
IDR-test@1250605400            20K      -  24.3M  -
IDR-test@1250606582            20K      -  24.3M  -
IDR-test@1250612553            20K      -  24.3M  -
IDR-test@1250616026            20K      -  24.3M  -
IDR-test/dataset              101G   399G   100G  /IDR-test/dataset
IDR-test/dataset@1250597527   313K      -  87.1G  -
IDR-test/dataset@1250604834   266K      -  87.1G  -
IDR-test/dataset@1250605236   187M      -  88.2G  -
IDR-test/dataset@1250605400   192M      -  89.3G  -
IDR-test/dataset@1250606582   246K      -  95.4G  -
IDR-test/dataset@1250612553   233K      -  95.4G  -
IDR-test/dataset@1250616026   230K      -   100G  -
pkr...@nyc-sted1:/IDR-test/ppk>

There are about 3.3 million files / directories in the 'dataset',
files range in size from 1 KB to 100 KB.

pkr...@nyc-sted1:/IDR-test/ppk> time sudo zfs send
IDR-test/dataset@1250616026 >/dev/null

real    91m19.024s
user    0m0.022s
sys     11m51.422s
pkr...@nyc-sted1:/IDR-test/ppk>

Which translates to a little over 18 MB/sec. and 600 files/sec. That
would mean almost 16 hours per TB. Better, but not much better than
NBU.
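
For the record, the arithmetic: the snapshot REFERs 100 GB and 91m19s
is 5479 seconds, so 100 * 1024 MB / 5479 s ~= 18.7 MB/sec and
3,300,000 files / 5479 s ~= 600 files/sec; at that rate 1 TB takes
1,048,576 MB / 18.7 MB/sec ~= 56,000 s, or roughly 15.6 hours.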

I do not think the SE-3511 is limiting us, as I have seen much higher
throughput on them when resilvering one or more mirrors.

Any thoughts as to why I am not getting better throughput ?

Thanks.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Designer, "The Pajama Game" @ Schenectady Light Opera Company
( http://www.sloctheater.org/ )
-> Technical Advisor, Lunacon 2010 (http://www.lunacon.org/)
-> Technical Advisor, RPI Players
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send speed

2009-08-18 Thread Nicolas Williams
On Tue, Aug 18, 2009 at 04:22:19PM -0400, Paul Kraus wrote:
>        We have a system with some large datasets (3.3 TB and about 35
> million files) and conventional backups take a long time (using
> Netbackup 6.5 a FULL takes between two and three days, differential
> incrementals, even with very few files changing, take between 15 and
> 20 hours). We already use snapshots for day to day restores, but we
> need the 'real' backups for DR.

zfs send will be very fast for "differential incrementals ... with very
few files changing" since zfs send is a block-level diff based on the
differences between the selected snapshots.  Where a traditional backup
tool would have to traverse the entire filesystem (modulo pruning based
on ctime/mtime), zfs send simply traverses a list of changed blocks
that's kept up by ZFS as you make changes in the first place.
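
For example (dataset and snapshot names invented):

  # full send of a baseline snapshot, then a block-level incremental
  zfs send tank/data@mon > /backup/data-full.zfs
  zfs send -i mon tank/data@tue > /backup/data-incr.zfs

The incremental stream carries only the blocks that changed between
@mon and @tue, regardless of how many files the dataset contains.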

For a *full* backup zfs send and traditional backup tools will have
similar results as both will be I/O bound and both will have more or
less the same number of I/Os to do.

Caveat: zfs send formats are not guaranteed to be backwards
compatible, therefore zfs send is not suitable for long-term backups.

Nico
-- 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send speed

2009-08-18 Thread Joseph L. Casale
>Is the speed of a 'zfs send' dependent on file size / number of files ?

I am going to say no. I have *far* inferior iron that I am running a backup
rig on, doing send/recv over ssh through GigE, and last night's replication
gave the following: "received 40.2GB stream in 3498 seconds (11.8MB/sec)"
I have seen it as high as your figures, but usually somewhere between this and your number.

I assumed it was a result of the ssh overhead (arcfour yielded the best 
results).
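
For reference, the pipeline behind those numbers is roughly (host and
dataset names here are placeholders, not my real ones):

  zfs send -i tank/fs@prev tank/fs@now | \
      ssh -c arcfour backuphost zfs recv -F tank/fs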

>There are about 3.3 million files / directories in the 'dataset',
>files range in size from 1 KB to 100 KB.

The number of files I am replicating would be ~100!

jlc
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs send speed

2009-08-18 Thread Paul Kraus
Posted from the wrong address the first time, sorry.

Is the speed of a 'zfs send' dependent on file size / number of files ?

       We have a system with some large datasets (3.3 TB and about 35
million files) and conventional backups take a long time (using
Netbackup 6.5 a FULL takes between two and three days, differential
incrementals, even with very few files changing, take between 15 and
20 hours). We already use snapshots for day to day restores, but we
need the 'real' backups for DR.

       I have been testing zfs send throughput and have not been
getting promising results. Note that this is NOT OpenSolaris, but
Solaris 10U6 (10/08) with the IDR for the snapshot interrupts resilver
bug.

Server: V480, 4 CPU, 16 GB RAM (test server, production is an M4000)
Storage: two SE-3511, each with one 512 GB LUN presented

Simple mirror layout:

pkr...@nyc-sted1:/IDR-test/ppk> zpool status
 pool: IDR-test
 state: ONLINE
 scrub: resilver completed after 0h0m with 0 errors on Wed Jul  1 16:54:58 2009
config:

       NAME                                       STATE     READ WRITE CKSUM
       IDR-test                                   ONLINE       0     0     0
         mirror                                   ONLINE       0     0     0
           c6t600C0FF00927852FB91AD308d0  ONLINE       0     0     0
           c6t600C0FF00922614781B19008d0  ONLINE       0     0     0

errors: No known data errors
pkr...@nyc-sted1:/IDR-test/ppk>

pkr...@nyc-sted1:/IDR-test/ppk> zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
IDR-test                      101G   399G  24.3M  /IDR-test
IDR-test@1250597527          96.8M      -   101M  -
IDR-test@1250604834          20.1M      -  24.3M  -
IDR-test@1250605236            16K      -  24.3M  -
IDR-test@1250605400            20K      -  24.3M  -
IDR-test@1250606582            20K      -  24.3M  -
IDR-test@1250612553            20K      -  24.3M  -
IDR-test@1250616026            20K      -  24.3M  -
IDR-test/dataset              101G   399G   100G  /IDR-test/dataset
IDR-test/dataset@1250597527   313K      -  87.1G  -
IDR-test/dataset@1250604834   266K      -  87.1G  -
IDR-test/dataset@1250605236   187M      -  88.2G  -
IDR-test/dataset@1250605400   192M      -  89.3G  -
IDR-test/dataset@1250606582   246K      -  95.4G  -
IDR-test/dataset@1250612553   233K      -  95.4G  -
IDR-test/dataset@1250616026   230K      -   100G  -
pkr...@nyc-sted1:/IDR-test/ppk>

There are about 3.3 million files / directories in the 'dataset',
files range in size from 1 KB to 100 KB.

pkr...@nyc-sted1:/IDR-test/ppk> time sudo zfs send
IDR-test/dataset@1250616026 >/dev/null

real    91m19.024s
user    0m0.022s
sys     11m51.422s
pkr...@nyc-sted1:/IDR-test/ppk>

Which translates to a little over 18 MB/sec. and 600 files/sec. That
would mean almost 16 hours per TB. Better, but not much better than
NBU.

I do not think the SE-3511 is limiting us, as I have seen much higher
throughput on them when resilvering one or more mirrors.

Any thoughts as to why I am not getting better throughput ?

Thanks.

--
{1-2-3-4-5-6-7-}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Designer, "The Pajama Game" @ Schenectady Light Opera Company
( http://www.sloctheater.org/ )
-> Technical Advisor, Lunacon 2010 (http://www.lunacon.org/)
-> Technical Advisor, RPI Players
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-27 Thread Dirk Wriedt

Au contraire...

From what I have seen, larger file systems and large numbers of files
seem to slow down zfs send/receive, worsening the problem. So it may be
a good idea to partition your file system, subdividing it into smaller
ones, replicating each one separately. 
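
As a sketch (names invented, a common @repl snapshot assumed on each
child), replicating each file system separately looks like:

  for fs in tank/proj/a tank/proj/b tank/proj/c; do
      zfs send "$fs@repl" | ssh backuphost zfs recv -d backup
  done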

Dirk

On Tue, 2009-05-26 at 12:46, Jorgen Lundman wrote:
> So you recommend I also do speed test on larger volumes? The test data I 
> had on the b114 server was only 90GB. Previous tests included 500G ufs 
> on zvol etc.  It is just that it will take 4 days to send it to the b114 
> server to start with ;) (From Sol10 servers).
> 
> Lund
> 
> Dirk Wriedt wrote:
> > Jorgen,
> > 
> > what is the size of the sending zfs?
> > 
> > I thought replication speed depends on the size of the sending fs too,
> > not only the size of the snapshot being sent.
> > 
> > Regards
> > Dirk
> > 
> > 
> > --On Friday, May 22, 2009 19:19:34 +0900 Jorgen Lundman wrote:
> > 
> >> Sorry, yes. It is straight;
> >>
> >> # time zfs send zpool1/leroy_c...@speedtest | nc 172.20.12.232 3001
> >> real    19m48.199s
> >>
> >> # /var/tmp/nc -l -p 3001 -vvv | time zfs recv -v zpool1/le...@speedtest
> >> received 82.3GB stream in 1195 seconds (70.5MB/sec)
> >>
> >>
> >> Sending is osol-b114.
> >> Receiver is Solaris 10 10/08
> >>
> >> When we tested Solaris 10 10/08 -> Solaris 10 10/08 these were the 
> >> results;
> >>
> >> zfs send | nc | zfs recv -> 1 MB/s
> >> tar -cvf /zpool/leroy | nc | tar -xvf -  -> 2.5 MB/s
> >> ufsdump | nc | ufsrestore    -> 5.0 MB/s
> >>
> >> So, none of those solutions was usable with regular Sol 10. Note most 
> >> our volumes are ufs in
> >> zvol, but even zfs volumes were slow.
> >>
> >> Someone else had mentioned the speed was fixed in an earlier release, 
> >> I had not had a chance to
> >> upgrade. But since we wanted to try zfs user-quotas, I finally had the 
> >> chance.
> >>
> >> Lund
> >>
> >>
> >> Brent Jones wrote:
> >>> On Thu, May 21, 2009 at 10:17 PM, Jorgen Lundman  wrote:
>  To finally close my quest. I tested "zfs send" in osol-b114 version:
> 
>  received 82.3GB stream in 1195 seconds (70.5MB/sec)
> 
>  Yeeaahh!
> 
>  That makes it completely usable! Just need to change our support 
>  contract to
>  allow us to run b114 and we're set! :)
> 
> 
>  Thanks,
> 
>  Lund
> 
> 
>  Jorgen Lundman wrote:
> > We finally managed to upgrade the production x4500s to Sol 10 10/08
> > (unrelated to this) but with the hope that it would also make "zfs 
> > send"
> > usable.
> >
> > Exactly how does "build 105" translate to Solaris 10 10/08?  My 
> > current
> > speed test has sent 34Gb in 24 hours, which isn't great. Perhaps 
> > the next
> > version of Solaris 10 will have the improvements.
> >
> >
> > Robert Milkowski wrote:
> >> Hello Jorgen,
> >>
> >> If you look at the list archives you will see that it made a huge
> >> difference for some people including me. Now I'm easily able to
> >> saturate GbE link while zfs send|recv'ing.
> >>
> >>> Since build 105 it should be *MUCH* faster.
> >
>  -- 
>  Jorgen Lundman   | 
>  Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
>  Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
>  Japan| +81 (0)3 -3375-1767  (home)
>  ___
>  zfs-discuss mailing list
>  zfs-discuss@opensolaris.org
>  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> 
> >>>
> >>> Can you give any details about your data set, what you piped zfs
> >>> send/receive through (SSH?), hardware/network, etc?
> >>> I'm envious of your speeds!
> > 
> > 
> > 
> > -- 
> > Dirk Wriedt, dirk.wri...@sun.com, Sun Microsystems GmbH
> > Systems Engineer, Strategic Accounts
> > Nagelsweg 55, 20097 Hamburg, Germany
> > Tel.: +49-40-251523-132 Fax: +49-40-251523-425 Mobile: +49 172 848 4166
> > "Never been afraid of chances I been takin'" - Joan Jett
> > 
> > Registered office: Sun Microsystems GmbH, Sonnenallee 1, D-85551
> > Kirchheim-Heimstetten
> > Munich District Court: HRB 161028
> > Managing Directors: Thomas Schroeder, Wolfgang Engels, Wolf Frenkel
> > Chairman of the Supervisory Board: Martin Haering
> > 
> > 
> 
> -- 
> Jorgen Lundman   | 
> Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
> Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
> Japan| +81 (0)3 -3375-1767  (home)
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
-- 
Registered office: Sun Microsystems GmbH, Sonnenallee 1, D-85551
Kirchheim-Heimstetten, Munich District Court: HRB 161028
Managing Directors: Thomas Schröder, Wolfgang Engels, Wolf Frenkel
Chairman of the Supervisory Board: Martin Haering

[zfs-discuss] ZFS send speed improvements in Solaris 10?

2009-05-27 Thread Ian Collins

Did the ZFS send speed improvements make it into Solaris 10 update 7?

If not, are they targeted for a Solaris 10 update?

Thanks,

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-27 Thread Jorgen Lundman


I changed to try zfs send on a UFS on zvolume as well:

received 92.9GB stream in 2354 seconds (40.4MB/sec)

Still fast enough to use. I have yet to get around to trying something 
considerably larger in size.


Lund


Jorgen Lundman wrote:



So you recommend I also do speed test on larger volumes? The test data I 
had on the b114 server was only 90GB. Previous tests included 500G ufs 
on zvol etc.  It is just that it will take 4 days to send it to the b114 
server to start with ;) (From Sol10 servers).


Lund

Dirk Wriedt wrote:

Jorgen,

what is the size of the sending zfs?

I thought replication speed depends on the size of the sending fs too, 
not only the size of the snapshot being sent.


Regards
Dirk


--On Friday, May 22, 2009 19:19:34 +0900 Jorgen Lundman wrote:



Sorry, yes. It is straight;

# time zfs send zpool1/leroy_c...@speedtest | nc 172.20.12.232 3001
real    19m48.199s

# /var/tmp/nc -l -p 3001 -vvv | time zfs recv -v zpool1/le...@speedtest
received 82.3GB stream in 1195 seconds (70.5MB/sec)


Sending is osol-b114.
Receiver is Solaris 10 10/08

When we tested Solaris 10 10/08 -> Solaris 10 10/08 these were the 
results;


zfs send | nc | zfs recv -> 1 MB/s
tar -cvf /zpool/leroy | nc | tar -xvf -  -> 2.5 MB/s
ufsdump | nc | ufsrestore    -> 5.0 MB/s

So, none of those solutions was usable with regular Sol 10. Note most 
our volumes are ufs in

zvol, but even zfs volumes were slow.

Someone else had mentioned the speed was fixed in an earlier release, 
I had not had a chance to
upgrade. But since we wanted to try zfs user-quotas, I finally had 
the chance.


Lund


Brent Jones wrote:
On Thu, May 21, 2009 at 10:17 PM, Jorgen Lundman  
wrote:

To finally close my quest. I tested "zfs send" in osol-b114 version:

received 82.3GB stream in 1195 seconds (70.5MB/sec)

Yeeaahh!

That makes it completely usable! Just need to change our support 
contract to

allow us to run b114 and we're set! :)


Thanks,

Lund


Jorgen Lundman wrote:

We finally managed to upgrade the production x4500s to Sol 10 10/08
(unrelated to this) but with the hope that it would also make "zfs 
send"

usable.

Exactly how does "build 105" translate to Solaris 10 10/08?  My 
current
speed test has sent 34Gb in 24 hours, which isn't great. Perhaps 
the next

version of Solaris 10 will have the improvements.




Robert Milkowski wrote:

Hello Jorgen,

If you look at the list archives you will see that it made a huge
difference for some people including me. Now I'm easily able to
saturate GbE link while zfs send|recv'ing.


Since build 105 it should be *MUCH* faster.



--
Jorgen Lundman   | 
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Can you give any details about your data set, what you piped zfs
send/receive through (SSH?), hardware/network, etc?
I'm envious of your speeds!




--
Dirk Wriedt, dirk.wri...@sun.com, Sun Microsystems GmbH
Systems Engineer, Strategic Accounts
Nagelsweg 55, 20097 Hamburg, Germany
Tel.: +49-40-251523-132 Fax: +49-40-251523-425 Mobile: +49 172 848 4166
"Never been afraid of chances I been takin'" - Joan Jett

Registered office: Sun Microsystems GmbH, Sonnenallee 1, D-85551
Kirchheim-Heimstetten

Munich District Court: HRB 161028
Managing Directors: Thomas Schroeder, Wolfgang Engels, Wolf Frenkel
Chairman of the Supervisory Board: Martin Haering






--
Jorgen Lundman   | 
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-26 Thread Jorgen Lundman



So you recommend I also do speed test on larger volumes? The test data I 
had on the b114 server was only 90GB. Previous tests included 500G ufs 
on zvol etc.  It is just that it will take 4 days to send it to the b114 
server to start with ;) (From Sol10 servers).


Lund

Dirk Wriedt wrote:

Jorgen,

what is the size of the sending zfs?

I thought replication speed depends on the size of the sending fs too, 
not only the size of the snapshot being sent.


Regards
Dirk


--On Friday, May 22, 2009 19:19:34 +0900 Jorgen Lundman wrote:



Sorry, yes. It is straight;

# time zfs send zpool1/leroy_c...@speedtest | nc 172.20.12.232 3001
real    19m48.199s

# /var/tmp/nc -l -p 3001 -vvv | time zfs recv -v zpool1/le...@speedtest
received 82.3GB stream in 1195 seconds (70.5MB/sec)


Sending is osol-b114.
Receiver is Solaris 10 10/08

When we tested Solaris 10 10/08 -> Solaris 10 10/08 these were the 
results;


zfs send | nc | zfs recv -> 1 MB/s
tar -cvf /zpool/leroy | nc | tar -xvf -  -> 2.5 MB/s
ufsdump | nc | ufsrestore    -> 5.0 MB/s

So, none of those solutions was usable with regular Sol 10. Note most 
our volumes are ufs in

zvol, but even zfs volumes were slow.

Someone else had mentioned the speed was fixed in an earlier release, 
I had not had a chance to
upgrade. But since we wanted to try zfs user-quotas, I finally had the 
chance.


Lund


Brent Jones wrote:

On Thu, May 21, 2009 at 10:17 PM, Jorgen Lundman  wrote:

To finally close my quest. I tested "zfs send" in osol-b114 version:

received 82.3GB stream in 1195 seconds (70.5MB/sec)

Yeeaahh!

That makes it completely usable! Just need to change our support 
contract to

allow us to run b114 and we're set! :)


Thanks,

Lund


Jorgen Lundman wrote:

We finally managed to upgrade the production x4500s to Sol 10 10/08
(unrelated to this) but with the hope that it would also make "zfs 
send"

usable.

Exactly how does "build 105" translate to Solaris 10 10/08?  My 
current
speed test has sent 34Gb in 24 hours, which isn't great. Perhaps 
the next

version of Solaris 10 will have the improvements.




Robert Milkowski wrote:

Hello Jorgen,

If you look at the list archives you will see that it made a huge
difference for some people including me. Now I'm easily able to
saturate GbE link while zfs send|recv'ing.


Since build 105 it should be *MUCH* faster.



--
Jorgen Lundman   | 
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Can you give any details about your data set, what you piped zfs
send/receive through (SSH?), hardware/network, etc?
I'm envious of your speeds!




--
Dirk Wriedt, dirk.wri...@sun.com, Sun Microsystems GmbH
Systems Engineer, Strategic Accounts
Nagelsweg 55, 20097 Hamburg, Germany
Tel.: +49-40-251523-132 Fax: +49-40-251523-425 Mobile: +49 172 848 4166
"Never been afraid of chances I been takin'" - Joan Jett

Registered office: Sun Microsystems GmbH, Sonnenallee 1, D-85551
Kirchheim-Heimstetten

Munich District Court: HRB 161028
Managing Directors: Thomas Schroeder, Wolfgang Engels, Wolf Frenkel
Chairman of the Supervisory Board: Martin Haering




--
Jorgen Lundman   | 
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-26 Thread Dirk Wriedt

Jorgen,

what is the size of the sending zfs?

I thought replication speed depends on the size of the sending fs too, not only the
size of the snapshot being sent.


Regards
Dirk


--On Friday, May 22, 2009 19:19:34 +0900 Jorgen Lundman wrote:


Sorry, yes. It is straight;

# time zfs send zpool1/leroy_c...@speedtest | nc 172.20.12.232 3001
real    19m48.199s

# /var/tmp/nc -l -p 3001 -vvv | time zfs recv -v zpool1/le...@speedtest
received 82.3GB stream in 1195 seconds (70.5MB/sec)


Sending is osol-b114.
Receiver is Solaris 10 10/08

When we tested Solaris 10 10/08 -> Solaris 10 10/08 these were the results;

zfs send | nc | zfs recv -> 1 MB/s
tar -cvf /zpool/leroy | nc | tar -xvf -  -> 2.5 MB/s
ufsdump | nc | ufsrestore    -> 5.0 MB/s

So, none of those solutions was usable with regular Sol 10. Note most our 
volumes are ufs in
zvol, but even zfs volumes were slow.

Someone else had mentioned the speed was fixed in an earlier release, I had not 
had a chance to
upgrade. But since we wanted to try zfs user-quotas, I finally had the chance.

Lund


Brent Jones wrote:

On Thu, May 21, 2009 at 10:17 PM, Jorgen Lundman  wrote:

To finally close my quest. I tested "zfs send" in osol-b114 version:

received 82.3GB stream in 1195 seconds (70.5MB/sec)

Yeeaahh!

That makes it completely usable! Just need to change our support contract to
allow us to run b114 and we're set! :)


Thanks,

Lund


Jorgen Lundman wrote:

We finally managed to upgrade the production x4500s to Sol 10 10/08
(unrelated to this) but with the hope that it would also make "zfs send"
usable.

Exactly how does "build 105" translate to Solaris 10 10/08?  My current
speed test has sent 34Gb in 24 hours, which isn't great. Perhaps the next
version of Solaris 10 will have the improvements.




Robert Milkowski wrote:

Hello Jorgen,

If you look at the list archives you will see that it made a huge
difference for some people including me. Now I'm easily able to
saturate GbE link while zfs send|recv'ing.


Since build 105 it should be *MUCH* faster.



--
Jorgen Lundman   | 
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Can you give any details about your data set, what you piped zfs
send/receive through (SSH?), hardware/network, etc?
I'm envious of your speeds!




--
Dirk Wriedt, dirk.wri...@sun.com, Sun Microsystems GmbH
Systems Engineer, Strategic Accounts
Nagelsweg 55, 20097 Hamburg, Germany
Tel.: +49-40-251523-132 Fax: +49-40-251523-425 Mobile: +49 172 848 4166
"Never been afraid of chances I been takin'" - Joan Jett

Registered office: Sun Microsystems GmbH, Sonnenallee 1, D-85551
Kirchheim-Heimstetten
Munich District Court: HRB 161028
Managing Directors: Thomas Schroeder, Wolfgang Engels, Wolf Frenkel
Chairman of the Supervisory Board: Martin Haering

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-22 Thread Nicolas Williams
On Fri, May 22, 2009 at 04:40:43PM -0600, Eric D. Mudama wrote:
> As another datapoint, the 111a opensolaris preview got me ~29MB/s
> through an SSH tunnel with no tuning on a 40GB dataset.
> 
> Sender was a Core2Duo E4500 reading from SSDs and receiver was a Xeon
> E5520 writing to a few mirrored 7200RPM SATA vdevs in a single pool.
> Network was a $35 8-port gigabit netgear switch.

Unfortunately the SunSSH doesn't know how to grow SSHv2 channel windows
to take full advantage of the TCP bandwidth-delay product (BDP), so you could probably have gone
faster.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-22 Thread Eric D. Mudama

On Fri, May 22 at 11:05, Robert Milkowski wrote:



btw: caching data from zfs send, and for zfs recv on the other side, could make it
even faster. You could use something like mbuffer with buffers of 1-2GB,
for example.


As another datapoint, the 111a opensolaris preview got me ~29MB/s
through an SSH tunnel with no tuning on a 40GB dataset.

Sender was a Core2Duo E4500 reading from SSDs and receiver was a Xeon
E5520 writing to a few mirrored 7200RPM SATA vdevs in a single pool.
Network was a $35 8-port gigabit netgear switch.

--eric

--
Eric D. Mudama
edmud...@mail.bounceswoosh.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-22 Thread Jorgen Lundman

Sorry, yes. It is straight;

# time zfs send zpool1/leroy_c...@speedtest | nc 172.20.12.232 3001
real    19m48.199s

# /var/tmp/nc -l -p 3001 -vvv | time zfs recv -v zpool1/le...@speedtest
received 82.3GB stream in 1195 seconds (70.5MB/sec)


Sending is osol-b114.
Receiver is Solaris 10 10/08

When we tested Solaris 10 10/08 -> Solaris 10 10/08 these were the results;

zfs send | nc | zfs recv -> 1 MB/s
tar -cvf /zpool/leroy | nc | tar -xvf -  -> 2.5 MB/s
ufsdump | nc | ufsrestore    -> 5.0 MB/s

So, none of those solutions was usable with regular Sol 10. Note most 
our volumes are ufs in zvol, but even zfs volumes were slow.


Someone else had mentioned the speed was fixed in an earlier release, I 
had not had a chance to upgrade. But since we wanted to try zfs 
user-quotas, I finally had the chance.


Lund


Brent Jones wrote:

On Thu, May 21, 2009 at 10:17 PM, Jorgen Lundman  wrote:

To finally close my quest. I tested "zfs send" in osol-b114 version:

received 82.3GB stream in 1195 seconds (70.5MB/sec)

Yeeaahh!

That makes it completely usable! Just need to change our support contract to
allow us to run b114 and we're set! :)


Thanks,

Lund


Jorgen Lundman wrote:

We finally managed to upgrade the production x4500s to Sol 10 10/08
(unrelated to this) but with the hope that it would also make "zfs send"
usable.

Exactly how does "build 105" translate to Solaris 10 10/08?  My current
speed test has sent 34Gb in 24 hours, which isn't great. Perhaps the next
version of Solaris 10 will have the improvements.




Robert Milkowski wrote:

Hello Jorgen,

If you look at the list archives you will see that it made a huge
difference for some people including me. Now I'm easily able to
saturate GbE link while zfs send|recv'ing.


Since build 105 it should be *MUCH* faster.



--
Jorgen Lundman   | 
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Can you give any details about your data set, what you piped zfs
send/receive through (SSH?), hardware/network, etc?
I'm envious of your speeds!



--
Jorgen Lundman   | 
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-22 Thread Ian Collins

Brent Jones wrote:

On Thu, May 21, 2009 at 10:17 PM, Jorgen Lundman  wrote:
  

To finally close my quest. I tested "zfs send" in osol-b114 version:

received 82.3GB stream in 1195 seconds (70.5MB/sec)


Can you give any details about your data set, what you piped zfs
send/receive through (SSH?), hardware/network, etc?
I'm envious of your speeds!

  
I've managed close to that for full sends on Solaris 10 using direct 
socket connections and a few seconds of buffering.


--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-22 Thread Robert Milkowski



btw: caching data from zfs send, and for zfs recv on the other side, could make it
even faster. You could use something like mbuffer with buffers of 1-2GB,
for example.
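
A sketch of that (buffer size, port, and names are examples only):

  # receiver: a large buffer in front of zfs recv
  nc -l -p 3001 | mbuffer -m 1G | zfs recv -v pool/fs

  # sender: a large buffer behind zfs send
  zfs send pool/fs@snap | mbuffer -m 1G | nc recvhost 3001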



On Fri, 22 May 2009, Jorgen Lundman wrote:



To finally close my quest. I tested "zfs send" in osol-b114 version:

received 82.3GB stream in 1195 seconds (70.5MB/sec)

Yeeaahh!

That makes it completely usable! Just need to change our support contract to 
allow us to run b114 and we're set! :)



Thanks,

Lund


Jorgen Lundman wrote:


We finally managed to upgrade the production x4500s to Sol 10 10/08 
(unrelated to this) but with the hope that it would also make "zfs send" 
usable.


Exactly how does "build 105" translate to Solaris 10 10/08?  My current 
speed test has sent 34Gb in 24 hours, which isn't great. Perhaps the next 
version of Solaris 10 will have the improvements.




Robert Milkowski wrote:

Hello Jorgen,

If you look at the list archives you will see that it made a huge
difference for some people including me. Now I'm easily able to
saturate GbE link while zfs send|recv'ing.


Since build 105 it should be *MUCH* faster.





--
Jorgen Lundman   | 
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-21 Thread Brent Jones
On Thu, May 21, 2009 at 10:17 PM, Jorgen Lundman  wrote:
>
> To finally close my quest. I tested "zfs send" in osol-b114 version:
>
> received 82.3GB stream in 1195 seconds (70.5MB/sec)
>
> Yeeaahh!
>
> That makes it completely usable! Just need to change our support contract to
> allow us to run b114 and we're set! :)
>
>
> Thanks,
>
> Lund
>
>
> Jorgen Lundman wrote:
>>
>> We finally managed to upgrade the production x4500s to Sol 10 10/08
>> (unrelated to this) but with the hope that it would also make "zfs send"
>> usable.
>>
>> Exactly how does "build 105" translate to Solaris 10 10/08?  My current
>> speed test has sent 34Gb in 24 hours, which isn't great. Perhaps the next
>> version of Solaris 10 will have the improvements.
>>
>>
>>
>> Robert Milkowski wrote:
>>>
>>> Hello Jorgen,
>>>
>>> If you look at the list archives you will see that it made a huge
>>> difference for some people including me. Now I'm easily able to
>>> saturate GbE link while zfs send|recv'ing.
>>>
 Since build 105 it should be *MUCH* faster.
>>
>>
>
> --
> Jorgen Lundman       | 
> Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
> Shibuya-ku, Tokyo    | +81 (0)90-5578-8500          (cell)
> Japan                | +81 (0)3 -3375-1767          (home)
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>

Can you give any details about your data set, what you piped zfs
send/receive through (SSH?), hardware/network, etc?
I'm envious of your speeds!

-- 
Brent Jones
br...@servuhome.net
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-21 Thread Jorgen Lundman


To finally close my quest. I tested "zfs send" in osol-b114 version:

received 82.3GB stream in 1195 seconds (70.5MB/sec)

Yeeaahh!

That makes it completely usable! Just need to change our support 
contract to allow us to run b114 and we're set! :)



Thanks,

Lund


Jorgen Lundman wrote:


We finally managed to upgrade the production x4500s to Sol 10 10/08 
(unrelated to this) but with the hope that it would also make "zfs send" 
usable.


Exactly how does "build 105" translate to Solaris 10 10/08?  My current 
speed test has sent 34Gb in 24 hours, which isn't great. Perhaps the 
next version of Solaris 10 will have the improvements.




Robert Milkowski wrote:

Hello Jorgen,

If you look at the list archives you will see that it made a huge
difference for some people including me. Now I'm easily able to
saturate GbE link while zfs send|recv'ing.


Since build 105 it should be *MUCH* faster.





--
Jorgen Lundman   | 
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-04-10 Thread Vladimir Kotal

Jorgen Lundman wrote:


We finally managed to upgrade the production x4500s to Sol 10 10/08 
(unrelated to this) but with the hope that it would also make "zfs send" 
usable.


Exactly how does "build 105" translate to Solaris 10 10/08?  My current 


There is no easy/obvious mapping of Solaris Nevada builds to Solaris 10 
update releases. Solaris Nevada started as a branch of S10 after it was 
released and is the place where new features (RFEs) are developed. For a 
bug fix or RFE to end up in a Solaris 10 update release it needs to meet 
certain criteria. Basically, only those CRs which are found "necessary" 
(and this applies to both bugs and features) are backported to S10uX.



v.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-04-09 Thread Jorgen Lundman


We finally managed to upgrade the production x4500s to Sol 10 10/08 
(unrelated to this) but with the hope that it would also make "zfs send" 
usable.


Exactly how does "build 105" translate to Solaris 10 10/08?  My current 
speed test has sent 34Gb in 24 hours, which isn't great. Perhaps the 
next version of Solaris 10 will have the improvements.




Robert Milkowski wrote:

Hello Jorgen,

If you look at the list archives you will see that it made a huge
difference for some people including me. Now I'm easily able to
saturate GbE link while zfs send|recv'ing.


Since build 105 it should be *MUCH* faster.



--
Jorgen Lundman   | 
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send speed

2007-03-20 Thread Matthew Ahrens

Torrey McMahon wrote:

I'm only doing an initial investigation now so I have no test data at 
this point. The reason I asked, and I should have tacked this on at the 
end of the last email, was a blog entry that stated zfs send was slow


http://www.lethargy.org/~jesus/archives/80-ZFS-send-trickle..html

Looking back through the discuss archives I didn't see anything else 
mentioned but some others mentioned it to me off line as well. It could 
be we all read the same blog entry so I figured I'd ask if anyone had 
seen such behavior recently. Hopefully, I can get a test bed setup 
fairly quickly and see how it works myself.


I'll address some of the points in the above-mentioned blog:

The author notes that it took 9 days to do a full zfs send.  Elsewhere 
they note that they have "about 1TB of information on ZFS", so I'm left 
to guess that their zfs send went at about 1.3MB/s.  Without knowing 
their underlying storage hardware, I couldn't say what a reasonable 
expectation would be, but even a single modern spindle could do more 
sequential reads.  Random I/O is another matter, so the layout of the 
data on disk would play in.  Any other load on the system would also 
impact the time that the 'zfs send' would be expected to take.
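
(Roughly: 1 TB over 9 days is 1,048,576 MB / 777,600 s, or about 1.35
MB/s.)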


That said, I'd still guess that they are right -- 'zfs send' could be a 
lot faster if it issued more i/o in parallel.  Finding the right balance 
of 'zfs send' performance vs. other i/o priority will be tricky, but 
it's something we're going to work on.


Based on the time it took to do a full zfs send, the author says 
"Somehow I think that doing daily incremental backups is out of the 
question."  However, the data does not support this conclusion.  If the 
amount of data changed is small (which the author claims: "very very 
large files ... that have minimal changes to them"), then the 
incremental zfs send will be quite fast.


While I agree that large improvements are possible, the data presented 
does not support the conclusion that zfs send is not an acceptable 
solution for daily incremental backups for this workload.


--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send speed

2007-03-20 Thread Torrey McMahon

Matthew Ahrens wrote:

Torrey McMahon wrote:

Howdy folks.

I've a customer looking to use ZFS in a DR situation. They have a 
large data store where they will be taking snapshots every N minutes 
or so, sending the difference of the snapshot and previous snapshot 
with zfs send -i to a remote host, and in case of DR firing up the 
secondary.


Cool!


I sure hope so. ;-)



However, I've seen a few references to the speed of zfs send being, 
well, a bit slow. Anyone want to comment on the current speed of "zfs 
send"? Any recent changes or issues found in this area?


What bits are you running?  I made some recent improvements (6490104, 
fixed in build 53, targeted for s10u4).  There are still a few issues, 
but by and large, performance should be very good.


Can you describe what problem you're experiencing?  How much data, how 
many files, how big of a stream, what transport, how long it takes, 
are you seeing lots of CPU or disk activity on the sending or 
receiving side when it's slow?


I'm only doing an initial investigation now so I have no test data at 
this point. The reason I asked, and I should have tacked this on at the 
end of the last email, was a blog entry that stated zfs send was slow


http://www.lethargy.org/~jesus/archives/80-ZFS-send-trickle..html

Looking back through the discuss archives I didn't see anything else 
mentioned but some others mentioned it to me off line as well. It could 
be we all read the same blog entry so I figured I'd ask if anyone had 
seen such behavior recently. Hopefully, I can get a test bed setup 
fairly quickly and see how it works myself.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs send speed

2007-03-20 Thread Matthew Ahrens

Torrey McMahon wrote:

Howdy folks.

I've a customer looking to use ZFS in a DR situation. They have a large 
data store where they will be taking snapshots every N minutes or so, 
sending the difference of the snapshot and previous snapshot with zfs 
send -i to a remote host, and in case of DR firing up the secondary.


Cool!

However, I've seen a few references to the speed of zfs send being, 
well, a bit slow. Anyone want to comment on the current speed of "zfs 
send"? Any recent changes or issues found in this area?


What bits are you running?  I made some recent improvements (6490104, 
fixed in build 53, targeted for s10u4).  There are still a few issues, 
but by and large, performance should be very good.


Can you describe what problem you're experiencing?  How much data, how 
many files, how big of a stream, what transport, how long it takes, are 
you seeing lots of CPU or disk activity on the sending or receiving side 
when it's slow?


--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs send speed

2007-03-20 Thread Torrey McMahon

Howdy folks.

I've a customer looking to use ZFS in a DR situation. They have a large 
data store where they will be taking snapshots every N minutes or so, 
sending the difference of the snapshot and previous snapshot with zfs 
send -i to a remote host, and in case of DR firing up the secondary.


However, I've seen a few references to the speed of zfs send being, 
well, a bit slow. Anyone want to comment on the current speed of "zfs 
send"? Any recent changes or issues found in this area?


Thanks.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss