Re: [zfs-discuss] zfs send/receive script

2012-03-09 Thread Cameron Hanover
I've seen similar error messages from a script I've written, as well. Mine does create a lock file and won't run if a `zfs send` is already in progress. My only guess is that the second (or third, or...) filesystem starts sending to the receiving host before the latter has fully finished the
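A lock such as Cameron describes can be sketched in a few lines of shell. This is a minimal outline, not his script: the lock path and the commented replication commands are hypothetical, and mkdir is used because its create-or-fail is atomic, unlike a separate test-then-touch pair.

```shell
#!/usr/bin/env bash
# Hypothetical lock location for a replication cron job.
LOCKDIR=/var/run/zfs-replica.lock

acquire_lock() {
    # mkdir succeeds for exactly one caller; everyone else sees EEXIST.
    if mkdir "$LOCKDIR" 2>/dev/null; then
        # Drop the lock automatically when the script exits.
        trap 'rmdir "$LOCKDIR"' EXIT
        return 0
    fi
    echo "replication already running, exiting" >&2
    return 1
}

# Typical use in the replication script:
# acquire_lock || exit 0
# zfs send ... | ssh mirrorhost zfs receive ...
```

Note that this only serializes the sender; as the post suggests, the receiving host may still be busy finishing a previous stream when the next one arrives.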

Re: [zfs-discuss] zfs send/receive script

2012-03-09 Thread Ian Collins
On 03/10/12 02:48 AM, Cameron Hanover wrote: On Mar 6, 2012, at 8:26 AM, Carsten John wrote: Hello everybody, I set up a script to replicate all zfs filesystems (some 300 user home directories in this case) within a given pool to a mirror machine. The basic idea is to send the snapshots

[zfs-discuss] zfs send/receive script

2012-03-06 Thread Carsten John
Hello everybody, I set up a script to replicate all zfs filesystems (some 300 user home directories in this case) within a given pool to a mirror machine. The basic idea is to send the snapshots incremental if the corresponding snapshot exists on the remote side or send a complete snapshot if
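The incremental-or-full decision Carsten describes could be outlined like this. Everything here is a sketch with hypothetical names (the MIRROR host, the datasets, the remote_has_snap probe); the send function echoes the command it would run rather than executing it, to keep the sketch side-effect free.

```shell
#!/usr/bin/env bash
# Hypothetical mirror host receiving the replicated filesystems.
MIRROR=mirrorhost

remote_has_snap() {
    # Probe the remote side for a snapshot; non-zero means "not there".
    ssh "$MIRROR" zfs list -t snapshot "$1" > /dev/null 2>&1
}

send_dataset() {
    local fs=$1 prev=$2 cur=$3
    if remote_has_snap "$fs@$prev"; then
        # Common snapshot exists remotely: send only the delta.
        echo "zfs send -i $fs@$prev $fs@$cur"
    else
        # No common snapshot: fall back to a full stream.
        echo "zfs send $fs@$cur"
    fi
}
```

In a real script the echoed command would be piped into `ssh "$MIRROR" zfs receive ...`, once per home-directory filesystem.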

Re: [zfs-discuss] zfs send/receive script

2012-03-06 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Carsten John I set up a script to replicate all zfs filesystems (some 300 user home directories in this case) within a given pool to a mirror machine. The basic idea is to send the snapshots

Re: [zfs-discuss] zfs send/receive script

2012-03-06 Thread Paul Kraus
On Tue, Mar 6, 2012 at 10:19 AM, Edward Ned Harvey opensolarisisdeadlongliveopensola...@nedharvey.com wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Carsten John snip cannot receive new filesystem stream: dataset is busy or cannot

Re: [zfs-discuss] zfs send/receive scenario problem w/ auto-snap service

2011-11-07 Thread HUGE | David Stahl
I think altering the amount of copies method would work best for us. The hold feature could also work, but seems like it might be more complicated as there will be a large number of snapshots in between the two we are sending. I am going to try and implement this keep change and see if it does the

Re: [zfs-discuss] zfs send/receive scenario problem w/ auto-snap service

2011-11-05 Thread Jim Klimov
2011-11-05 2:12, HUGE | David Stahl wrote: Our problem is that we need to use the -R to snapshot and send all the child zvols, yet since we have a lot of data (3.5 TB), the hourly snapshots are cleaned on the sending side, and breaks the script as it is running. In recent OpenSolaris and

[zfs-discuss] zfs send/receive scenario problem w/ auto-snap service

2011-11-04 Thread HUGE | David Stahl
Hi, I am having some problems with architecting a zfs snapshot replication scheme that would suit the needs of my company. Presently we do hour/daily/weekly snapshots of our file server. This file system is organized in parent/child/child type zvols. so think *pool/zvol1/zvol2/zvol3,

Re: [zfs-discuss] zfs send/receive and ashift

2011-07-27 Thread Fred Liu
The only way you will know if decrypting and decompressing causes a problem in that case is if you try it on your systems. I seriously doubt it will be unless the system is already heavily CPU bound and your backup window is already very tight. That is true. My understanding of the

Re: [zfs-discuss] zfs send/receive and ashift

2011-07-27 Thread Darren J Moffat
On 07/27/11 10:24, Fred Liu wrote: The alternative is to have the node in your NDMP network that does the writing to the tape to do the compression and encryption of the data stream before putting it on the tape. I see. T1C is a monster to have if possible ;-). And doing the job on NDMP

Re: [zfs-discuss] zfs send/receive and ashift

2011-07-27 Thread Pawel Jakub Dawidek
On Tue, Jul 26, 2011 at 03:28:10AM -0700, Fred Liu wrote: The ZFS Send stream is at the DMU layer at this layer the data is uncompress and decrypted - ie exactly how the application wants it. Even the data compressed/encrypted by ZFS will be decrypted? If it is true, will it be

Re: [zfs-discuss] zfs send/receive and ashift

2011-07-27 Thread Darren J Moffat
On 07/27/11 12:51, Pawel Jakub Dawidek wrote: On Tue, Jul 26, 2011 at 03:28:10AM -0700, Fred Liu wrote: The ZFS Send stream is at the DMU layer at this layer the data is uncompress and decrypted - ie exactly how the application wants it. Even the data compressed/encrypted by ZFS will be

[zfs-discuss] zfs send/receive and ashift

2011-07-26 Thread Andrew Gabriel
Does anyone know if it's OK to do zfs send/receive between zpools with different ashift values? -- Andrew Gabriel ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] zfs send/receive and ashift

2011-07-26 Thread Darren J Moffat
On 07/26/11 10:14, Andrew Gabriel wrote: Does anyone know if it's OK to do zfs send/receive between zpools with different ashift values? The ZFS Send stream is at the DMU layer at this layer the data is uncompress and decrypted - ie exactly how the application wants it. The ashift is a vdev

Re: [zfs-discuss] zfs send/receive and ashift

2011-07-26 Thread Fred Liu
The ZFS Send stream is at the DMU layer at this layer the data is uncompress and decrypted - ie exactly how the application wants it. Even the data compressed/encrypted by ZFS will be decrypted? If it is true, will it be any CPU overhead? And ZFS send/receive tunneled by ssh becomes the

Re: [zfs-discuss] zfs send/receive and ashift

2011-07-26 Thread Darren J Moffat
On 07/26/11 11:28, Fred Liu wrote: The ZFS Send stream is at the DMU layer at this layer the data is uncompress and decrypted - ie exactly how the application wants it. Even the data compressed/encrypted by ZFS will be decrypted? Yes, which is exactly what I said. All data as seen by the

Re: [zfs-discuss] zfs send/receive and ashift

2011-07-26 Thread Fred Liu
Yes, which is exactly what I said. All data as seen by the DMU is decrypted and decompressed, the DMU layer is what the ZPL layer is built on top of so it has to be that way. Understand. Thank you. ;-) There is always some overhead for doing a decryption and decompression, the

Re: [zfs-discuss] zfs send/receive and ashift

2011-07-26 Thread Frank Van Damme
On 26-07-11 12:56, Fred Liu wrote: Any alternatives, if you don't mind? ;-) vpn's, openssl piped over netcat, a password-protected zip file,... ;) ssh would be the most practical, probably. -- No part of this copyright message may be reproduced, read or seen, dead or alive or by any means,
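The openssl-over-netcat option mentioned here can be sketched as a pair of functions, one per host. Hosts, port, and key path are hypothetical; the idea is that netcat moves the stream without ssh's cipher and channel overhead, with openssl supplying the encryption.

```shell
#!/usr/bin/env bash
# Receiver side: listen, decrypt, feed the stream to zfs receive.
# (Traditional netcat wants "nc -l -p 9090" instead of "nc -l 9090".)
nc_recv() {
    nc -l 9090 \
        | openssl enc -d -aes-256-cbc -pass file:/etc/replica.key \
        | zfs receive -d backup
}

# Sender side: encrypt the send stream and push it over the wire.
nc_send() {
    zfs send tank/data@snap \
        | openssl enc -aes-256-cbc -pass file:/etc/replica.key \
        | nc backuphost 9090
}
```

The receiver must be started first; on a trusted LAN the openssl stage can be dropped entirely for raw speed.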

Re: [zfs-discuss] zfs send/receive and ashift

2011-07-26 Thread Darren J Moffat
On 07/26/11 11:56, Fred Liu wrote: It is up to how big the delta is. It does matter if the data backup can not be finished within the required backup window when people use zfs send/receive to do the mass data backup. The only way you will know if decrypting and decompressing causes a

Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-07 Thread Nikola M.
On 04/ 6/11 07:14 PM, Brandon High wrote: On Tue, Apr 5, 2011 at 12:38 PM, Joe Auty j...@netmusician.org wrote: How about getting a little more crazy... What if this entire server temporarily hosting this data was a VM guest running ZFS? I don't

Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-07 Thread Joe Auty
Thanks for all of this info guys, I'm still digesting it... My source computer is running Solaris 10 ZFS version 15. Does this mean that I'd be asking for trouble doing a zfs send back to this machine from any other ZFS machine running a version 15? I just want to

Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-07 Thread Brandon High
On Thu, Apr 7, 2011 at 4:01 PM, Joe Auty j...@netmusician.org wrote: My source computer is running Solaris 10 ZFS version 15. Does this mean that I'd be asking for trouble doing a zfs send back to this machine from any other ZFS machine running a version 15? I just want to make sure I
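The compatibility worry here reduces to one rule: a stream replays on a destination whose ZFS version is the same or newer, never older. A pre-flight check along these lines could be scripted; in practice the numbers would come from `zfs get -H -o value version <dataset>` on each host, and the helper below just compares them (names and values hypothetical).

```shell
#!/usr/bin/env bash
# Compare the source dataset's zfs version against the highest
# version the destination host supports.
check_versions() {
    local src_ver=$1 dst_max=$2
    if [ "$src_ver" -le "$dst_max" ]; then
        echo "compatible"
    else
        echo "destination too old: dataset v$src_ver > supported v$dst_max"
        return 1
    fi
}
```

So sending from a v15 Solaris 10 box to a newer machine is fine, but a dataset created or upgraded past v15 on the newer machine cannot be received back.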

[zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Joe Auty
Hello, I'm debating an OS change and also thinking about my options for data migration to my next server, whether it is on new or the same hardware. Migrating to a new machine I understand is a simple matter of ZFS

Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread David Dyer-Bennet
On Tue, April 5, 2011 14:38, Joe Auty wrote: Migrating to a new machine I understand is a simple matter of ZFS send/receive, but reformatting the existing drives to host my existing data is an area I'd like to learn a little more about. In the past I've asked about this and was told that it

Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread David Magda
On Wed, April 6, 2011 10:51, David Dyer-Bennet wrote: I'm a big fan of rsync, in cronjobs or wherever. What it won't do is properly preserve ZFS ACLs, and ZFS snapshots, though. I moved from using rsync to using zfs send/receive for my backup scheme at home, and had considerable trouble

Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Paul Kraus
On Wed, Apr 6, 2011 at 10:51 AM, David Dyer-Bennet d...@dd-b.net wrote: On Tue, April 5, 2011 14:38, Joe Auty wrote: Also, more generally, is ZFS send/receive mature enough that when you do data migrations you don't stress about this? Piece of cake? The difficulty of this whole undertaking

Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Brandon High
On Tue, Apr 5, 2011 at 12:38 PM, Joe Auty j...@netmusician.org wrote: How about getting a little more crazy... What if this entire server temporarily hosting this data was a VM guest running ZFS? I don't foresee this being a problem either, but with so The only thing to watch out for is to

Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Paul Kraus
On Wed, Apr 6, 2011 at 1:14 PM, Brandon High bh...@freaks.com wrote: The only thing to watch out for is to make sure that the receiving datasets aren't a higher version than the zfs version that you'll be using on the replacement server. Because you can't downgrade a dataset, using snv_151a

Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Lori Alt
On 04/ 6/11 11:42 AM, Paul Kraus wrote: On Wed, Apr 6, 2011 at 1:14 PM, Brandon High bh...@freaks.com wrote: The only thing to watch out for is to make sure that the receiving datasets aren't a higher version than the zfs version that you'll be using on the replacement server. Because you

Re: [zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-06 Thread Brandon High
On Wed, Apr 6, 2011 at 10:42 AM, Paul Kraus pk1...@gmail.com wrote: I thought I saw that with zpool 10 (or was it 15) the zfs send format had been committed and you *could* send/recv between different versions of zpool/zfs. From Solaris 10U9 with zpool 22 manpage for zfs: There is still a

[zfs-discuss] ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?

2011-04-05 Thread Joe Auty
Hello, I'm debating an OS change and also thinking about my options for data migration to my next server, whether it is on new or the same hardware. Migrating to a new machine I understand is a simple matter of ZFS

Re: [zfs-discuss] ZFS send/receive while write is enabled on receive side?

2010-12-15 Thread Richard Elling
On Dec 9, 2010, at 3:31 PM, Moazam Raja wrote: Hi all, from much of the documentation I've seen, the advice is to set readonly=on on volumes on the receiving side during send/receive operations. Is this still a requirement? I've been trying the send/receive while NOT setting the receiver to

[zfs-discuss] ZFS send/receive while write is enabled on receive side?

2010-12-09 Thread Moazam Raja
Hi all, from much of the documentation I've seen, the advice is to set readonly=on on volumes on the receiving side during send/receive operations. Is this still a requirement? I've been trying the send/receive while NOT setting the receiver to readonly and haven't seen any problems even though
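The usual belt-and-braces setup for the receiving side can be sketched as below (host, pool, and dataset names hypothetical). readonly=on keeps local writes from diverging the target between receives, and -F makes the receive roll back to the most recent common snapshot first, which covers accidental changes either way.

```shell
#!/usr/bin/env bash
# One replication step against a read-only target (names hypothetical).
prepare_and_receive() {
    # Set once, before the first receive; the property persists.
    zfs set readonly=on backup/homes
    # -F rolls the target back to the last common snapshot, then
    # applies the incremental stream.
    ssh srchost zfs send -i tank/homes@prev tank/homes@cur \
        | zfs receive -F backup/homes
}
```

As the thread notes, receives may work without readonly=on, but any local modification to the target makes a later incremental receive fail unless -F discards it.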

Re: [zfs-discuss] ZFS send/receive while write is enabled on receive side?

2010-12-09 Thread Ian Collins
On 12/10/10 12:31 PM, Moazam Raja wrote: Hi all, from much of the documentation I've seen, the advice is to set readonly=on on volumes on the receiving side during send/receive operations. Is this still a requirement? I've been trying the send/receive while NOT setting the receiver to readonly

Re: [zfs-discuss] ZFS send/receive while write is enabled on receive side?

2010-12-09 Thread Matthew Ahrens
On Thu, Dec 9, 2010 at 5:31 PM, Ian Collins i...@ianshome.com wrote:  On 12/10/10 12:31 PM, Moazam Raja wrote: So, is it OK to send/recv while having the receive volume write enabled? A write can fail if a filesystem is unmounted for update. True, but ZFS recv will not normally unmount a

Re: [zfs-discuss] zfs send receive problem/questions

2010-12-03 Thread Matthew Ahrens
On Wed, Dec 1, 2010 at 10:30 AM, Don Jackson don.jack...@gmail.com wrote: # zfs send -R naspool/open...@xfer-11292010 | zfs receive -Fv  npool/openbsd receiving full stream of naspool/open...@xfer-11292010 into npool/open...@xfer-11292010 received 23.5GB stream in 883 seconds (27.3MB/sec)

Re: [zfs-discuss] zfs send receive problem/questions

2010-12-03 Thread Don Jackson
Try using the -d option to zfs receive.  The ability to do zfs send -R ... | zfs receive [without -d] was added relatively recently, and you may be encountering a bug that is specific to receiving a send of a whole pool. I just tried this, didn't work, new error: # zfs send -R

Re: [zfs-discuss] zfs send receive problem/questions

2010-12-03 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Don Jackson # zfs send -R naspool/open...@xfer-11292010 | zfs recv -d npool/openbsd cannot receive new filesystem stream: out of space The destination pool is much larger (by

Re: [zfs-discuss] zfs send receive problem/questions

2010-12-02 Thread Cindy Swearingen
Hi Don, I'm no snapshot expert but I think you will have to remove the previous receiving side snapshots, at least. I created a file system hierarchy that includes a lower-level snapshot, created a recursive snapshot of that hierarchy and sent it over to a backup pool. Then, did the same steps

[zfs-discuss] zfs send receive problem/questions

2010-12-01 Thread Don Jackson
Hello, I am attempting to move a bunch of zfs filesystems from one pool to another. Mostly this is working fine, but one collection of file systems is causing me problems, and repeated re-reading of man zfs and the ZFS Administrators Guide is not helping. I would really appreciate some

Re: [zfs-discuss] zfs send receive problem/questions

2010-12-01 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Don Jackson # zfs send -R naspool/open...@xfer-11292010 | zfs receive -Fv npool/openbsd receiving full stream of naspool/open...@xfer-11292010 into npool/open...@xfer-11292010 received

Re: [zfs-discuss] zfs send receive problem/questions

2010-12-01 Thread Don Jackson
Here is some more info on my system: This machine is running Solaris 10 U9, with all the patches as of 11/10/2010. The source zpool I am attempting to transfer from was originally created on a older OpenSolaris (specifically Nevada) release, I think it was 111. I did a zpool export on that

Re: [zfs-discuss] zfs send/receive?

2010-11-09 Thread Lapo Luchini
Casper Dik wrote on 2010-09-26: A incremental backup: zfs snapshot -r exp...@backup-2010-07-13 zfs send -R -I exp...@backup-2010-07-12 exp...@backup-2010-07-13 | zfs receive -v -u -d -F portable/export Unfortunately zfs receive -F does not skip existing snapshots
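The cycle Casper outlines, wrapped in a function for clarity (the dataset names below are hypothetical stand-ins for the obfuscated ones above): -r snapshots the whole tree, -R -I sends every intermediate snapshot of every child since the previous backup, -u leaves the received filesystems unmounted, and -F forces a rollback on the target first.

```shell
#!/usr/bin/env bash
# One recursive incremental backup cycle (dataset names hypothetical).
backup_cycle() {
    local prev=$1 cur=$2
    zfs snapshot -r "export@$cur"
    zfs send -R -I "export@$prev" "export@$cur" \
        | zfs receive -v -u -d -F portable/export
}

# e.g. backup_cycle backup-2010-07-12 backup-2010-07-13
```

Lapo's complaint still stands, though: if some of the intermediate snapshots already exist on the target, receive -F does not skip them and the stream aborts.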

Re: [zfs-discuss] ZFS send/receive and locking

2010-11-04 Thread Byte Internet
The problem is not with how the replication is done. The locking happens during the basic zfs operations. We noticed: on server2 (which is quite busy serving maildirs) we did zfs create tank/newfs rsync 4GB from someotherserver to /tank/newfs zfs destroy tank/newfs Destroying newfs took more

Re: [zfs-discuss] ZFS send/receive and locking

2010-11-03 Thread Chris Mosetick
Sorry i'm not able to provide more insight but I thought some of the concepts in this article might help you, as well as Mike's replication script, also available on this page: http://blog.laspina.ca/ubiquitous/provisioning_disaster_recovery_with_zfs You also might want to look at InfraGeeks

[zfs-discuss] ZFS send/receive and locking

2010-11-02 Thread Byte Internet
Hello, We are trying to setup a pair of ZFS file servers, each backing-up data from another. The simplified setup is as follows: server1 tank/prod/web tank/backup/mail server2 tank/prod/mail tank/backup/web server1:tank/prod/web is a test setup with 10GB of data for 60 websites.

[zfs-discuss] zfs send/receive?

2010-09-25 Thread Roy Sigurd Karlsbakk
hi all I'm using a custom snapshot scheme which snapshots every hour, day, week and month, rotating 24h, 7d, 4w and so on. What would be the best way to zfs send/receive these things? I'm a little confused about how this works for delta updates... Vennlige hilsener / Best regards roy --

[zfs-discuss] zfs send, receive, compress, dedup

2010-07-08 Thread Edward Ned Harvey
Suppose I have a fileserver, which may be zpool 10, 14, or 15. No compression, no dedup. Suppose I have a backupserver. I want to zfs send from the fileserver to the backupserver, and I want the backupserver to receive and store compressed and/or dedup'd. The backupserver can be a more

Re: [zfs-discuss] zfs send, receive, compress, dedup

2010-07-08 Thread Ian Collins
On 07/ 9/10 09:21 AM, Edward Ned Harvey wrote: Suppose I have a fileserver, which may be zpool 10, 14, or 15. No compression, no dedup. Suppose I have a backupserver. I want to zfs send from the fileserver to the backupserver, and I want the backupserver to receive and store compressed

Re: [zfs-discuss] zfs send, receive, compress, dedup

2010-07-08 Thread Brandon High
On Thu, Jul 8, 2010 at 2:21 PM, Edward Ned Harvey solar...@nedharvey.com wrote: Can I zfs send from the fileserver to the backupserver and expect it to be compressed and/or dedup'd upon receive? Does zfs send preserve the properties of the originating filesystem? Will the zfs receive clobber

Re: [zfs-discuss] zfs send, receive, compress, dedup

2010-07-08 Thread Ian Collins
On 07/ 9/10 10:59 AM, Brandon High wrote: Personally, I've started organizing datasets in a hierarchy, setting the properties that I want for descendant datasets at a level where it will apply to everything that I want to get it. So if you have your source at tank/export/foo and your
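Brandon's hierarchy approach can be shown in a few commands (names hypothetical). Because a plain send stream carries data at the DMU layer, the receiving side writes blocks with whatever properties the target dataset inherits, so setting them once on the backup parent is enough.

```shell
#!/usr/bin/env bash
# Set the desired storage properties once, on the backup parent;
# datasets created under it by zfs receive inherit them.
prepare_backup_parent() {
    zfs set compression=on backup
    zfs set dedup=on backup   # only on pool versions that support dedup
}

# Then a plain (non -p, non -R) send picks up the inherited settings:
# zfs send tank/export/foo@snap | zfs receive -d backup
```

Sending with -R or -p instead would carry the source's properties along and override the inherited ones, which is exactly what this scheme avoids.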

Re: [zfs-discuss] zfs send/receive as backup tool

2010-06-09 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Toyama Shunji Certainly I feel it is difficult, but is it logically impossible to write a filter program to do that, with reasonable memory use? Good question. I don't know the answer. If

Re: [zfs-discuss] zfs send/receive as backup tool

2010-06-09 Thread Khyron
My inclination, based on what I've read and heard from others, is to say no. But again, the best way to find out is to write the code. :\ On Wed, Jun 9, 2010 at 11:45, Edward Ned Harvey solar...@nedharvey.com wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-

[zfs-discuss] zfs send/receive as backup tool

2010-06-07 Thread Toyama Shunji
Can I extract one or more specific files from zfs snapshot stream? Without restoring full file system. Like ufs based 'restore' tool. -- This message posted from opensolaris.org

Re: [zfs-discuss] zfs send/receive as backup tool

2010-06-07 Thread David Magda
On Mon, June 7, 2010 10:34, Toyama Shunji wrote: Can I extract one or more specific files from zfs snapshot stream? Without restoring full file system. Like ufs based 'restore' tool. No. (Check the archives of zfs-discuss for more details. Send/recv has been discussed at length many times.)

Re: [zfs-discuss] zfs send/receive as backup tool

2010-06-07 Thread Cindy Swearingen
Hi Toyama, You cannot restore an individual file from a snapshot stream like the ufsrestore command. If you have snapshots stored on your system, you might be able to access them from the .zfs/snapshot directory. See below. Thanks, Cindy % rm reallyimportantfile % cd .zfs/snapshot % cd
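Cindy's recovery path generalizes to a one-liner: every mounted ZFS filesystem exposes its snapshots read-only under .zfs/snapshot, so a single file comes back with an ordinary copy. A sketch with hypothetical paths and snapshot names:

```shell
#!/usr/bin/env bash
# Copy one file out of a named snapshot back into the live filesystem.
restore_file() {
    local fs_mnt=$1 snap=$2 file=$3
    cp "$fs_mnt/.zfs/snapshot/$snap/$file" "$fs_mnt/$file"
}

# e.g. restore_file /export/home today reallyimportantfile
```

This only works when the snapshot still exists on the system; it is no substitute for extracting a file out of a stored send stream, which (as the thread concludes) is not possible.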

Re: [zfs-discuss] zfs send/receive as backup tool

2010-06-07 Thread Toyama Shunji
Thank you David, Thank you Cindy, Certainly I feel it is difficult, but is it logically impossible to write a filter program to do that, with reasonable memory use? -- This message posted from opensolaris.org

Re: [zfs-discuss] zfs send/receive as backup tool

2010-06-07 Thread Khyron
To answer the question you asked here...the answer is no. There have been MANY discussions of this in the past. Here's the long thread I started back in May about backup strategies for ZFS pools and file systems: http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/038678.html But

[zfs-discuss] ZFS Send/Receive Question

2010-04-12 Thread Robert Loper
I am trying to duplicate a filesystem from one zpool to another zpool. I don't care so much about snapshots on the destination side...I am more trying to duplicate how RSYNC would copy a filesystem, and then only copy incrementals from the source side to the destination side in subsequent runs
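An rsync-like "latest state only" replication can be approximated by keeping a rolling pair of snapshots, sending the delta, and pruning, sketched below with hypothetical host and dataset names. The older snapshot is destroyed on both ends so only the newest remains.

```shell
#!/usr/bin/env bash
# One incremental pass: snapshot, send the delta, prune the previous
# snapshot on both sides (names hypothetical).
rotate_and_send() {
    local prev=$1 cur=$2
    zfs snapshot "tank/fs@$cur"
    zfs send -i "tank/fs@$prev" "tank/fs@$cur" \
        | ssh desthost zfs receive destpool/fs
    zfs destroy "tank/fs@$prev"
    ssh desthost zfs destroy "destpool/fs@$prev"
}
```

The very first run has no previous snapshot, so it needs a full send (`zfs send tank/fs@first | ssh desthost zfs receive destpool/fs`) to seed the destination.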

Re: [zfs-discuss] zfs send/receive - actual performance

2010-04-01 Thread tomwaters
If you see the workload on the wire go through regular patterns of fast/slow response then there are some additional tricks that can be applied to increase the overall throughput and smooth the jaggies. But that is fodder for another post... Can you pls. elaborate on what can be done here as I

Re: [zfs-discuss] zfs send/receive - actual performance

2010-04-01 Thread Richard Elling
On Apr 1, 2010, at 12:43 AM, tomwaters wrote: If you see the workload on the wire go through regular patterns of fast/slow response then there are some additional tricks that can be applied to increase the overall throughput and smooth the jaggies. But that is fodder for another post...

Re: [zfs-discuss] zfs send/receive - actual performance

2010-03-26 Thread Erik Ableson
On 25 March 2010, at 22:00, Bruno Sousa bso...@epinfante.com wrote: Hi, Indeed the 3 disks per vdev (raidz2) seems a bad idea...but it's the system i have now. Regarding the performance...let's assume that a bonnie++ benchmark could go to 200 MB/s in. The possibility of getting the same

Re: [zfs-discuss] zfs send/receive - actual performance

2010-03-26 Thread Bruno Sousa
Hi, I think that in this case the cpu is not the bottleneck, since i'm not using ssh. However my 1Gb network link probably is the bottleneck. Bruno On 26-3-2010 9:25, Erik Ableson wrote: On 25 March 2010, at 22:00, Bruno Sousa bso...@epinfante.com wrote: Hi, Indeed the 3 disks per vdev

Re: [zfs-discuss] zfs send/receive - actual performance

2010-03-26 Thread Bruno Sousa
Hi, The jumbo-frames in my case give me a boost of around 2 MB/s, so it's not that much. Now i will play with link aggregation and see how it goes, and of course i'm counting that incremental replication will be slower...but since the amount of data would be much less probably it will still

Re: [zfs-discuss] zfs send/receive - actual performance

2010-03-26 Thread Richard Elling
On Mar 26, 2010, at 2:34 AM, Bruno Sousa wrote: Hi, The jumbo-frames in my case give me a boost of around 2 MB/s, so it's not that much. That is about right. IIRC, the theoretical max is about 4% improvement, for MTU of 8KB. Now i will play with link aggregation and see how it goes,

[zfs-discuss] zfs send/receive - actual performance

2010-03-25 Thread Bruno Sousa
Hi all, The more readings i do about ZFS, and experiments the more i like this stack of technologies. Since we all like to see real figures in real environments , i might as well share some of my numbers .. The replication has been achieved with the zfs send / zfs receive but piped with mbuffer
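Bruno's mbuffer arrangement looks roughly like this (hosts, port, and sizes hypothetical). mbuffer's value is smoothing: zfs send is bursty, and a large memory buffer on each end keeps the disks and the network busy at the same time, which ssh's small internal buffers cannot do.

```shell
#!/usr/bin/env bash
# Receiver: listen on a TCP port, buffer, feed zfs receive.
mb_recv() {
    mbuffer -I 9090 -s 128k -m 1G | zfs receive -d tank
}

# Sender: buffer the send stream and push it to the receiver.
mb_send() {
    zfs send tank/vol@snap | mbuffer -s 128k -m 1G -O backuphost:9090
}
```

Start the receiver first; both ends print transfer-rate statistics, which is handy for spotting whether disk or network is the bottleneck.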

Re: [zfs-discuss] zfs send/receive - actual performance

2010-03-25 Thread Bruno Sousa
Thanks for the tip..btw is there any advantage with jbod vs simple volumes? Bruno On 25-3-2010 21:08, Richard Jahnel wrote: BTW, if you download the solaris drivers for the 52445 from adaptec, you can use jbod instead of simple volumes. smime.p7s Description: S/MIME Cryptographic

Re: [zfs-discuss] zfs send/receive - actual performance

2010-03-25 Thread Ian Collins
On 03/26/10 08:47 AM, Bruno Sousa wrote: Hi all, The more readings i do about ZFS, and experiments the more i like this stack of technologies. Since we all like to see real figures in real environments , i might as well share some of my numbers .. The replication has been achieved with the

Re: [zfs-discuss] zfs send/receive - actual performance

2010-03-25 Thread Bruno Sousa
Hi, Indeed the 3 disks per vdev (raidz2) seems a bad idea...but it's the system i have now. Regarding the performance...let's assume that a bonnie++ benchmark could go to 200 MB/s in. The possibility of getting the same values (or near) in a zfs send / zfs receive is just a matter of putting ,

Re: [zfs-discuss] zfs send/receive - actual performance

2010-03-25 Thread Ian Collins
On 03/26/10 10:00 AM, Bruno Sousa wrote: [Boy top-posting sure mucks up threads!] Hi, Indeed the 3 disks per vdev (raidz2) seems a bad idea...but it's the system i have now. Regarding the performance...let's assume that a bonnie++ benchmark could go to 200 MB/s in. The possibility of

[zfs-discuss] zfs send/receive and file system properties

2010-03-22 Thread Len Zaifman
I am trying to coordinate properties and data between 2 file servers. on file server 1 I have: zfs get all zfs52/export/os/sles10sp2 NAME PROPERTY VALUE SOURCE zfs52/export/os/sles10sp2 type filesystem

Re: [zfs-discuss] zfs send/receive : panic and reboot

2010-02-16 Thread Lori Alt
Hi Bruno, I've tried to reproduce this panic you are seeing. However, I had difficulty following your procedure. See below: On 02/08/10 15:37, Bruno Damour wrote: On 02/ 8/10 06:38 PM, Lori Alt wrote: Can you please send a complete list of the actions taken: The commands you used to

Re: [zfs-discuss] zfs send/receive : panic and reboot

2010-02-09 Thread Bruno Damour
On 02/ 8/10 06:38 PM, Lori Alt wrote: Can you please send a complete list of the actions taken: The commands you used to create the send stream, the commands used to receive the stream. Also the output of `zfs list -t all` on both the sending and receiving sides. If you were able to

Re: [zfs-discuss] zfs send/receive : panic and reboot

2010-02-09 Thread Andrey Kuzmin
Just an observation: panic occurs in avl_add when called from find_ds_by_guid that tries to add existing snapshot id to the avl tree (http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/dmu_send.c#find_ds_by_guid). HTH, Andrey On Tue, Feb 9, 2010 at 1:37 AM, Bruno

[zfs-discuss] zfs send/receive : panic and reboot

2010-02-08 Thread Bruno Damour
copied from opensolaris-dicuss as this probably belongs here. I kept on trying to migrate my pool with children (see previous threads) and had the (bad) idea to try the -d option on the receive part. The system reboots immediately. Here is the log in /var/adm/messages Feb 8 16:07:09 amber

Re: [zfs-discuss] zfs send/receive : panic and reboot

2010-02-08 Thread Lori Alt
Can you please send a complete list of the actions taken: The commands you used to create the send stream, the commands used to receive the stream. Also the output of `zfs list -t all` on both the sending and receiving sides. If you were able to collect a core dump (it should be in

Re: [zfs-discuss] zfs send/receive : panic and reboot

2010-02-08 Thread Victor Latushkin
Lori Alt wrote: Can you please send a complete list of the actions taken: The commands you used to create the send stream, the commands used to receive the stream. Also the output of `zfs list -t all` on both the sending and receiving sides. If you were able to collect a core dump (it

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-25 Thread Robert Milkowski
On 21/01/2010 11:55, Julian Regel wrote: Until you try to pick one up and put it in a fire safe! Then you backup to tape from x4540 whatever data you need. In case of enterprise products you save on licensing here as you need a one client license per x4540 but in fact can backup data from

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-25 Thread Greg
uep, This solution seems like the best and most efficient way of handling large filesystems. My biggest question however is, when backing this up to tape, can it be split across several tapes? I will be using bacula to back this up. Will i need to tar or star this filesystem before writing it

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-22 Thread Mike Gerdts
On Thu, Jan 21, 2010 at 11:28 AM, Richard Elling richard.ell...@gmail.com wrote: On Jan 21, 2010, at 3:55 AM, Julian Regel wrote: Until you try to pick one up and put it in a fire safe! Then you backup to tape from x4540 whatever data you need. In case of enterprise products you save on

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-22 Thread A Darren Dunham
On Wed, Jan 20, 2010 at 08:11:27AM +1300, Ian Collins wrote: True, but I wonder how viable its future is. One of my clients requires 17 LTO-4 tapes for a full backup, which cost more and take up more space than the equivalent in removable hard drives. What kind of removable hard drives are

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-22 Thread A Darren Dunham
On Thu, Jan 21, 2010 at 12:38:56AM +0100, Ragnar Sundblad wrote: On 21 Jan 2010, at 00.20, Al Hopper wrote: I remember from about 5 years ago (before LTO-4 days) that streaming tape drives would go to great lengths to ensure that the drive kept streaming - because it took so much time to

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-22 Thread Ian Collins
A Darren Dunham wrote: On Wed, Jan 20, 2010 at 08:11:27AM +1300, Ian Collins wrote: True, but I wonder how viable its future is. One of my clients requires 17 LTO-4 tapes for a full backup, which cost more and take up more space than the equivalent in removable hard drives. What kind

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-21 Thread Robert Milkowski
On 20/01/2010 15:45, David Dyer-Bennet wrote: On Wed, January 20, 2010 09:23, Robert Milkowski wrote: Now you rsync all the data from your clients to a dedicated filesystem per client, then create a snapshot. Is there an rsync out there that can reliably replicate all file

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-21 Thread Robert Milkowski
On 20/01/2010 19:20, Ian Collins wrote: Julian Regel wrote: It is actually not that easy. Compare a cost of 2x x4540 with 1TB disks to equivalent solution on LTO. Each x4540 could be configured as: 4x 11 disks in raidz-2 + 2x hot spare + 2x OS disks. The four raidz2 group form a single

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-21 Thread Ian Collins
Robert Milkowski wrote: On 20/01/2010 19:20, Ian Collins wrote: Julian Regel wrote: It is actually not that easy. Compare a cost of 2x x4540 with 1TB disks to equivalent solution on LTO. Each x4540 could be configured as: 4x 11 disks in raidz-2 + 2x hot spare + 2x OS disks. The four

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-21 Thread Andrew Gabriel
Robert Milkowski wrote: I think one should actually compare whole solutions - including servers, fc infrastructure, tape drives, robots, software costs, rack space, ... Servers like x4540 are ideal for zfs+rsync backup solution - very compact, good $/GB ratio, enough CPU power for its

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-21 Thread Robert Milkowski
On 21/01/2010 09:07, Ian Collins wrote: Robert Milkowski wrote: On 20/01/2010 19:20, Ian Collins wrote: Julian Regel wrote: It is actually not that easy. Compare a cost of 2x x4540 with 1TB disks to equivalent solution on LTO. Each x4540 could be configured as: 4x 11 disks in raidz-2 + 2x

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-21 Thread Julian Regel
Until you try to pick one up and put it in a fire safe! Then you backup to tape from x4540 whatever data you need. In case of enterprise products you save on licensing here as you need a one client license per x4540 but in fact can backup data from many clients which are there. Which brings

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-21 Thread Richard Elling
On Jan 21, 2010, at 3:55 AM, Julian Regel wrote: Until you try to pick one up and put it in a fire safe! Then you backup to tape from x4540 whatever data you need. In case of enterprise products you save on licensing here as you need a one client license per x4540 but in fact can backup

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-21 Thread Ian Collins
Julian Regel wrote: Until you try to pick one up and put it in a fire safe! Then you backup to tape from x4540 whatever data you need. In case of enterprise products you save on licensing here as you need a one client license per x4540 but in fact can backup data from many clients which are

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Ian Collins
Allen Eastwood wrote: On Jan 19, 2010, at 22:54 , Ian Collins wrote: Allen Eastwood wrote: On Jan 19, 2010, at 18:48 , Richard Elling wrote: Many people use send/recv or AVS for disaster recovery on the inexpensive side. Obviously, enterprise backup systems also provide DR

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Ragnar Sundblad
On 19 jan 2010, at 20.11, Ian Collins wrote: Julian Regel wrote: Based on what I've seen in other comments, you might be right. Unfortunately, I don't feel comfortable backing up ZFS filesystems because the tools aren't there to do it (built into the operating system or using

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Joerg Schilling
Richard Elling richard.ell...@gmail.com wrote: ufsdump/restore was perfect in that regard. The lack of equivalent functionality is a big problem for the situations where this functionality is a business requirement. How quickly we forget ufsdump's limitations :-). For example, it

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Joerg Schilling
Ian Collins i...@ianshome.com wrote: The correct way to archive ACLs would be to put them into extended POSIX tar attributes as star does. See http://cdrecord.berlios.de/private/man/star/star.4.html for a format documentation or have a look at ftp://ftp.berlios.de/pub/star/alpha,
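As a concrete illustration of the point above, archiving a tree with star so that ACLs travel in extended POSIX tar attributes might look like this. It is a sketch: the option names are taken from the star(1)/star(4) documentation linked above, the paths are hypothetical, and `STAR_CMD` defaults to `echo` so the example dry-runs.

```shell
#!/bin/sh
# Sketch: star archiving a home directory with ACLs stored in extended
# POSIX tar attributes. Paths are hypothetical; STAR defaults to echo,
# so nothing is written unless STAR_CMD points at the real binary.
STAR="${STAR_CMD:-echo star}"

archive_with_acls() {
    # -acl records access control lists; H=exustar selects the extended
    # POSIX.1-2001 header format described in star(4)
    $STAR -c -acl H=exustar -f /backup/home.tar /export/home
}

archive_with_acls
```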

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Joerg Schilling
Edward Ned Harvey sola...@nedharvey.com wrote: Star implements this in a very effective way (by using libfind) that is even faster than the find(1) implementation from Sun. Even if I just run find(1) over my filesystem, it will run for 7 hours. But zfs can create my whole incremental snapshot in a

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Julian Regel
While I can appreciate that ZFS snapshots are very useful in being able to recover files that users might have deleted, they do not do much to help when the entire disk array experiences a crash/corruption or catches fire. Backing up to a second array helps if a) the array is off-site and for
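One common way to get data onto that second array, as the thread discusses, is incremental zfs send/receive over ssh. The sketch below uses hypothetical dataset, host and snapshot names; `RUN_CMD` defaults to `echo` so the pipelines are printed instead of executed (set `RUN_CMD='sh -c'` to run them for real).

```shell
#!/bin/sh
# Sketch: replicating snapshots to a second (ideally off-site) array.
# Dataset/host/snapshot names are assumptions; RUN defaults to echo,
# so this is a dry run that prints the pipelines it would execute.
RUN="${RUN_CMD:-echo}"
SRC="tank/home"; DST="backup/home"; REMOTE="mirrorhost"

replicate() {
    prev="$1"; curr="$2"
    if [ -n "$prev" ]; then
        # incremental: send only the blocks changed between two snapshots
        $RUN "zfs send -i ${SRC}@${prev} ${SRC}@${curr} | ssh ${REMOTE} zfs receive -F ${DST}"
    else
        # first run: a full stream seeds the remote dataset
        $RUN "zfs send ${SRC}@${curr} | ssh ${REMOTE} zfs receive ${DST}"
    fi
}

replicate ""      daily-1   # initial full send
replicate daily-1 daily-2   # subsequent incremental
```

This addresses array loss only if the receiving host really is off-site; like any replication, it will faithfully copy corruption that has already made it into a snapshot.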

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Julian Regel
If you want a backup that allows you to access individual files, you need a file-based backup, and I am sure that even a filesystem-level scan for recently changed files will not be much faster than what you may achieve with e.g. star. Note that ufsdump directly accesses the raw disk device and

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Joerg Schilling
Julian Regel jrmailgate-zfsdisc...@yahoo.co.uk wrote: If you want a backup that allows you to access individual files, you need a file-based backup, and I am sure that even a filesystem-level scan for recently changed files will not be much faster than what you may achieve with e.g. star.

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Julian Regel
While I am sure that star is technically a fine utility, the problem is that it is effectively an unsupported product. From this viewpoint, you may call most of Solaris unsupported. From the perspective of the business, the contract with Sun provides that support. If our customers find a
