On Mon, Jun 28, 2010 at 11:26 AM, Tristram Scott
<tristram.sc...@quantmodels.co.uk> wrote:
> For quite some time I have been using zfs send -R fsname@snapname | dd 
> of=/dev/rmt/1ln to make a tape backup of my zfs file system.  A few weeks 
> back the size of the file system grew to larger than would fit on a single 
> DAT72 tape, and I once again searched for a simple solution to allow dumping 
> of a zfs file system to multiple tapes.  Once again I was disappointed...
>
> I expect there are plenty of other ways this could have been handled, but 
> none leapt out at me.  I didn't want to pay large sums of cash for a 
> commercial backup product, and I didn't see that Amanda would be an easy 
> thing to fit into my existing scripts.  In particular (and I could well be 
> reading this incorrectly), it seems that the commercial products, Amanda and 
> star, all dump the zfs file system file by file (with or without 
> ACLs).  I found none which would allow me to dump the file system and its 
> snapshots, unless I used zfs send to a scratch disk, and dumped to tape from 
> there.  But, of course, that assumes I have a scratch disk large enough.
>
> So, I have implemented zfsdump as a ksh script.  The method is as follows:
> 1. Make a bunch of fifos.
> 2. Pipe the stream from zfs send to split, with split writing to the fifos 
> (in sequence).

It would be nice if I could pipe the zfs send stream to split and then
send those split streams over the network to a remote system; that would
help get the data to the remote system quicker. Can your tool do that?

Something like this:

                     s | -----> | j
    zfs send         p | -----> | o     zfs recv
     (local)         l | -----> | i     (remote)
                     i | -----> | n
                     t | -----> |
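
For illustration, here is a rough sketch of that idea, reusing the same
split-to-fifos trick with nc carrying each chunk. The host name, ports,
chunk size and target filesystem are placeholders, and nc option syntax
varies between implementations, so treat it as a starting point rather
than a recipe:

    # receiving host: one listener per chunk writes into a fifo,
    # and cat joins the fifos in order into zfs recv
    mkfifo xaa xab xac
    nc -l 9001 > xaa &
    nc -l 9002 > xab &
    nc -l 9003 > xac &
    cat xaa xab xac | zfs recv tank/copy

    # sending host: split writes each chunk into a fifo, and one nc
    # per fifo pushes it to the receiver
    mkfifo xaa xab xac
    nc remotehost 9001 < xaa &
    nc remotehost 9002 < xab &
    nc remotehost 9003 < xac &
    zfs send -R tank@Tues | split -b 1024m - x

    # note: make as many fifos as there will be chunks, or split starts
    # writing ordinary files; and since split fills the chunks one after
    # another, this overlaps transfer with the receive rather than giving
    # truly parallel streams.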


> 3. Use dd to copy from the fifos to tape(s).
>
> When the first tape is complete, zfsdump returns.  One then calls it again, 
> specifying that the second tape is to be used, and so on.
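
(For anyone who wants to see the underlying mechanism, a hand-rolled sketch
of the fifo/split/dd idea follows. The chunk size, tape device and snapshot
name are placeholders, and it is worth checking that your split accepts a
block size that large; the zfsdump script itself takes care of the
bookkeeping.)

    mkfifo xaa xab                                 # one fifo per tape-sized chunk
    zfs send -R tank@Tues | split -b 36864m - x &  # fills xaa, then xab, in sequence
    dd if=xaa of=/dev/rmt/1ln bs=1024k             # tape 1
    # change tapes, then:
    dd if=xab of=/dev/rmt/1ln bs=1024k             # tape 2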
>
> From the man page:
>
>     Example 1.  Dump the @Tues snapshot of the  tank  filesystem
>     to  the  non-rewinding,  non-compressing  tape,  with a 36GB
>     capacity:
>
>          zfsdump -z tank@Tues -a "-R" -f /dev/rmt/1ln  -s  36864 -t 0
>
>     For the second tape:
>
>          zfsdump -z tank@Tues -a "-R" -f /dev/rmt/1ln  -s  36864 -t 1
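
(The restore side is not shown above. If the tapes just hold consecutive
pieces of a single zfs send stream, something along these lines should
reassemble them without needing a scratch disk; the target filesystem name
is a placeholder:)

    ( dd if=/dev/rmt/1ln bs=1024k            # tape 1
      echo "load tape 2, then press return" >&2
      read junk < /dev/tty
      dd if=/dev/rmt/1ln bs=1024k            # tape 2
    ) | zfs recv tank/restore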
>
> If you would like to try it out, download the package from:
> http://www.quantmodels.co.uk/zfsdump/
>
> I have packaged it up, so do the usual pkgadd stuff to install.
>
> Please, though, [b]try this out with caution[/b].  Build a few test file 
> systems, and see that it works for you.
> [b]It comes without warranty of any kind.[/b]
>
>
> Tristram



-- 
Asif Iqbal
PGP Key: 0xE62693C5 KeyServer: pgp.mit.edu
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
