>>>>> "da" == David Abrahams <[EMAIL PROTECTED]> writes:

    da> how to deal with backups to my Amazon s3 storage area.  Does
    da> zfs send avoid duplicating common data in clones and
    da> snapshots?

how can you afford to use something so expensive as S3 for backups?
Anyway 'zfs send' does avoid duplication but you must never store a
'zfs send' stream.  They're not robust like 'tar' and 'cpio' streams.
A bit flip will ruin the entire stream, both before and after the bit
flip, while tar/cpio will just search for the next file header and
lose very little.  Also, correctly restoring them depends on a whole
mess of kernel code which is not well-checked for inter-version
compatibility.  And there is no standalone stream-testing tool.  

For zpools it seems well-tested that later ZFS code can import earlier
zpools, and also x86/SPARC zpools/kernels work together, but neither
has been consistently true of the 'zfs send' format.  You can only use
'zfs send' inside a pipe, where you can try again or give up if it
doesn't work.

I asked for access to the si wiki so I could write a clearer 'zfs
send' warning than the rather mild one that's up there now, but I got
no response from [EMAIL PROTECTED]

It sounds silly, but you'd actually be much better off making a
compressed zpool on top of an 'mkfile' vdev, filling it with data,
exporting it, and sending that file to s3.  I don't know of any proper
stream storage format which captures the snapshot/clone tree and also
has the relevant characteristics of tarballs: robust to endianness,
kernel versions, and bit flips, and validatable without restoring.


_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss