> From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Gregory J. Benscoter
>
> After looking through the archives I haven't been able to assess the
> reliability of a backup procedure which employs zfs send and recv.
If there's data corruption in the "zfs send" datastream, then the whole datastream is lost. If you're piping your "zfs send" straight into "zfs receive," there is no problem. It's fine to do this via ssh, mbuffer, etc., provided you're not storing the "zfs send" datastream and expecting to receive it later. If you receive it immediately and there is any data corruption, the zfs receive will fail and you'll know immediately that something was wrong. Because you're not storing the data stream for later, you'll never have bad data sitting around undetected, giving you a false sense of security.

There are two reasons why people say "zfs send is not a backup solution." The issue above is one of them. The other is: you cannot restore a subset of the filesystem. You can only restore the *entire* filesystem.

> Currently I'm attempting to create a script that will allow me to
> write a zfs stream to a tape via tar like below.

Despite what I've said above, there are people who do it anyway. The logic goes something like: "I do a full backup every week. I am 99% certain I'll never need it, and if I do, I am 99% certain the latest tape will be good. And if I'm wrong, then I'm 99% certain the one-week-older tape will be good..." Couple this with "This is not the only form of backup I'm doing..." In other words, some people are willing to take the calculated risk that a tape will corrupt the data.

> # zfs send -R p...@something | tar c > /dev/tape

Hmmm... In the above, your data must all fit on a single tape. In fact, why use tar at all? Just skip tar and write the stream to tape directly. My experience is that performance this way is terrible. Perhaps mbuffer would solve that? I never tried.

If your whole data stream will fit on a single tape, consider backing up to an external hard drive instead (or in addition). The cool things about having the backup on a hard drive:

(a) No restore time necessary; just mount it and use it.
(b) Yes, you can extract a subset of the filesystem.
(c) You've already done the "zfs receive," so you're already sure the data is good. You can see the filesystem, so you *really* know the data is good.
(d) If you run out of space on the disk, you can just add more devices to the external pool. ;-) But you've got to keep the group together.

The bad things about backing up to a hard drive: if it's an external drive, it's easy to accidentally knock out the power, which makes the filesystem disappear and will likely hang the system. So if you're using an external disk, you want to attach it to a non-critical system and pipe the data over to it via ssh or mbuffer or something. Also, hard drives don't have the shelf life, nor the physical-impact survival rate, that tapes have. And if you're going to write once and archive permanently, the cost per GB might be a factor too.
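For what it's worth, a minimal sketch of that "receive onto an external pool on a non-critical box" approach might look like the following. The pool, host, device, and snapshot names (tank, backuppool, backuphost, c5t0d0, the dates) are all made up; substitute your own.

On the backup box, once, create a pool on the external disk(s):

    # zpool create backuppool c5t0d0

Then, on the source box, snapshot and pipe the stream straight into a receive, so any corruption is caught immediately:

    # zfs snapshot -r tank@2010-02-01
    # zfs send -R tank@2010-02-01 | ssh backuphost zfs receive -d backuppool

At that point the backup is a live, browsable filesystem under backuppool; there's nothing to "restore" before you can pull individual files out of it.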
> I'm primarily concerned with the possibility of a bit flip. If this
> occurs will the stream be lost? Or will the file that the bit flip
> occurred in be the only degraded file? Lastly, how does the
> reliability of this plan compare to more traditional backup tools
> like tar, cpio, etc.?

If the stream is stored on tape and a bit flips, the whole stream is lost, as described above. The advantage of "zfs send" is that you can do incrementals, which require zero time to calculate; you only need enough time to transfer the bytes that have actually changed. For example, I have a filesystem which takes 20 hrs to fully write to external media. It takes 6 hours just to walk the tree (rsync, tar, find, etc.) scanning for files that have changed and consequently should be copied for a tar-style or cpio-style incremental backup. When I use "zfs send," the total incremental process takes only about 7 minutes on average, though of course it varies linearly with how much data has changed.

The advantage of tar, cpio, etc., is that they can write to tape without people telling you not to, as I have done above regarding "zfs send" to tape.
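And for completeness, here's roughly what that incremental cycle might look like, continuing with the same made-up names (tank, backuppool, backuphost):

    # zfs snapshot -r tank@2010-02-08
    # zfs send -R -i tank@2010-02-01 tank@2010-02-08 | ssh backuphost zfs receive -Fd backuppool

Only the blocks that changed between the two snapshots go over the wire; ZFS already knows what changed, so there's no tree walk. The -F on the receive simply rolls the backup filesystems back to the previous snapshot first, in case anything touched them since the last receive.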