I'm trying to use serialized BTRFS snapshots as a backup system. The problem is 
that I don't know how to avoid sending duplicate data and also have the ability 
to prune old backups.

Specifically, I've considered the following:

#snapshot
btrfs subvolume snapshot -r live-volume volume-date

#serialize the snapshot for transmission to the remote machine
btrfs send -p volume-yesterday -f backup.date volume-date
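For completeness, the matching receive step on the backup machine looks roughly like this (the /mnt/backup pool and the date-stamped names are just my conventions, and the btrfs call needs root, so it's guarded on the stream file existing):

```shell
#!/bin/sh
# Date-stamped names matching the send side (my assumed convention).
TODAY=$(date +%F)          # e.g. 2014-05-20
STREAM="backup.$TODAY"     # the file written by 'btrfs send -f'

# Unpack the stream into the backup pool (hypothetical mount point).
# An incremental stream only applies if its parent snapshot already
# exists in the pool from an earlier receive.
if [ -f "$STREAM" ]; then
    btrfs receive -f "$STREAM" /mnt/backup
fi
```

The received snapshot then shows up as /mnt/backup/volume-date, read-only, and serves as the parent when receiving the next day's incremental.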

However, this means I have to keep every serialized snapshot forever. I've 
tried unpacking these incremental snapshots, deleting the intermediate 
volumes, and repacking the latest version. Unfortunately, deleting an 
intermediate snapshot appears to change the IDs of later snapshots, so 
subsequent serialized snapshots can't be unpacked. For example: I have 
incremental snapshots 1-4; I unpack 1-3, delete 2, and then can't unpack 4, 
which fails with: "ERROR: could not find parent subvolume".
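Concretely, the sequence that fails for me looks like this (the pool path and snapshot names are hypothetical, and everything needs root on a btrfs filesystem, so the sketch is guarded to be a no-op elsewhere):

```shell
#!/bin/sh
# Hypothetical backup pool; requires root and btrfs-progs.
POOL=/mnt/backup
if [ -d "$POOL/volume-2" ]; then
    btrfs receive -f backup.1 "$POOL"   # full stream -> volume-1
    btrfs receive -f backup.2 "$POOL"   # incremental, parent volume-1
    btrfs receive -f backup.3 "$POOL"   # incremental, parent volume-2
    btrfs subvolume delete "$POOL/volume-2"

    # Stream 4 was generated with '-p volume-3'. volume-3's data is all
    # still present, but after deleting the intermediate snapshot this
    # step fails for me with: ERROR: could not find parent subvolume
    btrfs receive -f backup.4 "$POOL"
fi
```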

I've also considered keeping a chain of incremental monthly backups and 
basing the daily backups on both the monthly and the previous daily. This 
would allow me to delete daily backups later, but it means sending twice as 
much data to the backup machine.
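If it helps to see it concretely, the scheme I have in mind would look roughly like this (the monthly/daily naming is a hypothetical convention of mine, and the send calls are guarded since they need root):

```shell
#!/bin/sh
# Hypothetical naming convention: monthly anchors plus dated dailies.
MONTH=$(date +%Y-%m)    # e.g. 2014-05
TODAY=$(date +%F)       # e.g. 2014-05-20

if [ -d "volume-monthly-$MONTH" ]; then
    # Monthly chain: each anchor is incremental against last month's.
    btrfs send -p volume-monthly-last -f "monthly.$MONTH" "volume-monthly-$MONTH"

    # Daily based on the monthly anchor rather than yesterday's daily,
    # so any daily can be deleted later without breaking the chain; the
    # cost is re-sending everything changed since the monthly snapshot.
    btrfs send -p "volume-monthly-$MONTH" -f "daily.$TODAY" "volume-$TODAY"
fi
```

btrfs send also accepts -c <clone-src> arguments alongside -p; whether listing the previous daily as a clone source would avoid the duplicate transfer is part of what I'm unsure about.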

What bothers me is that subvolume 3 (from the example above) contains exactly 
the same data before and after I delete subvolume 2. Only the subvolume's IDs 
change, yet that alone prevents me from unpacking incremental 4, and that's 
the root of my problem.
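For what it's worth, the identifiers involved can be inspected directly, which might help pin down what actually changes when a subvolume is deleted (the path is hypothetical, and the commands need root, so they're guarded; as far as I can tell, receive matches an incremental stream to its parent by UUID rather than by path):

```shell
#!/bin/sh
# Hypothetical received-snapshot path on the backup machine.
SNAP=/mnt/backup/volume-3
if [ -d "$SNAP" ]; then
    # Per-subvolume identifiers, including (on recent btrfs-progs) the
    # Received UUID recorded by 'btrfs receive'.
    btrfs subvolume show "$SNAP"

    # -u prints each subvolume's UUID, -q its parent UUID.
    btrfs subvolume list -u -q /mnt/backup
fi
```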

Are there any better ideas I haven't thought of?

Currently running BTRFS 3.12 on kernel 3.13 (Ubuntu 14.04).

Thanks,
David Player

--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
