Nicolas Williams wrote:
> On Wed, Jan 07, 2009 at 01:38:33PM -0500, Tom Georgoulias wrote:
>> Absolutely.  I've been hacking on a zfs replication shell script that I 
>> found on a blog and trying to make it work for my setup.  What you've 
>> said has just reassured me that the headaches I was running into aren't 
>> just a result of my minimal zfs knowledge, as I was struggling with a 
>> way to "rotate" the snapshots on the source server (making a new one, 
>> getting rid of the oldest, and renaming those in the middle to reflect 
>> their new status) and keeping the remote server sync'd up with the 
>> source.  So far I haven't been successful at that.
> 
> Right, incremental zfs send streams do not include information about
> snapshot deletions done at times between the two snapshots in question.
> 
> What you can do is script around zfs list output to detect which
> snapshots have been deleted (or even renamed, using the creation time --
> it'd be nice if every dataset and snapshot had a UUID, no? user-defined
> properties might help with that).  Renames are going to be the hardest
> to deal with.

Renames *were* hard--I never got that working, so I gave up.  I was 
encouraged to see your suggestion of user-defined properties; I had 
been trying to use them to get around the renaming problem and wasn't 
sure whether that was too much of a brute-force approach.

The current approach I'm working on is to use a date timestamp for the 
name of each snapshot and a local ZFS user-defined property to label 
the snapshot (i.e. daily.1, daily.2, etc).  I look for the oldest 
label, daily.5, delete that snapshot, then go through the rest of the 
list and relabel them accordingly: daily.4 becomes daily.5, and so on. 
Then I create a new snapshot, label it daily.0, and try to send an 
incremental stream between the snapshots labeled daily.1 and daily.0. 
I have most of the script in place except for the incremental send.
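The rotation described above can be sketched roughly as follows.  This 
is a hedged sketch, not a drop-in script: the dataset name (tank/data), 
the user property name (local:label), and the snap_by_label helper are 
all assumptions I've made for illustration.  It runs with DRYRUN=1 by 
default, printing the zfs commands and faking the label lookups so the 
flow can be followed without touching a pool.

```shell
#!/bin/sh
# Sketch of the daily.N label-rotation scheme (assumed names throughout).
FS=${FS:-tank/data}     # dataset to rotate (assumption)
KEEP=5                  # oldest label to retire
DRYRUN=${DRYRUN:-1}     # 1 = print commands instead of running them

run() {                 # run (or just print) a zfs command
    if [ "$DRYRUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

snap_by_label() {       # map a label like daily.3 to a snapshot name
    if [ "$DRYRUN" = 1 ]; then
        echo "$FS@fake-$1"   # placeholder so the dry run is traceable
    else
        zfs get -H -t snapshot -r -o name,value local:label "$FS" \
            | awk -v l="$1" '$2 == l { print $1 }'
    fi
}

# 1. Drop the snapshot carrying the oldest label, daily.$KEEP.
old=$(snap_by_label "daily.$KEEP")
[ -n "$old" ] && run zfs destroy "$old"

# 2. Shift the remaining labels up: daily.4 -> daily.5, ..., daily.0 -> daily.1.
i=$((KEEP - 1))
while [ "$i" -ge 0 ]; do
    s=$(snap_by_label "daily.$i")
    [ -n "$s" ] && run zfs set "local:label=daily.$((i + 1))" "$s"
    i=$((i - 1))
done

# 3. Take a new timestamped snapshot and label it daily.0.
now=$(date +%Y%m%d-%H%M%S)
run zfs snapshot "$FS@$now"
run zfs set local:label=daily.0 "$FS@$now"
```

Because only the labels move and the snapshot names stay fixed 
timestamps, the remote side never sees a rename, which is the point of 
using a user property here.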

I'm not sure that is the best approach.  I think I would prefer 
something that keeps essentially the same snapshots on both servers (or 
at least has the latest snapshot on both) and keeps the latest snapshot 
on the remote server mounted and ready to go.  Seems like the 
zfs-auto-snapshot tool worked more like this.  I got the latest 
zfs-auto-snapshot code from the hg repo today and began messing around 
with it to see what I need to do (outside of creating trust between the 
zfssnap users on both servers).
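For the "latest snapshot mounted and ready to go" part, the usual shape 
is an incremental send piped over ssh into zfs recv -F, which rolls the 
remote dataset back to the received snapshot so it stays mounted at the 
latest state.  A hedged sketch with placeholder snapshot, host, and 
dataset names (none of these are from the thread); it only echoes the 
pipeline rather than running it:

```shell
#!/bin/sh
# Placeholder names: adjust to the real pool, snapshots, and peer host.
prev="tank/data@20090106"    # previous snapshot common to both sides
new="tank/data@20090107"     # snapshot just created locally
remote="backuphost"          # host running zfs recv (assumption)

# -i sends only the delta between $prev and $new; recv -F forces the
# remote dataset to roll back to the received snapshot, discarding any
# stray local changes there, so it is always mounted at the latest.
cmd="zfs send -i $prev $new | ssh $remote zfs recv -F tank/backup/data"
echo "$cmd"                  # dry run; eval "$cmd" to actually send
```

Note that recv -F throws away anything written on the remote copy since 
the last receive, which is normally what you want for a standby replica 
but worth knowing before relying on it.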

Tom
_______________________________________________
indiana-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/indiana-discuss