OK, but what I'm trying to do is keep the last n revisions - NOT the last
n weeks.....

So what if I have a file that changes once every 6 weeks?  I want to
keep 4 revisions, so that means I have to go back 6 months.

But now the file next to it gets updated daily....

You see my problem?

I want to keep a specific depth of old files, not a time increment.  I
have jobs that remain dormant for years, then re-activate and get a
flurry of activity, then go dormant again.

The problem is that if something happens as a job is going dormant, I
may not realize I've lost that particular file until months later.  I
lost an entire client directory (6 projects and hundreds of files)
exactly this way.  So I want to keep the last n revisions, no matter
how old they are.
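
Just to make it concrete, here's roughly the pruning step I have in mind - a
rough, untested sketch, and the naming scheme (timestamped copies like
report.doc.20010322) is made up purely for illustration:

    # keep the newest $KEEP revisions of one file, no matter how old they are
    KEEP=4
    f=report.doc
    ls -1 "$f".* 2>/dev/null | sort -r | tail -n +$(( KEEP + 1 )) | xargs rm -f

The cutoff is a count of revisions per file, not a date.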

--Yan

[EMAIL PROTECTED] wrote:
> 
> There isn't one?
> rsync has the --backup-dir= option.
> Keep each set of backups in a different directory, then merge them back into the
> main hierarchy if needed.  Since they're already sifted out, it'd be easy to archive
> them, as well.
> If it's a daily, --backup-dir=$(( $(date +%j) % 28 )) will keep 4 weeks' worth, then
> go back over the old ones.
> Of course, you'd probably want to get that integer separately, and use it to delete
> the one you're about to write into, to keep it clean.
> 
> The point is, somebody already anticipated your need, and made it easy to script it.
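> 
> For instance, a daily cron job along these lines (untested sketch - the paths
> are made up, and %j is forced to base 10 since it's zero-padded):
> 
>     #!/bin/bash
>     # reuse one of 28 backup-dir slots, cycling by day of year
>     SLOT=$(( 10#$(date +%j) % 28 ))
>     BDIR=/backup/incr/$SLOT
>     rm -rf "$BDIR" && mkdir -p "$BDIR"   # clean the slot we're about to overwrite
>     rsync -a --delete --backup --backup-dir="$BDIR" /data/ /mirror/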
> 
> Tim Conway
> [EMAIL PROTECTED]
> 303.682.4917
> Philips Semiconductor - Colorado TC
> 1880 Industrial Circle
> Suite D
> Longmont, CO 80501
> 
> [EMAIL PROTECTED]@[EMAIL PROTECTED] on 03/22/2001 03:17:35 AM
> Sent by:        [EMAIL PROTECTED]
> To:     [EMAIL PROTECTED]@SMTP
> cc:
> Subject:        Re: unlimited backup revisions?
> Classification:
> 
> I've been trying to come up with a scripting solution for this for some time, and
> I'm convinced there isn't one.
> 
> You definitely want to handle the revisions in the same way as logrotate: keep a
> certain depth, delete the oldest, and renumber all the older ones.
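> 
> Per file, that rotation is basically this (rough shell sketch, untested; the
> .1 ... .N suffix scheme is just for illustration):
> 
>     # shift numbered backups of one file, keeping at most $DEPTH copies
>     DEPTH=4
>     f=somefile
>     rm -f "$f.$DEPTH"                        # drop the oldest
>     i=$DEPTH
>     while [ $i -gt 1 ]; do
>         j=$(( i - 1 ))
>         [ -f "$f.$j" ] && mv "$f.$j" "$f.$i" # .3 -> .4, .2 -> .3, .1 -> .2
>         i=$j
>     done
>     cp -p "$f" "$f.1"                        # newest revision becomes .1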
> 
> If you want to get real ambitious, you could even include an option for compressing
> the backups to save space.
> 
> My biggest concern is a disk failure on the primary during sync (unlikely, but I
> had my main RAID fail during sync, and it wiped out both my mirrors).  A managed
> backup strategy is the only thing that saved my bacon.....
> 
> Some tools for managing the backups (listing them - "what's the revision history on
> this file?" type of query - doing a mass recover, etc.) would be useful.
> 
> Just some random thoughts,
> 
> --Yan
> 
> "Sean J. Schluntz" wrote:
> 
> > >> That's what I figured.  Well, I need it for a project so I guess you all
> > >> won't mind if I code it and submit a patch ;)
> > >>
> > >> How does --revisions=XXX sound?  --revisions=0 would be unlimited, any other
> > >> number would be the limiter for the number of revisions.
> > >
> > >And when it reaches that number, do you want it to delete old
> > >revisions, or stop making new revisions?
> >
> > You would delete the old one as you continue rolling down.
> >
> > >Perhaps something like --backup=numeric would be a better name.  In
> > >the long term it might be better to handle this with scripting.
> >
> > I don't see a good scripting solution to this.  The closest I could come up
> > with was using --backup-dir and then remerging the tree after the copy,
> > and that is a real kludge.  The scripting solution I see would be to clean
> > up if you had the backup copies set to unlimited, so you don't run out of
> > disk space.
> >
> > -Sean

