>>>>> "yan" == yan seiner <[EMAIL PROTECTED]> writes:

    yan> I've been trying to come up with a scripting solution for
    yan> this for some time, and I'm convinced there isn't one.

    yan> You definitely want to handle the revisions in the same way
    yan> as logrotate: keep a certain depth, delete the oldest, and
    yan> renumber all the older ones.

Another option that I've implemented is based on amount of free disk
space rather than number of incremental backups.  I keep all of the
(date/time based) incrementals on their own filesystem.  Before
starting a new backup I check whether the disk usage on the filesystem
is above a certain threshold and, if it is, I delete the oldest
incremental.  Repeat until disk usage on the incremental filesystem is
below the threshold and then do the new backup.

In this way I don't have to guess the number of incremental backups
that I can afford to keep...  it is based purely on free disk space.
Naturally, if there's an unusual amount of activity on a particular
day then this system can also be screwed over...  :-)
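The scheme above can be sketched in a few lines of Python.  Everything
here is an assumption for illustration: the backup directory path, the
threshold value, and the convention that incremental directories are
date-stamped so that lexical order is chronological.

```python
import os
import shutil


def disk_usage_percent(path):
    """Percent of the filesystem holding `path` that is in use."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total


def prune_oldest(backup_dir, threshold):
    """Delete the oldest incrementals until usage drops below `threshold`.

    Assumes each incremental is a subdirectory of `backup_dir` named so
    that sorting lexically sorts chronologically (e.g. 2024-01-31T0400).
    """
    while disk_usage_percent(backup_dir) >= threshold:
        incrementals = sorted(os.listdir(backup_dir))
        if not incrementals:
            break  # nothing left to delete
        # The lexically smallest name is the oldest incremental.
        shutil.rmtree(os.path.join(backup_dir, incrementals[0]))
```

Run `prune_oldest()` just before kicking off the new backup; if the
filesystem is already under the threshold it does nothing.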

Someone else noted that it is more useful to keep a certain number of
revisions of files, rather than a certain number of days worth of
backups.  It would be relatively easy to implement this sort of scheme
on top of date/time-based incrementals.  Use "find" on each
incremental directory (starting at the oldest) and either keep a map
(using TDB?) of filenames plus information about where the various
copies live, or use locate to count how many copies there are of each
file...  or a combination of the two: the map would record how many
copies exist, but not where they are; when a file is over the
threshold, use locate to find and remove its oldest copies...
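A minimal sketch of the revision-count idea, with one simplification:
instead of the oldest-first map plus a locate pass described above, it
walks the incrementals newest-first in a single pass, so any copy seen
beyond the limit is necessarily one of the oldest and can be removed on
the spot.  The directory naming (lexical order == chronological) and
all names here are assumptions, not from the original post.

```python
import os


def prune_by_revision_count(backup_root, max_revisions):
    """Keep at most `max_revisions` copies of each file across incrementals.

    `backup_root` holds one date-stamped subdirectory per incremental.
    Copies are counted per relative path, newest incremental first, and
    any copy past the limit (i.e. one of the oldest) is deleted.
    """
    copies = {}  # relative path -> number of copies seen so far
    for incremental in sorted(os.listdir(backup_root), reverse=True):
        inc_dir = os.path.join(backup_root, incremental)
        for dirpath, _dirs, files in os.walk(inc_dir):
            for name in files:
                full = os.path.join(dirpath, name)
                rel = os.path.relpath(full, inc_dir)
                copies[rel] = copies.get(rel, 0) + 1
                if copies[rel] > max_revisions:
                    os.remove(full)  # an older copy beyond the limit
```

The in-memory dict plays the role of the TDB map; a persistent store
only becomes interesting when the file count outgrows RAM.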

It isn't cheap, but what else does your system have to do on a Sunday
morning?  :-)

I might implement something like that...

peace & happiness,
martin
