Re: unlimited backup revisions?

2001-03-22 Thread Yan Seiner

OK, but what I'm trying to do is keep the last n revisions - NOT the last
n weeks.

So what if I have a file that changes once every 6 weeks?  I want to
keep 4 revisions, so that means I have to go back 6 months.

But now the file next to it gets updated daily.

You see my problem?

I want to keep a specific depth of old files, not a time increment.  I
have jobs that remain dormant for years, then re-activate and get a
flurry of activity, then go dormant again.

The problem is that if something happens as a job is going dormant, I
may not realize I've lost that particular file until months later.  I
lost an entire client directory (6 projects and hundreds of files)
exactly this way.  So I want to keep the last n revisions, no matter
how old they are.
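A sketch of that per-file rotation, done logrotate-style in plain shell (the function name, depth, and calling convention are illustrative — rsync itself provides none of this):

```shell
#!/bin/sh
# rotate_revisions FILE DEPTH -- keep FILE.1 (newest) .. FILE.DEPTH
# (oldest), logrotate-style: drop the oldest, shift the rest up one,
# then save the current copy as FILE.1.
rotate_revisions() {
    file=$1
    depth=$2

    rm -f "$file.$depth"               # oldest revision falls off
    i=$depth
    while [ "$i" -gt 1 ]; do
        prev=$((i - 1))
        [ -e "$file.$prev" ] && mv "$file.$prev" "$file.$i"
        i=$prev
    done
    [ -e "$file" ] && cp -p "$file" "$file.1"
}
```

Called as `rotate_revisions report.doc 4` before each overwrite, it keeps the last four copies of that one file, however old they are.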

--Yan

[EMAIL PROTECTED] wrote:
 
 There isn't one?
 rsync has the --backup-dir= option.
 Keep each set of backups in a different directory, then merge them back into the 
main hierarchy if needed.  Since they're already sifted out, it'd be easy to archive 
them as well.
 if it's a daily, --backup-dir=$(( $(date +%j) % 28 )) will keep 4 weeks worth, then 
go back over the old ones.
 Of course, you'd probably want to get that integer separately, and use it to delete 
the one you're about to write into, to keep it clean.
 
 The point is, somebody already anticipated your need, and made it easy to script it.
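The day-of-year trick above can be sketched as below; one wrinkle is that `date +%j` zero-pads (e.g. 089), which shell arithmetic would misread as octal, so the padding is stripped first. The paths and the `run_backup` wrapper are illustrative:

```shell
#!/bin/sh
# Map today's day-of-year onto one of 28 backup slots, as in the
# rotating --backup-dir scheme above.  date +%j zero-pads ("089"),
# and $(( 089 )) would be parsed as broken octal, so strip zeros.
backup_slot() {
    doy=${1:-$(date +%j)}              # allow a test value as $1
    doy=${doy#"${doy%%[!0]*}"}         # drop leading zeros
    echo $(( doy % 28 ))
}

# Illustrative nightly invocation (paths are examples): clear the
# slot being reused, then let rsync divert replaced/deleted files
# into it.
run_backup() {
    slot=$(backup_slot)
    rm -rf "/backup/$slot" && mkdir -p "/backup/$slot"
    rsync -a --delete --backup --backup-dir="/backup/$slot" /source/ /mirror/
}
```

With daily runs this cycles through slots 0-27, overwriting the four-week-old backup each night, which matches the behaviour described above.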
 
 Tim Conway
 [EMAIL PROTECTED]
 303.682.4917
 Philips Semiconductor - Colorado TC
 1880 Industrial Circle
 Suite D
 Longmont, CO 80501
 
 [EMAIL PROTECTED]@[EMAIL PROTECTED] on 03/22/2001 03:17:35 AM
 Sent by:[EMAIL PROTECTED]
 To: [EMAIL PROTECTED]@SMTP
 cc:
 Subject:Re: unlimited backup revisions?
 Classification:
 
 I've been trying to come up with a scripting solution for this for some time, and
 I'm convinced there isn't one.
 
 You definitely want to handle the revisions in the same way as logrotate: keep a
 certain depth, delete the oldest, and renumber all the older ones.
 
 If you want to get really ambitious, you could even include an option for compressing
 the backups to save space.
 
 My biggest concern is a disk failure on the primary during sync (unlikely, but I
 had my main raid fail during sync, and it wiped out both my mirrors.)  A managed
 backup strategy is the only thing that saved my bacon.
 
 Some tools for managing the backups (listing them - a "what's the revision history on
 this file" type of query, doing a mass recover, etc.) would be useful.
 
 Just some random thoughts,
 
 --Yan
 
 "Sean J. Schluntz" wrote:
 
   That's what I figured.  Well, I need it for a project so I guess you all
   won't mind if I code it and submit a patch ;)
  
   How does --revisions=XXX sound?  --revisions=0 would be unlimited, any other
   number would be the limiter for the number of revisions.
  
  And when it reaches that number, do you want it to delete old
  revisions, or stop making new revisions?
 
  You would delete the old one as you continue rolling down.
 
  Perhaps something like --backup=numeric would be a better name.  In
  the long term it might be better to handle this with scripting.
 
  I don't see a good scripting solution to this. The closest I could come up
  with was using the --backup-dir and then remerging the tree after the copy
  and that is a real kludge.  The scripting solution I see would be to clean
  up if you had the backup copies set to unlimited so you don't run out of
  disk space.
 
  -Sean





Re: unlimited backup revisions?

2001-03-22 Thread Mike Lang

Sounds like you want rsync to be CVS, or maybe CVS to have an rsync module.

How about rsync then run a script to do a cvs commit?

 --Mike
At 10:33 AM 3/22/01 -0500, Yan Seiner wrote:
OK, but what I'm trying to do is keep the last n revisions - NOT the last
n weeks.

So what if I have a file that changes once every 6 weeks?  I want to
keep 4 revisions, so that means I have to go back 6 months.

___
Michael Lang                [EMAIL PROTECTED]
Los Alamos National Laboratory
ph:505-665-5756, fax:665-5638
MS B256, Los Alamos, NM 87545






Re: unlimited backup revisions?

2001-03-22 Thread Sean J. Schluntz


In message [EMAIL PROTECTED], Mike Lang writes:
Sounds like you want rsync to be CVS, or maybe CVS to have an rsync module.

How about rsync then run a script to do a cvs commit?


How well does CVS deal with binary files?  What would the change log look
like for a JPG that has been updated, or a binary database?  I've seen
the past discussions on this and I think the revisions option has a place for
some people.

Thanks to those who have given feedback on the flag.  Since it's something I
need, I'm going to code it anyway and give out the diff; those who need it
can use it :)  The joy of open source :)

-Sean




Re: unlimited backup revisions?

2001-03-22 Thread Sean J. Schluntz


Maybe the solution is a --backup-script= option, that would call an
external script and hand it the filename. That way, each user could
customize it to their heart's content.

Now, I thought about that, but it seems like a way to really pound your system.
Like doing a find -exec, if there are a lot of files then that script is
going to get called a lot.

-Sean




RE: unlimited backup revisions?

2001-03-22 Thread Willis, Ian (Ento, Canberra)

What I would like is to create a backup server that maintains a copy of all
the files on a server, and also maintains a copy of 20 or so deltas so that
the last twenty or so revisions can be rebuilt.  Is this possible, or what
would be the best approach to create such a beast?
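One plausible sketch, assuming only rsync's existing --backup-dir option and 20 rotating generation directories on the backup server (the directory names, paths, and `nightly` wrapper are illustrative):

```shell
#!/bin/sh
# Keep the last N backup generations as numbered directories,
# dropping the oldest each run (a sketch; paths are illustrative).
rotate_generations() {
    root=$1     # holds gen.1 (newest) .. gen.N (oldest)
    n=$2
    rm -rf "$root/gen.$n"              # drop the oldest generation
    i=$n
    while [ "$i" -gt 1 ]; do
        prev=$((i - 1))
        [ -d "$root/gen.$prev" ] && mv "$root/gen.$prev" "$root/gen.$i"
        i=$prev
    done
    mkdir -p "$root/gen.1"             # fresh slot for tonight's run
}

# Nightly: files that change or vanish on the master land in gen.1,
# so any file's last 20 revisions can be dug out of gen.1..gen.20.
nightly() {
    rotate_generations /backup 20
    rsync -a --delete --backup --backup-dir=/backup/gen.1 server:/data/ /mirror/
}
```

Note this keeps whole old files rather than the true deltas asked about; rebuilding from space-efficient deltas would need extra tooling on top.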

--
Ian Willis
Systems Administrator
Division of Entomology CSIRO
GPO Box 1700 
Canberra ACT 2601
ph  02 6246 4391
fax 02 6246 4000


-Original Message-
From: Martin Pool [mailto:[EMAIL PROTECTED]]
Sent: Thursday, 22 March 2001 1:17 PM
To: Sean J. Schluntz
Cc: [EMAIL PROTECTED]
Subject: Re: unlimited backup revisions?


On 21 Mar 2001, "Sean J. Schluntz" [EMAIL PROTECTED] wrote:
 
 Sorry if this is a repeat, but there is no search option (that I can find)
 on the list server.
 
 Is there a way to make rsync keep unlimited backup copies?  Whenever a file
 is changed, for it to (without any of the file merge stuff going on) just push
 the earlier backups down a counting number and drop the new one in place.  That
 way we would have revision histories on all of the files stored on the
 server.

There is no such option at the moment.  People often use

 --backup-dir=/backup/`date +%j`

to put all files on that day in a separate directory, but that's not
quite what you need.  Why not put it in the FAQ wishlist.

-- 
Martin Pool, Human Resource
Linuxcare. Inc.   +61 2 6262 8990
[EMAIL PROTECTED], http://linuxcare.com.au/
Linuxcare.  Putting Open Source to work.




Re: unlimited backup revisions?

2001-03-22 Thread Martin Schwenke

 "yan" == yan seiner [EMAIL PROTECTED] writes:

yan I've been trying to come up with a scripting solution for
yan this for some time, and I'm convinced there isn't one.

yan You definitely want to handle the revisions in the same way
yan as logrotate: keep a certain depth, delete the oldest, and
yan renumber all the older ones.

Another option that I've implemented is based on amount of free disk
space rather than number of incremental backups.  I keep all of the
(date/time based) incrementals on their own filesystem.  Before
starting a new backup I check whether the disk usage on the filesystem
is above a certain threshold and, if it is, I delete the oldest
incremental.  Repeat until disk usage on the incremental filesystem is
below the threshold and then do the new backup.

In this way I don't have to guess the number of incremental backups
that I can afford to keep...  it is based purely on free disk space.
Naturally, if there's an unusual amount of activity on a particular
day then this system can also be screwed over...  :-)
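That threshold loop might look roughly like this in shell, assuming date-named incremental directories so lexical sort order is chronological (the paths and threshold are illustrative):

```shell
#!/bin/sh
# Before each backup run: while the filesystem holding the
# incrementals is over THRESHOLD percent full, delete the oldest
# incremental.  Directory names are date-based (YYYY-MM-DD), so
# lexical sort order is chronological.  Paths/threshold illustrative.
INCR_ROOT=${INCR_ROOT:-/incrementals}
THRESHOLD=${THRESHOLD:-90}

usage_pct() {
    # Percent-used of the filesystem holding $1, digits only.
    df -P "$1" | awk 'NR == 2 { sub(/%/, "", $5); print $5 }'
}

oldest_incremental() {
    ls "$INCR_ROOT" | sort | head -n 1
}

prune_until_under_threshold() {
    while [ "$(usage_pct "$INCR_ROOT")" -gt "$THRESHOLD" ]; do
        oldest=$(oldest_incremental)
        [ -n "$oldest" ] || break          # nothing left to delete
        rm -rf "$INCR_ROOT/$oldest"
    done
}

# Then take the new incremental, e.g.:
#   rsync -a --backup --backup-dir="$INCR_ROOT/$(date +%Y-%m-%d)" src/ dst/
```

As noted above, the retention depth then falls out of free space rather than a guessed count.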

Someone else noted that it is more useful to keep a certain number of
revisions of files, rather than a certain number of days worth of
backups.  It would be relatively easy to implement this sort of scheme
on top of date/time-based incrementals.  Use "find" on each
incremental directory (starting at the oldest) and either keep a map
(using TDB?) of filenames and information about the various copies
around the place or use locate to find how many copies there are of
each file...  or a combination of the 2: the map would say how many
copies there are, but not where they are; if you're over the threshold
then use locate to find and remove the oldest ones...

It isn't cheap, but what else does your system have to do on a Sunday
morning?  :-)
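A rough find/awk sketch of that keep-n-copies pass, standing in for the TDB/locate map mentioned above (the directory layout and depth are assumptions):

```shell
#!/bin/sh
# Across date-named incremental dirs under $1 (oldest name sorts
# first), keep at most $2 copies of each file, deleting the oldest
# surplus copies.  A sketch; the TDB/locate map suggested above
# would scale better than this full find pass.
prune_revisions() {
    root=$1
    keep=$2

    # Emit "relative-path TAB incremental-dir", newest dir first.
    for dir in $(ls "$root" | sort -r); do
        ( cd "$root/$dir" && find . -type f ) |
        while IFS= read -r f; do
            printf '%s\t%s\n' "$f" "$dir"
        done
    done |
    # Copies beyond the first $keep (i.e. the oldest) get printed...
    awk -F'\t' -v keep="$keep" '++seen[$1] > keep { print $2 "/" $1 }' |
    # ...and removed.
    while IFS= read -r victim; do
        rm -f "$root/$victim"
    done
}
```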

I might implement something like that...

peace & happiness,
martin




Re: unlimited backup revisions?

2001-03-22 Thread yan seiner

I glanced at the source code, and it appears pretty trivial (a change to
options.c and a change to backup.c) to implement a user-selected backup
script.  All it appears to involve is adding an option to the command
line and an extra if statement in backup.c

I might give it a shot in my ample spare time...  It would make rsync a
lot more useful to me anyway...  A welcome change from the 5-hour
public meeting I just had to chair - and I have cpu cycles to spare.

--Yan

Martin Schwenke wrote:
 
  "yan" == yan seiner [EMAIL PROTECTED] writes:
 
 yan I've been trying to come up with a scripting solution for
 yan this for some time, and I'm convinced there isn't one.
 
 yan You definitely want to handle the revisions in the same way
 yan as logrotate: keep a certain depth, delete the oldest, and
 yan renumber all the older ones.
 
 Another option that I've implemented is based on amount of free disk
 space rather than number of incremental backups.  I keep all of the
 (date/time based) incrementals on their own filesystem.  Before
 starting a new backup I check whether the disk usage on the filesystem
 is above a certain threshold and, if it is, I delete the oldest
 incremental.  Repeat until disk usage on the incremental filesystem is
 below the threshold and then do the new backup.
 
 In this way I don't have to guess the number of incremental backups
 that I can afford to keep...  it is based purely on free disk space.
 Naturally, if there's an unusual amount of activity on a particular
 day then this system can also be screwed over...  :-)
 
 Someone else noted that it is more useful to keep a certain number of
 revisions of files, rather than a certain number of days worth of
 backups.  It would be relatively easy to implement this sort of scheme
 on top of date/time-based incrementals.  Use "find" on each
 incremental directory (starting at the oldest) and either keep a map
 (using TDB?) of filenames and information about the various copies
 around the place or use locate to find how many copies there are of
 each file...  or a combination of the 2: the map would say how many
 copies there are, but not where they are; if you're over the threshold
 then use locate to find and remove the oldest ones...
 
 It isn't cheap, but what else does your system have to do on a Sunday
 morning?  :-)
 
 I might implement something like that...
 
 peace & happiness,
 martin




unlimited backup revisions?

2001-03-21 Thread Sean J. Schluntz


Sorry if this is a repeat, but there is no search option (that I can find)
on the list server.

Is there a way to make rsync keep unlimited backup copies?  Whenever a file
is changed, for it to (without any of the file merge stuff going on) just push
the earlier backups down a counting number and drop the new one in place.  That
way we would have revision histories on all of the files stored on the
server.

-Sean




Re: unlimited backup revisions?

2001-03-21 Thread Martin Pool

On 21 Mar 2001, "Sean J. Schluntz" [EMAIL PROTECTED] wrote:
 
 Sorry if this is a repeat, but there is no search option (that I can find)
 on the list server.
 
 Is there a way to make rsync keep unlimited backup copies?  Whenever a file
 is changed, for it to (without any of the file merge stuff going on) just push
 the earlier backups down a counting number and drop the new one in place.  That
 way we would have revision histories on all of the files stored on the
 server.

There is no such option at the moment.  People often use

 --backup-dir=/backup/`date +%j`

to put all files on that day in a separate directory, but that's not
quite what you need.  Why not put it in the FAQ wishlist.

-- 
Martin Pool, Human Resource
Linuxcare. Inc.   +61 2 6262 8990
[EMAIL PROTECTED], http://linuxcare.com.au/
Linuxcare.  Putting Open Source to work.




Re: unlimited backup revisions?

2001-03-21 Thread Martin Pool

On 21 Mar 2001, "Sean J. Schluntz" [EMAIL PROTECTED] wrote:
 
  Sorry if this is a repeat, but there is no search option (that I can find)
  on the list server.
  
  Is there a way to make rsync keep unlimited backup copies?  Whenever a file
  is changed, for it to (without any of the file merge stuff going on) just push
  the earlier backups down a counting number and drop the new one in place.  That
  way we would have revision histories on all of the files stored on the
  server.
 
 There is no such option at the moment.  People often use
 
  --backup-dir=/backup/`date +%j`
 
 to put all files on that day in a separate directory, but that's not
 quite what you need.  Why not put it in the FAQ wishlist.
 
 That's what I figured.  Well, I need it for a project so I guess you all
 won't mind if I code it and submit a patch ;)
 
  How does --revisions=XXX sound?  --revisions=0 would be unlimited, any other
 number would be the limiter for the number of revisions.

And when it reaches that number, do you want it to delete old
revisions, or stop making new revisions?

Perhaps something like --backup=numeric would be a better name.  In
the long term it might be better to handle this with scripting.

-- 
Martin Pool

Revolutions do not require corporate support.




Re: unlimited backup revisions?

2001-03-21 Thread Sean J. Schluntz


 That's what I figured.  Well, I need it for a project so I guess you all
 won't mind if I code it and submit a patch ;)
 
 How does --revisions=XXX sound?  --revisions=0 would be unlimited, any other
 number would be the limiter for the number of revisions.

And when it reaches that number, do you want it to delete old
revisions, or stop making new revisions?

You would delete the old one as you continue rolling down.


Perhaps something like --backup=numeric would be a better name.  In
the long term it might be better to handle this with scripting.

I don't see a good scripting solution to this. The closest I could come up
with was using the --backup-dir and then remerging the tree after the copy
and that is a real kludge.  The scripting solution I see would be to clean
up if you had the backup copies set to unlimited so you don't run out of
disk space.

-Sean




Re: unlimited backup revisions?

2001-03-21 Thread Alberto Accomazzi

In message [EMAIL PROTECTED], "Sean J. Schluntz" writes:

 
  That's what I figured.  Well, I need it for a project so I guess you all
  won't mind if I code it and submit a patch ;)
  
  How does --revisions=XXX sound?  --revisions=0 would be unlimited, any other
  number would be the limiter for the number of revisions.
 
 And when it reaches that number, do you want it to delete old
 revisions, or stop making new revisions?
 
 You would delete the old one as you continue rolling down.
 
 
 Perhaps something like --backup=numeric would be a better name.  In
 the long term it might be better to handle this with scripting.

I would suggest not reinventing the wheel and doing this the way GNU cp
does it:

  -V, --version-control=WORD   override the usual version control

The backup suffix is ~, unless set with SIMPLE_BACKUP_SUFFIX.  The
version control may be set with VERSION_CONTROL, values are:

  t, numbered make numbered backups
  nil, existing   numbered if numbered backups exist, simple otherwise
  never, simple   always make simple backups


Unless there is some overwhelming reason not to follow this scheme.
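For reference, here is that numbered-backup behaviour in action with GNU cp (requires GNU coreutils; --backup=numbered is the newer spelling of -V t):

```shell
#!/bin/sh
# GNU cp's numbered backups: overwriting an existing destination
# first saves it as dest.~1~, dest.~2~, ...  (requires GNU coreutils)
cd "$(mktemp -d)" || exit 1

printf 'v1\n' > data
printf 'v2\n' > new
cp --backup=numbered new data      # old data saved as data.~1~
printf 'v3\n' > new
cp --backup=numbered new data      # previous data saved as data.~2~
```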


-- Alberto



Alberto Accomazzi  mailto:[EMAIL PROTECTED]
NASA Astrophysics Data System  http://adsabs.harvard.edu
Harvard-Smithsonian Center for Astrophysicshttp://cfawww.harvard.edu
60 Garden Street, MS 83, Cambridge, MA 02138 USA   





Re: unlimited backup revisions?

2001-03-21 Thread Martin Pool

On 22 Mar 2001, Alberto Accomazzi [EMAIL PROTECTED] wrote:
   -V, --version-control=WORD   override the usual version control
 
 The backup suffix is ~, unless set with SIMPLE_BACKUP_SUFFIX.  The
 version control may be set with VERSION_CONTROL, values are:
 
   t, numbered make numbered backups
   nil, existing   numbered if numbered backups exist, simple otherwise
   never, simple   always make simple backups

Recent GNU standards and fileutils deprecate the 'version-control'
name (because it's not really version control), and call this
parameter --backup instead. :-)

I don't think many people use the elisp-style names.

-- 
Martin Pool