Re: unlimited backup revisions?

2001-03-22 Thread yan seiner

I glanced at the source code, and it appears pretty trivial (a change to
options.c and a change to backup.c) to implement a user-selected backup
script.  All it seems to involve is adding an option to the command
line and an extra if statement in backup.c.

I might give it a shot in my ample spare time...  It would make rsync a
lot more useful to me anyway.  A welcome change from the 5-hour
public meeting I just had to chair - and I have CPU cycles to spare.

--Yan

Martin Schwenke wrote:
> 
> > "yan" == yan seiner <[EMAIL PROTECTED]> writes:
> 
> yan> I've been trying to come up with a scripting solution for
> yan> this for some time, and I'm convinced there isn't one.
> 
> yan> You definitely want to handle the revisions in the same way
> yan> as logrotate: keep a certain depth, delete the oldest, and
> yan> renumber all the older ones.
> 
> Another option that I've implemented is based on amount of free disk
> space rather than number of incremental backups.  I keep all of the
> (date/time based) incrementals on their own filesystem.  Before
> starting a new backup I check whether the disk usage on the filesystem
> is above a certain threshold and, if it is, I delete the oldest
> incremental.  Repeat until disk usage on the incremental filesystem is
> below the threshold and then do the new backup.
> 
> In this way I don't have to guess the number of incremental backups
> that I can afford to keep...  it is based purely on free disk space.
> Naturally, if there's an unusual amount of activity on a particular
> day then this system can also be screwed over...  :-)
> 
> Someone else noted that it is more useful to keep a certain number of
> revisions of files, rather than a certain number of days worth of
> backups.  It would be relatively easy to implement this sort of scheme
> on top of date/time-based incrementals.  Use "find" on each
> incremental directory (starting at the oldest) and either keep a map
> (using TDB?) of filenames and information about the various copies
> around the place or use locate to find how many copies there are of
> each file...  or a combination of the 2: the map would say how many
> copies there are, but not where they are; if you're over the threshold
> then use locate to find and remove the oldest ones...
> 
> It isn't cheap, but what else does your system have to do on a Sunday
> morning?  :-)
> 
> I might implement something like that...
> 
> peace & happiness,
> martin




Re: gid problem on DGUX

2001-03-22 Thread Tim Potter

Martin Pool writes:

> > I was not sure how to ask a question on the Rsync FAQ page; the only
> > links it had were for answering questions. I saw your email address all over
> > the FAQ page, so I figured I'd try emailing you. Here's my question:
> The most likely problem is that there is a 
> 
>   gid = somegroup
> 
> entry in your rsyncd.conf that does not correspond to an entry in
> /etc/group.  For example, you may be trying to set the group to
> `nobody', but perhaps DGUX requires `nogroup' or `65533' or something
> similar.  

I've added this to the Faq-O-Matic thingy.


Tim.





gid problem on DGUX

2001-03-22 Thread Martin Pool

The most likely problem is that there is a 

  gid = somegroup

entry in your rsyncd.conf that does not correspond to an entry in
/etc/group.  For example, you may be trying to set the group to
`nobody', but perhaps DGUX requires `nogroup' or `65533' or something
similar.  
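To make that concrete (the module name and path below just mirror the
forwarded report, and `nobody' is only an example), a daemon config like

  [tst]
  path = /rsync
  uid = nobody
  gid = nobody

will give exactly "@ERROR: invalid gid" if the group named on the
"gid =" line doesn't exist on the server.  A quick check is

  grep nobody /etc/group

and the fix is either to create that group or to point "gid =" at a
group (or numeric group id) that the DGUX box actually has.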

-- 
Martin


- Forwarded message from Gerry Maddock <[EMAIL PROTECTED]> -

Message-ID: <[EMAIL PROTECTED]>
Date: Thu, 22 Mar 2001 10:08:14 -0500
From: Gerry Maddock <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Subject: rsync FAQ

I was not sure how to ask a question on the Rsync FAQ page; the only
links it had were for answering questions. I saw your email address all over
the FAQ page, so I figured I'd try emailing you. Here's my question:
I just downloaded the latest rsync and installed it on my DGUX sys. I
run rsync on all of my linux boxes, and it runs well. Once I had the
/rsync dir created, wrote an rsyncd.conf in /etc, I started rsync with
the --daemon option. Next, from one of my linux boxes, I tried to rsync
files out of my dir listed in rsyncd.conf, but I always get this error:
[root@penguin /]# /usr/bin/rsync -e ssh 'bailey::tst/*' /tmp/
@ERROR: invalid gid

I was just wondering if you or anyone has come across this error before
and has any suggestions for me. THANK YOU IN ADVANCE!


- End forwarded message -




Re: unlimited backup revisions?

2001-03-22 Thread Martin Schwenke

> "yan" == yan seiner <[EMAIL PROTECTED]> writes:

yan> I've been trying to come up with a scripting solution for
yan> this for some time, and I'm convinced there isn't one.

yan> You definitely want to handle the revisions in the same way
yan> as logrotate: keep a certain depth, delete the oldest, and
yan> renumber all the older ones.

Another option that I've implemented is based on amount of free disk
space rather than number of incremental backups.  I keep all of the
(date/time based) incrementals on their own filesystem.  Before
starting a new backup I check whether the disk usage on the filesystem
is above a certain threshold and, if it is, I delete the oldest
incremental.  Repeat until disk usage on the incremental filesystem is
below the threshold and then do the new backup.
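In shell it's only a few lines -- this is just a sketch (the filesystem,
the 90% threshold and the actual backup command are placeholders, and it
assumes the incremental directories sort oldest-first by name):

  #!/bin/sh
  FS=/backups/incrementals
  LIMIT=90
  # delete the oldest incrementals until disk usage drops below the threshold
  while [ `df -P $FS | awk 'NR==2 {print $5}' | tr -d '%'` -ge $LIMIT ]; do
      oldest=`ls $FS | sort | head -1`
      [ -z "$oldest" ] && break
      rm -rf "$FS/$oldest"
  done
  # ...and only then kick off the new date/time-stamped incremental backup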

In this way I don't have to guess the number of incremental backups
that I can afford to keep...  it is based purely on free disk space.
Naturally, if there's an unusual amount of activity on a particular
day then this system can also be screwed over...  :-)

Someone else noted that it is more useful to keep a certain number of
revisions of files, rather than a certain number of days worth of
backups.  It would be relatively easy to implement this sort of scheme
on top of date/time-based incrementals.  Use "find" on each
incremental directory (starting at the oldest) and either keep a map
(using TDB?) of filenames and information about the various copies
around the place or use locate to find how many copies there are of
each file...  or a combination of the 2: the map would say how many
copies there are, but not where they are; if you're over the threshold
then use locate to find and remove the oldest ones...
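A sketch of the counting half of that, without the TDB map (the paths and
the keep-count are made up, and it assumes the incrementals all sit under
one directory whose subdirectory names sort oldest-first):

  #!/bin/sh
  INCR=/backups/incrementals
  KEEP=4
  cd $INCR || exit 1
  # count how many incrementals hold a copy of each relative path
  for d in *; do
      ( cd "$d" && find . -type f -print )
  done | sort | uniq -c | while read count path; do
      [ "$count" -gt "$KEEP" ] && echo "$path"
  done
  # each path printed has more than $KEEP copies; the ones in the oldest
  # incremental directories are the candidates for removal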

It isn't cheap, but what else does your system have to do on a Sunday
morning?  :-)

I might implement something like that...

peace & happiness,
martin




RE: unlimited backup revisions?

2001-03-22 Thread Willis, Ian (Ento, Canberra)

What I would like is to create a backup server that maintains a copy of all
the files on a server and also maintains 20 or so deltas, so that the last
twenty or so revisions can be rebuilt. Is this possible, or what would be the
best approach to create such a beast?

--
Ian Willis
Systems Administrator
Division of Entomology CSIRO
GPO Box 1700 
Canberra ACT 2601
ph  02 6246 4391
fax 02 6246 4000


-Original Message-
From: Martin Pool [mailto:[EMAIL PROTECTED]]
Sent: Thursday, 22 March 2001 1:17 PM
To: Sean J. Schluntz
Cc: [EMAIL PROTECTED]
Subject: Re: unlimited backup revisions?


On 21 Mar 2001, "Sean J. Schluntz" <[EMAIL PROTECTED]> wrote:
> 
> Sorry if this is a repeat, but there is no search option (that I can find)
> on the list server.
> 
> Is there a way to make rsync keep unlimited backup copies?  Whenever a file
> is changed, have it (without any of the file merge stuff going on) just push
> the earlier backups down a counting number and drop the new one in place.  That
> way we would have revision histories on all of the files stored on the
> server.

There is no such option at the moment.  People often use

 --backup-dir=/backup/`date +%j`

to put all files on that day in a separate directory, but that's not
quite what you need.  Why not put it in the FAQ wishlist.

-- 
Martin Pool, Human Resource
Linuxcare. Inc.   +61 2 6262 8990
[EMAIL PROTECTED], http://linuxcare.com.au/
Linuxcare.  Putting Open Source to work.




Re: rsync stops during transfer

2001-03-22 Thread Ragnar Wisløff

On Thursday 22 March 2001, 19:39, Dave Dykstra wrote:

> >
> > >   rsync version(s)
> >
> > 2.4.1 protocol v. 24
>
> That's your problem.  SSH hangs were a known problem in rsync 2.4.1.
> Upgrade to 2.4.6.
>

Right. Some day I will learn to use only the latest versions. Thanks.

R.




Re: ssh

2001-03-22 Thread Ragnar Wisløff

On Thursday 22 March 2001, 16:21, you wrote:
> Hi Williams,
> You were right, my sshd was not running on host2.
>
> However, when I re-start sshd and run the command I am asked for a root
> password.
> When I run rsync with the --rsh option I am NOT prompted for a password.
> 
> How can I rsync with -e ssh without being prompted for a password? I run the
> commands from a script and can't be asked for a password.
>
> ${RSYNC_COMMAND} ${DOCUMENT_ROOT} host:${DOCUMENT_ROOT} > ${LOG_FILEB}
>
> RSYNC_COMMAND=/usr/local/bin/rsync --rsync-path=/usr/local/bin/rsync -av -e
> /usr/local/bin/ssh
>
> Any hints?


You need to generate a set of keys for ssh to use when authenticating. This 
is done using the ssh-keygen command which is part of the OpenSSH package. 
When generating, don't specify a passphrase when asked. You need to move your 
identity.pub (public key) to the host you are connecting to, while the 
identity key stays on the host you are connecting from. If this is all 
gobbledygook (or perhaps as understandable as Norwegian), then you need to 
read a bit of the OpenSSH documentation. Be aware of the fact that there are 
at least two versions of the ssh protocol in use, and several versions of the 
keys. All to make life easy. Come back if you hit the wall.
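The bare-bones recipe looks something like this (protocol 1 file names;
with protocol 2 the key file names differ, and "host2" is just a stand-in
for the machine you are copying to):

  # on the machine you run rsync from:
  ssh-keygen                 # accept the default file, give an empty passphrase
  scp ~/.ssh/identity.pub root@host2:/tmp/mykey.pub

  # then, logged in as root on host2:
  mkdir -p ~/.ssh
  cat /tmp/mykey.pub >> ~/.ssh/authorized_keys

After that "ssh host2" -- and therefore "rsync -e ssh" -- should go through
without asking for a password.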


Best regards
-- 

Ragnar Wisløff (speaker of Norwegian)
--
life is a reach. then you gybe.





Re: RSYNC PROBLEM on DGUX

2001-03-22 Thread Gerry Maddock

Thanks Dave, that was the problem; Tim Conway helped me with that one. One
other thing I just noticed:
When I rsync from a linux box to a linux box using:
/usr/bin/rsync -e ssh 'bailey::tst/*' /tmp/
Linux knows the "*" means all; but when I'm on my linux box and I try to sync
with the DGUX box, it "seems" to treat "*" as an actual file name. So I
actually have to type in the file or files I want to sync, e.g.:
/usr/bin/rsync -e ssh 'bailey::tst/test.txt' /tmp/
WEIRD!

Dave Dykstra wrote:

> On Thu, Mar 22, 2001 at 01:16:15PM -0500, Gerry Maddock wrote:
> > I just downloaded the latest rsync and installed it on my DGUX sys. I
> > run rsync on all of my linux boxes, and it runs well. Once I had the
> > /rsync dir created, wrote an rsyncd.conf in /etc, I started rsync with
> > the --daemon option. Next, from one of my linux boxes, I tried to rsync
> > files out of my dir listed in rsyncd.conf, but I always get this error:
> > [root@penguin /]# /usr/bin/rsync -e ssh 'bailey::tst/*' /tmp/
> > @ERROR: invalid gid
>
> Note that when you use the '::' syntax, -e ssh is ignored.
>
> You don't show your rsyncd.conf, but by default it sets 'gid = nobody' and
> perhaps you don't have such a group.  See the rsyncd.conf.5 man page.
>
> - Dave Dykstra





Re: RSYNC PROBLEM on DGUX

2001-03-22 Thread Dave Dykstra

On Thu, Mar 22, 2001 at 01:16:15PM -0500, Gerry Maddock wrote:
> I just downloaded the latest rsync and installed it on my DGUX sys. I
> run rsync on all of my linux boxes, and it runs well. Once I had the
> /rsync dir created, wrote an rsyncd.conf in /etc, I started rsync with
> the --daemon option. Next, from one of my linux boxes, I tried to rsync
> files out of my dir listed in rsyncd.conf, but I always get this error:
> [root@penguin /]# /usr/bin/rsync -e ssh 'bailey::tst/*' /tmp/
> @ERROR: invalid gid

Note that when you use the '::' syntax, -e ssh is ignored.

You don't show your rsyncd.conf, but by default it sets 'gid = nobody' and
perhaps you don't have such a group.  See the rsyncd.conf.5 man page.

- Dave Dykstra




Re: rsync stops during transfer

2001-03-22 Thread Dave Dykstra

On Thu, Mar 22, 2001 at 01:25:37PM +0100, Ragnar Wisløff wrote:
> Robert Scholten skrev:
> > Hi Ragnar,
> 
> Wow, that's quick response! Not waiting for Mir to fall on top of you, 
> I hope ...
> 
> > It's a common (and nagging) problem.  Could you post again, with:
> 
> :-(
> 
> >
> >   Machine type(s)
> 
> 2 x Dell PowerEdge 2450 (identical)
> 
> >   Operating system(s)
> 
> Red Hat Linux 6.2, kernel 2.2.16
> 
> >   rsync version(s)
> 
> 2.4.1 protocol v. 24

That's your problem.  SSH hangs were a known problem in rsync 2.4.1.
Upgrade to 2.4.6.

- Dave Dykstra




RSYNC PROBLEM on DGUX

2001-03-22 Thread Gerry Maddock

I just downloaded the latest rsync and installed it on my DGUX sys. I
run rsync on all of my linux boxes, and it runs well. Once I had the
/rsync dir created, wrote an rsyncd.conf in /etc, I started rsync with
the --daemon option. Next, from one of my linux boxes, I tried to rsync
files out of my dir listed in rsyncd.conf, but I always get this error:
[root@penguin /]# /usr/bin/rsync -e ssh 'bailey::tst/*' /tmp/
@ERROR: invalid gid

I was just wondering if you or anyone has come across this error before
and has any suggestions for me. THANK YOU IN ADVANCE!







binaries in cvs (OT!)

2001-03-22 Thread Rusty Carruth


I asked a friend of mine who has used cvs extensively, and he said:

When adding files to cvs (somewhat equivalent to 'cleartool mkelem ...'),
you must tell cvs to treat a file as binary using the -kb flag, and/or 
set global defaults for different file types (e.g., *.jpg) using a 
cvswrappers file in the repository, e.g.:

cvs add -kb binaryfile.jpg

Thereafter, the file is treated as binary, and no additional effort is needed.
Change logs are preserved, but cvs leaves it up to the user to figure 
out how to diff and merge 2 versions of a binary file (i.e. cvs understands 
that a text diff of 2 binary files doesn't make sense).
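For the cvswrappers route, a line like the following in CVSROOT/cvswrappers
should do the same thing per file type (quoting from memory -- check the CVS
manual for the exact syntax):

  *.jpg -k 'b'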

Steve


Rusty Carruth  Email: [EMAIL PROTECTED] or [EMAIL PROTECTED]
Voice: (480) 345-3621  SnailMail: Schlumberger ATE
FAX:   (480) 345-8793 7855 S. River Parkway, Suite 116
Ham: N7IKQ @ 146.82+,pl 162.2 Tempe, AZ 85284-1825
ICBM: 33 20' 44"N   111 53' 47"W




Re[2]: unlimited backup revisions?

2001-03-22 Thread Rusty Carruth

"Sean J. Schluntz" <[EMAIL PROTECTED]> wrote:
> 
> >Maybe the solution is a --backup-script= option, that would call an
> >external script and hand it the filename. That way, each user could
> >customize it to their heart's content.
> 
> Now, I thought about that, but it seems like a way to really pound your system.
> Like doing a find -exec, if there are a lot of files then that script is
> going to get called a lot.
> 
> -Sean
> 

Maybe the solution here is to heavily document the fact that you can
pound your system to death using that option, just like using find 
(wow, what a surprise ;-)

Then, of course, someone will come along with a program that reads filenames from
a pipe and does the backup, and they'll want --backup-pipe-process=/path/to/executable
or --backup-pipe=/path/to/pipe or --backup-daemon=ipaddr:port .  Hmm. Is
that good or bad? ;-)

rc



Rusty Carruth  Email: [EMAIL PROTECTED] or [EMAIL PROTECTED]
Voice: (480) 345-3621  SnailMail: Schlumberger ATE
FAX:   (480) 345-8793 7855 S. River Parkway, Suite 116
Ham: N7IKQ @ 146.82+,pl 162.2 Tempe, AZ 85284-1825
ICBM: 33 20' 44"N   111 53' 47"W




Re: unlimited backup revisions?

2001-03-22 Thread Sean J. Schluntz


>Maybe the solution is a --backup-script= option, that would call an
>external script and hand it the filename. That way, each user could
>customize it to their heart's content.

Now, I thought about that, but it seems like a way to really pound your system.
Like doing a find -exec, if there are a lot of files then that script is
going to get called a lot.

-Sean




Re: unlimited backup revisions?

2001-03-22 Thread Sean J. Schluntz


In message <[EMAIL PROTECTED]>, Mike Lang writes:
>Sounds like you want rsync to be CVS, or maybe CVS to have an rsync module.
>
>How about rsync then run a script to do a cvs commit?


How well does CVS deal with binary files? What would the change log look
like for a JPG that has been updated, or for a binary database?  I've seen
the past discussions on this, and I think the revisions option has a place
for some people.

Thanks to those who have given feedback on the flag. Since it's something I
need, I'm going to code it anyway and give out the diff; those who need it
can use it :)  The joy of open source :)

-Sean




Re: unlimited backup revisions?

2001-03-22 Thread Yan Seiner

From the little bit I know of CVS, it's overkill for what I need.  CVS keeps
a revision history, authors, etc.

All I need is for something (maybe an external archive script to rsync?)
to:
delete the oldest backup
rotate the remaining backups
create the new backup with a .1

Maybe the solution is a --backup-script= option that would call an
external script and hand it the filename. That way, each user could
customize it to their heart's content.
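Roughly this kind of thing is what the (hypothetical) --backup-script=
option would call once per file, with "$1" being the file rsync is about
to replace and the depth of 5 picked out of the air:

  #!/bin/sh
  # logrotate-style rotation: drop the oldest copy, shift the rest up,
  # then save the current version as .1
  DEPTH=5
  f="$1"
  rm -f "$f.$DEPTH"
  n=$DEPTH
  while [ $n -gt 1 ]; do
      prev=`expr $n - 1`
      [ -f "$f.$prev" ] && mv "$f.$prev" "$f.$n"
      n=$prev
  done
  cp -p "$f" "$f.1"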

--Yan

Mike Lang wrote:
> 
> Sounds like you want rsync to be CVS, or maybe CVS to have an rsync module.
> 
> How about rsync then run a script to do a cvs commit?
> 
>  --Mike
> At 10:33 AM 3/22/01 -0500, Yan Seiner wrote:
> >OK, but what I'm trying to do is keep the last n revisions - NOT the last
> >n weeks.
> >
> >So what if I have a file that changes once every 6 weeks?  I want to
> >keep 4 revisions, so that means I have to go back 6 months.
> 
> ___
> Michael Lang[EMAIL PROTECTED]
> Los AlamosNational Laboratory
> ph:505-665-5756, fax:665-5638
> MS B256, Los Alamos, NM 87545





Re: unlimited backup revisions?

2001-03-22 Thread Mike Lang

Sounds like you want rsync to be CVS, or maybe CVS to have an rsync module.

How about rsync then run a script to do a cvs commit?

 --Mike
At 10:33 AM 3/22/01 -0500, Yan Seiner wrote:
>OK, but what I'm trying to do is keep the last n revisions - NOT the last
>n weeks.
>
>So what if I have a file that changes once every 6 weeks?  I want to
>keep 4 revisions, so that means I have to go back 6 months.

___
Michael Lang    [EMAIL PROTECTED]
Los Alamos National Laboratory
ph:505-665-5756, fax:665-5638
MS B256, Los Alamos, NM 87545






Re: unlimited backup revisions?

2001-03-22 Thread Yan Seiner

OK, but what I'm trying to do is keep the last n revisions - NOT the last
n weeks.

So what if I have a file that changes once every 6 weeks?  I want to
keep 4 revisions, so that means I have to go back 6 months.

But now the file next to it gets updated daily...

You see my problem?

I want to keep a specific depth of old files, not a time increment.  I
have jobs that remain dormant for years, then re-activate and get a
flurry of activity, then go dormant again.

The problem is that if something happens as a job is going dormant, I
may not realize I've lost that particular file until months later.  I
lost an entire client directory (6 projects and hundreds of files)
exactly this way.  So I want to keep the last n revisions, no matter
how old they are.

--Yan

[EMAIL PROTECTED] wrote:
> 
> There isn't one?
> rsync has the --backup-dir= option.
> Keep each set of backups in a different directory, then merge them back into the
> main hierarchy if needed.  Since they're already sifted out, it'd be easy to archive
> them, as well.
> If it's a daily, --backup-dir=$(( $(date +%j) % 28 )) will keep 4 weeks' worth, then
> go back over the old ones.
> Of course, you'd probably want to get that integer separately, and use it to delete
> the one you're about to write into, to keep it clean.
> 
> The point is, somebody already anticipated your need, and made it easy to script it.
> 
> Tim Conway
> [EMAIL PROTECTED]
> 303.682.4917
> Philips Semiconductor - Colorado TC
> 1880 Industrial Circle
> Suite D
> Longmont, CO 80501
> 
> [EMAIL PROTECTED]@[EMAIL PROTECTED] on 03/22/2001 03:17:35 AM
> Sent by:[EMAIL PROTECTED]
> To: [EMAIL PROTECTED]@SMTP
> cc:
> Subject:Re: unlimited backup revisions?
> Classification:
> 
> I've been trying to come up with a scripting solution for this for some time, and
> I'm convinced there isn't one.
> 
> You definitely want to handle the revisions in the same way as logrotate: keep a
> certain depth, delete the oldest, and renumber all the older ones.
> 
> > If you want to get real ambitious, you could even include an option for compressing
> the backups to save space.
> 
> > My biggest concern is a disk failure on the primary during sync (unlikely, but I
> had my main raid fail during sync, and it wiped out both my mirrors.)  A managed
> backup strategy is the only thing that saved my bacon.
> 
> > Some tools for managing the backups (listing them, "what's the revision history on
> > this file" type of queries, doing a mass recovery, etc.) would be useful.
> 
> Just some random thoughts,
> 
> --Yan
> 
> "Sean J. Schluntz" wrote:
> 
> > >> That's what I figured.  Well, I need it for a project so I guess you all
> > >> won't mind if I code it and submit a patch ;)
> > >>
> > >> How does --revisions=XXX sound.  --revisions=0 would be unlimited, any other
> > >> number would be the limiter for the number of revisions.
> > >
> > >And when it reaches that number, do you want it to delete old
> > >revisions, or stop making new revisions?
> >
> > You would delete the old one as you continue rolling down.
> >
> > >Perhaps something like --backup=numeric would be a better name.  In
> > >the long term it might be better to handle this with scripting.
> >
> > I don't see a good scripting solution to this. The closest I could come up
> > with was using the --backup-dir and then remerging the tree after the copy
> > and that is a real kludge.  The scripting solution I see would be to clean
> > up if you had the backup copies set to unlimited so you don't run out of
> > disk space.
> >
> > -Sean





RE: ssh

2001-03-22 Thread Magdalena Hewryk

Hi Williams, 
You were right, my sshd was not running on host2.  

However, when I re-start sshd and run the command I am asked for a root
password.
When I run rsync with the --rsh option I am NOT prompted for a password.

How can I rsync with -e ssh without being prompted for a password? I run the
commands from a script and can't be asked for a password.

${RSYNC_COMMAND} ${DOCUMENT_ROOT} host:${DOCUMENT_ROOT} > ${LOG_FILEB}

RSYNC_COMMAND=/usr/local/bin/rsync --rsync-path=/usr/local/bin/rsync -av -e
/usr/local/bin/ssh 
 
Any hints?
Thanks,
Magda

> -Original Message-
> From: Williams, William [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, March 22, 2001 10:07 AM
> To: 'Magdalena Hewryk'; [EMAIL PROTECTED]
> Subject: RE: ssh
> 
> 
> Magda,
> 
> Problem 1 here is caused by sshd either not running
> or not being installed on the receiving host (host2).
> 
> If you login to host2 and start up sshd you should be
> in pretty good shape.
> 
> -Original Message-
> From: Magdalena Hewryk [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, March 22, 2001 9:46 AM
> To: [EMAIL PROTECTED]
> Subject: ssh
> 
> 
> Hi,
> I'm trying to use the "ssh" option instead of "rsh" but I am getting
> some
> errors.
> Below are examples:
> 
> 1.
> # /usr/local/bin/rsync --rsync-path=/usr/local/bin/rsync -av -e ssh
> /tmp/hello root@host2:/tmp
> Secure connection to host2 refused; reverting to insecure method.
> Using rsh.  WARNING: Connection will not be encrypted.
> building file list ... done
> hello
> wrote 103 bytes  read 32 bytes  24.55 bytes/sec
> total size is 5  speedup is 0.04
> 
> (if I add the whole path "-e /usr/local/bin/ssh" I get the 
> same WARNING)
> 
> 2. 
> # /usr/local/bin/rsync --rsync-path=/usr/local/bin/rsync --ssh
> "/usr/local/bin/ssh" -av /tmp/hello root@host2:/tmp>
> /usr/local/bin/rsync: unrecognized option `--ssh'
> 
> I do have ssh installed on the system and I can ssh to other machines:
> # which ssh
> /usr/local/bin/ssh
> 
> # ssh -l magda hostX
> magda@hostX password: 
> Last login: Thu Mar 22 09:24:59 2001 from 000.00.00.00
> 
> Any idea why ssh doesn't work for me?
> 
> Thanks,
> Magda
> 




Re: unlimited backup revisions?

2001-03-22 Thread tim . conway

There isn't one?
rsync has the --backup-dir= option.
Keep each set of backups in a different directory, then merge them back into the main
hierarchy if needed.  Since they're already sifted out, it'd be easy to archive them,
as well.
If it's a daily, --backup-dir=$(( $(date +%j) % 28 )) will keep 4 weeks' worth, then go
back over the old ones.
Of course, you'd probably want to get that integer separately, and use it to delete
the one you're about to write into, to keep it clean.

The point is, somebody already anticipated your need, and made it easy to script it.
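Spelled out as a wrapper, it's something like this (a sketch only -- the
paths, module name and the 28-slot cycle are whatever suits you; it assumes
the backup server pulls from the source host):

  #!/bin/sh
  day=`date +%j`             # day of year, zero-padded (e.g. "085")
  slot=`expr $day % 28`      # expr reads it as decimal, so the leading 0 is harmless
  rm -rf /backup/old/$slot   # clear the slot we're about to reuse
  rsync -a --delete -b --backup-dir=/backup/old/$slot \
        srchost::module /backup/current/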

Tim Conway
[EMAIL PROTECTED]
303.682.4917
Philips Semiconductor - Colorado TC
1880 Industrial Circle
Suite D
Longmont, CO 80501





[EMAIL PROTECTED]@[EMAIL PROTECTED] on 03/22/2001 03:17:35 AM
Sent by:[EMAIL PROTECTED]
To: [EMAIL PROTECTED]@SMTP
cc:  
Subject:Re: unlimited backup revisions?
Classification: 

I've been trying to come up with a scripting solution for this for some time, and
I'm convinced there isn't one.

You definitely want to handle the revisions in the same way as logrotate: keep a
certain depth, delete the oldest, and renumber all the older ones.

If you want to get real ambitious, you could even include an option for compressing
the backups to save space.

My biggest concern is a disk failure on the primary during sync (unlikely, but I
had my main raid fail during sync, and it wiped out both my mirrors.)  A managed
backup strategy is the only thing that saved my bacon.

Some tools for managing the backups (listing them, "what's the revision history on
this file" type of queries, doing a mass recovery, etc.) would be useful.

Just some random thoughts,

--Yan

"Sean J. Schluntz" wrote:

> >> That's what I figured.  Well, I need it for a project so I guess you all
> >> won't mind if I code it and submit a patch ;)
> >>
> >> How does --revisions=XXX sound.  --revisions=0 would be unlimited, any other
> >> number would be the limiter for the number of revisions.
> >
> >And when it reaches that number, do you want it to delete old
> >revisions, or stop making new revisions?
>
> You would delete the old one as you continue rolling down.
>
> >Perhaps something like --backup=numeric would be a better name.  In
> >the long term it might be better to handle this with scripting.
>
> I don't see a good scripting solution to this. The closest I could come up
> with was using the --backup-dir and then remerging the tree after the copy
> and that is a real kludge.  The scripting solution I see would be to clean
> up if you had the backup copies set to unlimited so you don't run out of
> disk space.
>
> -Sean








RE: ssh

2001-03-22 Thread Williams, William

Magda,

Problem 1 here is caused by sshd either not running
or not being installed on the receiving host (host2).

If you login to host2 and start up sshd you should be
in pretty good shape.

-Original Message-
From: Magdalena Hewryk [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 22, 2001 9:46 AM
To: [EMAIL PROTECTED]
Subject: ssh


Hi,
I'm trying to use the "ssh" option instead of "rsh" but I am getting
some
errors.
Below are examples:

1.
# /usr/local/bin/rsync --rsync-path=/usr/local/bin/rsync -av -e ssh
/tmp/hello root@host2:/tmp
Secure connection to host2 refused; reverting to insecure method.
Using rsh.  WARNING: Connection will not be encrypted.
building file list ... done
hello
wrote 103 bytes  read 32 bytes  24.55 bytes/sec
total size is 5  speedup is 0.04

(if I add the whole path "-e /usr/local/bin/ssh" I get the same WARNING)

2. 
# /usr/local/bin/rsync --rsync-path=/usr/local/bin/rsync --ssh
"/usr/local/bin/ssh" -av /tmp/hello root@host2:/tmp>
/usr/local/bin/rsync: unrecognized option `--ssh'

I do have ssh installed on the system and I can ssh to other machines:
# which ssh
/usr/local/bin/ssh

# ssh -l magda hostX
magda@hostX password: 
Last login: Thu Mar 22 09:24:59 2001 from 000.00.00.00

Any idea why ssh doesn't work for me?

Thanks,
Magda




ssh

2001-03-22 Thread Magdalena Hewryk

Hi,
I'm trying to use the "ssh" option instead of "rsh" but I am getting some
errors.
Below are examples:

1.
# /usr/local/bin/rsync --rsync-path=/usr/local/bin/rsync -av -e ssh
/tmp/hello root@host2:/tmp
Secure connection to host2 refused; reverting to insecure method.
Using rsh.  WARNING: Connection will not be encrypted.
building file list ... done
hello
wrote 103 bytes  read 32 bytes  24.55 bytes/sec
total size is 5  speedup is 0.04

(if I add the whole path "-e /usr/local/bin/ssh" I get the same WARNING)

2. 
# /usr/local/bin/rsync --rsync-path=/usr/local/bin/rsync --ssh
"/usr/local/bin/ssh" -av /tmp/hello root@host2:/tmp>
/usr/local/bin/rsync: unrecognized option `--ssh'

I do have ssh installed on the system and I can ssh to other machines:
# which ssh
/usr/local/bin/ssh

# ssh -l magda hostX
magda@hostX password: 
Last login: Thu Mar 22 09:24:59 2001 from 000.00.00.00

Any idea why ssh doesn't work for me?

Thanks,
Magda




Re: rsync stops during transfer

2001-03-22 Thread Vinay Bharel

I get the same thing when I use Rsync on BSD/OS 4.2 to mirror two hard
drives. 

If there are a lot of files, it will get stuck and then I usually have to
CTRL+C it and start again.

The machine has plenty of RAM and free space.

- Vinay Bharel <[EMAIL PROTECTED]>

On Thu, 22 Mar 2001, Ragnar Wisløff wrote:

> Hello list members,
> 
> I've come up against an rsync problem which I hope someone here can 
> help me with.
> 
> Running rsync to transfer a relatively large number of files causes 
> rsync to freeze during transfer. I'm running rsync from the command 
> line, no daemon or any such stuff. This is the command:
> 
> # rsync -a -e ssh /usr2 rmach:/smach --verbose --stat
> 
> There is no error message, the transfer just stops. There is plenty of 
> space on the receiving machine. I've tried running rsync through 
> strace, and the message when transfer stops is:
> 
> select(5, [NULL], 4, [NULL], {60,0}) = 0 (timeout)
> 
> The file rsync stops at varies, there seems to be no pattern with e.g. 
> the files being open. No memory problems, no inode-problems, no 
> max-files problems.
> 
> Any help appreciated.
> 
> 
> Best regards,
> -- 
> Ragnar Wisløff
> Linuxlabs AS
> Oslo, Norway
> 
> 





Re: rsync stops during transfer

2001-03-22 Thread Ragnar Wisløff

Robert Scholten wrote:
> Hi Ragnar,

Wow, that's a quick response! Not waiting for Mir to fall on top of you, 
I hope ...

> It's a common (and nagging) problem.  Could you post again, with:

:-(

>
>   Machine type(s)

2 x Dell PowerEdge 2450 (identical)

>   Operating system(s)

Red Hat Linux 6.2, kernel 2.2.16

>   rsync version(s)

2.4.1 protocol v. 24

>   ssh version(s)

OpenSSH 2.1.1, protocol 1.5/2.0
compiled with SSL

>   memory available on each side

512MB, 530MB swap

>   number of files on each side

Approx. 10,000-20,000 files, 9.3 GB


Ragnar





rsync stops during transfer

2001-03-22 Thread Ragnar Wisløff

Hello list members,

I've come up against an rsync problem which I hope someone here can 
help me with.

Running rsync to transfer a relatively large number of files causes 
rsync to freeze during transfer. I'm running rsync from the command 
line, no daemon or any such stuff. This is the command:

# rsync -a -e ssh /usr2 rmach:/smach --verbose --stat

There is no error message, the transfer just stops. There is plenty of 
space on the receiving machine. I've tried running rsync through 
strace, and the message when transfer stops is:

select(5, [NULL], 4, [NULL], {60,0}) = 0 (timeout)

The file rsync stops at varies, there seems to be no pattern with e.g. 
the files being open. No memory problems, no inode-problems, no 
max-files problems.

Any help appreciated.


Best regards,
-- 
Ragnar Wisløff
Linuxlabs AS
Oslo, Norway





Re: rsync-speed with win98

2001-03-22 Thread Robert Scholten

What happens if you do a "dry run", i.e. with the -n option? That way you
can tell if it's network-related.  I've found that network speed with
cygwin stuff is pretty sad.  For example, using Tridge's socklib network
speed test (bewdiful - thanks Tridge!), I get 2MB/s under Win2k+cygwin,
vs. 10MB/s under Linux, same box in both cases.
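Something along these lines (borrowing Dirk's own command) separates the
file-list/protocol overhead from the actual copying:

  time rsync -n --delete -arvW . one::rsynctest   # dry run: builds the file list only
  time rsync    --delete -arvW . one::rsynctest   # the real transfer

If the dry run already eats a big chunk of the 49 minutes, the problem is
in the scanning / network layer rather than in writing the files.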

You might also try the -malign-double (or whatever the heck it
is) compile option.  I doubt this is relevant since you're using the -W
rsync option and hence not doing checksumming, but some of my unrelated
code is awful unless I specify the above.

Good luck.


On Thu, 22 Mar 2001, Dirk Markwardt wrote:

> Hello !
> 
> I want to do a backup of some Win98-Workstations to a Linux-Server. I
> compiled rsync for Windows using Cygwin 1.18 and the instructions in
> win95.txt. All worked fine, but it is terribly slow !!!
> 
> For testing purposes I took two machines, one with linux, the other
> with linux and win98. I want to sync the Windows-Partition of machine
> two with a directory of machine one. In this partition there are about
> 1.6 GB of data. The link is a 100 MBit Crossover-Cable.
> 
> Doing a
> 
> rsync --delete -arvW . one::rsynctest
> 
> with Win98 it takes 49 minutes. The same command with linux takes only
> 16 minutes. Why is that?  I certainly deleted the data on one before
> doing the second test.
> 
> Is there a way to speed up the Windows rsync ? Some Compiler-Options
> for Cygwin-gcc or even the Cygwin-dll. Or is there a way to compile
> rsync with Microsoft Visual C++ 6.0 ?
> 
> It would be nice if someone could give me some hints !
> 
> Thanks in advance !
> 
> Dirk
> 
> -- 
> Best regards,
> Dirk Markwardt mailto:[EMAIL PROTECTED]
> 
> 
> 
> 

--
Robert Scholten   Tel:   +61 3 8344 5457  Mob: 0412 834 196
School of Physics Fax:   +61 3 9347 4783
University of Melbourne   email: [EMAIL PROTECTED]
Victoria 3010  AUSTRALIA  http://www.ph.unimelb.edu.au/~scholten





Re: unlimited backup revisions?

2001-03-22 Thread yan seiner

I've been trying to come up with a scripting solution for this for some time, and
I'm convinced there isn't one.

You definitely want to handle the revisions in the same way as logrotate: keep a
certain depth, delete the oldest, and renumber all the older ones.

If you want to get real ambitious, you could even include an option for compressing
the backups to save space.

My biggest concern is a disk failure on the primary during sync (unlikely, but I
had my main raid fail during sync, and it wiped out both my mirrors.)  A managed
backup strategy is the only thing that saved my bacon.

Some tools for managing the backups (listing them, "what's the revision history on
this file" type of queries, doing a mass recovery, etc.) would be useful.

Just some random thoughts,

--Yan

"Sean J. Schluntz" wrote:

> >> That's what I figured.  Well, I need it for a project so I guess you all
> >> won't mind if I code it and submit a patch ;)
> >>
> >> How does --revisions=XXX sound.  --revisions=0 would be unlimited, any other
> >> number would be the limiter for the number of revisions.
> >
> >And when it reaches that number, do you want it to delete old
> >revisions, or stop making new revisions?
>
> You would delete the old one as you continue rolling down.
>
> >Perhaps something like --backup=numeric would be a better name.  In
> >the long term it might be better to handle this with scripting.
>
> I don't see a good scripting solution to this. The closest I could come up
> with was using the --backup-dir and then remerging the tree after the copy
> and that is a real kludge.  The scripting solution I see would be to clean
> up if you had the backup copies set to unlimited so you don't run out of
> disk space.
>
> -Sean





rsync-speed with win98

2001-03-22 Thread Dirk Markwardt

Hello !

I want to do a backup of some Win98-Workstations to a Linux-Server. I
compiled rsync for Windows using Cygwin 1.18 and the instructions in
win95.txt. All worked fine, but it is terribly slow !!!

For testing purposes I took two machines, one with linux, the other
with linux and win98. I want to sync the Windows-Partition of machine
two with a directory of machine one. In this partition there are about
1.6 GB of data. The link is a 100 MBit Crossover-Cable.

Doing a

rsync --delete -arvW . one::rsynctest

with Win98 it takes 49 minutes. The same command with linux takes only
16 minutes. Why is that?  I certainly deleted the data on one before
doing the second test.

Is there a way to speed up the Windows rsync ? Some Compiler-Options
for Cygwin-gcc or even the Cygwin-dll. Or is there a way to compile
rsync with Microsoft Visual C++ 6.0 ?

It would be nice if someone could give me some hints !

Thanks in advance !

Dirk

-- 
Best regards,
Dirk Markwardt mailto:[EMAIL PROTECTED]