Re: --owner --group without root access?

2002-01-02 Thread tim . conway

rsync makes exact copies of filesystems.  It's a mirroring tool, not a 
backup tool.  It stores the information as a filesystem, and if it's not 
allowed to save user and group IDs on the filesystem, it doesn't.  Perhaps 
you need an archiving system, maybe one doing incremental backups?
That said, if you need to record who owns what, how about using the 
--write-batch/--read-batch (rsync+) features?  rsync it once, getting just 
the files and their permissions.  Then do it again with --write-batch, 
which will create a file containing instructions on how to fix the 
ownerships.  To restore, you can just rsync the files back, then 
apply the ownership file with --read-batch.  Of course, the batch file 
will contain any other changes made between the sync and the batch run, 
but at a quiescent time those should be minimal, and you'd probably want 
the changes anyway.


Tim Conway
[EMAIL PROTECTED]
303.682.4917
Philips Semiconductor - Longmont TC
1880 Industrial Circle, Suite D
Longmont, CO 80501
Available via SameTime Connect within Philips, n9hmg on AIM
perl -e 'print pack(, 
19061,29556,8289,28271,29800,25970,8304,25970,27680,26721,25451,25970), 
.\n '
There are some who call me Tim?




Philip Mak [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
12/30/2001 02:24 AM

 
To: [EMAIL PROTECTED]
cc: (bcc: Tim Conway/LMT/SC/PHILIPS)
Subject:--owner --group without root access?
Classification: 



Is there a way to preserve the owner and group permissions without having
root access?

Well, this is not possible on the filesystem level of course, but what
about storing the owner/group information in a supplementary file that can
be read by rsync to later reconstruct this information?

I'm using rsync to perform a server-to-server backup of a machine's hard
drive. If the hard drive being backed up were to actually fail, I would
want to be able to restore all the files with their exact ownership
information.

However, I don't think having root access on the backup server should be
necessary to do this...









Re: reverse delete?

2002-01-02 Thread tim . conway

#!/bin/sh

for file in `rsh remote 'cd ~/Maildir;find . -type f -print'`
do
[ -f ~/Maildir/$file ] && rm ~/Maildir/$file
done
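The same loop can be exercised locally first, substituting a local directory listing for the rsh call (paths are throwaway examples):

```shell
# Recreate the Maildir example: 123.txt exists on both sides, so the
# loop should remove the local copy; 456.txt exists only locally and
# should survive.
mkdir -p /tmp/revdel/local /tmp/revdel/remote
touch /tmp/revdel/local/123.txt /tmp/revdel/local/456.txt
touch /tmp/revdel/remote/123.txt

cd /tmp/revdel
for file in `cd remote && find . -type f -print`
do
    [ -f local/$file ] && rm local/$file
done
ls local    # only 456.txt remains
```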

Tim Conway




Graham Guttocks [EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
01/01/2002 01:57 PM

 
To: [EMAIL PROTECTED]
cc: (bcc: Tim Conway/LMT/SC/PHILIPS)
Subject:reverse delete?
Classification: 



Greetings,

I'm looking for an option that deletes from the receiving side
any files contained on the sending side.  For example,

If local:~/Maildir/ contains:

123.txt
456.txt

And remote:~/Maildir/ contains:

123.txt
456.txt
789.txt

Running rsync local:~/Maildir/ remote:~/Maildir/ with the appropriate
options would leave remote:~/Maildir/ with:

789.txt

and local:~/Maildir/ would remain unchanged with:

123.txt
456.txt

Regards,
Graham










Re: No --delete-after?

2002-01-02 Thread Dave Dykstra

On Fri, Dec 21, 2001 at 04:05:01PM -0500, Mack, Daemian wrote:
 Is anyone successfully using the Cygwin rsync on Win2k (or NT4) as both
 daemon and client, with --delete-after working on the client?
 
 I can get --delete to work, but I'd prefer to delete files only on a
 successful transfer, to ensure that the end-user has a working collection of
 files, no matter what release.  For some reason, --delete-after does nothing
 for me, even as administrator on the Win2k box that's acting as a client.
 
 
 Daemian Mack

--delete is already not supposed to delete anything if there are any errors
on the sending machine.  Are you in particular concerned about errors on
the receiving machine?  I've never used --delete-after, but in looking at
the code it looks like it should move all the deletes to after all the files
have been received.  Can you itemize a series of simple steps, starting
from scratch, that demonstrates the problem you're seeing?

- Dave Dykstra




Re: --backup-dir confusion

2002-01-02 Thread Dave Dykstra

On Sun, Dec 30, 2001 at 04:15:37AM -0500, Philip Mak wrote:
 With the following rsync settings:
 
 cd /home/lina_backup
 rsync -R -v -z -rlptgo --delete \
   --password-file=password \
   --include-from=include --exclude=* \
   --backup --backup-dir=./`date -d yesterday +%Y-%m-%d` \
   rsync://root@lina/backup current
 
 I would expect the backup directory to be /home/lina_backup/2001-12-29.
 But it becomes /home/lina_backup/current/2001-12-29. Is this a bug? (Am I
 abusing the rsync command by trying to use relative paths and shouldn't do
 this?)

I'm pretty sure backup-dir is relative to the destination directory. 
If anybody confirms it for me, I'll change the man page.


 BTW, will my settings clobber a file if something goes wrong with the
 rsync? Or will the existence of the backup directory make certain that if
 the rsync doesn't work, it at least keeps the older copy of a file?

It should.  In general rsync always does all its work in a temporary file
and only does a rename when it's all done, so it never completely loses
a file.
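The pattern Dave describes (write to a temporary name, then rename into place) is the standard safe-update idiom; a generic sketch, not rsync's actual code:

```shell
# Build the new version under a temporary name, then mv it into
# place.  A rename within one filesystem is atomic, so readers see
# either the complete old file or the complete new one, never a
# partial write.
dest=/tmp/atomic-demo/data.txt
mkdir -p "$(dirname "$dest")"
echo "old contents" > "$dest"

tmp="$dest.tmp"
echo "new contents" > "$tmp"    # this step may take arbitrarily long
mv "$tmp" "$dest"               # atomic replacement
cat "$dest"                     # prints: new contents
```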

- Dave Dykstra




Re: hosts allow secure?

2002-01-02 Thread Dave Dykstra

On Sun, Dec 30, 2001 at 05:32:28AM -0500, Philip Mak wrote:
 How secure is hosts allow?

It's not.

 I have hosts allow = bkup in my rsyncd.conf. Then in /etc/hosts I have:
 
 64.29.16.235  bkup
 
 This makes only 64.29.16.235 able to connect to rsync.
 
 Could someone spoof their hostname somehow to trick rsync into letting
 them in, though? e.g. if their reverse DNS says that they're called bkup.

In general somebody could spoof the DNS, although not if you have it in
/etc/hosts like that (assuming /etc/nsswitch.conf is set to give priority
to files over dns).  If the bkup machine is on the same subnet in a secured
machine room, it's also pretty unlikely that somebody would be able to hijack
a live session.  However, if you're going over a long distance network it's
vulnerable.  There's no host verification or session integrity.  If you can,
use SSH.

This is really no different than tcp wrappers.
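For comparison, a sketch of the two setups ('bkup', the module name, and the paths are placeholders):

```shell
# rsyncd.conf fragment -- restriction by address/name only, with no
# cryptographic verification of the peer:
#
#   [backup]
#       path = /backup
#       hosts allow = bkup
#
# The ssh alternative needs no listening daemon; the ssh host key
# verifies the peer and the whole session is encrypted:
#
#   rsync -az -e ssh /backup/ bkup:/home/backup/
```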

- Dave Dykstra




Re: 2.5.1pre3 - Bugs in configure script / config.h.in breaks build.

2002-01-02 Thread Dave Dykstra

On Tue, Jan 01, 2002 at 06:45:59PM -0600, John Malmberg wrote:
 Compaq C 6.5
 OpenVMS Alpha 7.3
...
 A second issue, is the line:
 
 #undef socklen_t
 
 It is not in the standard format for the other lines in the configure 
 script.
 
 It would be helpful for it to be:
 
 #undef HAVE_SOCKLEN_T
 
 And then somewhere else, where the defintion is used, or in rsync.h
 
 #ifndef HAVE_SOCKLEN_T
 typedef size_t socklen_t;
 #endif


That's not enough because it needs to figure out what value to use to
define socklen_t; the current logic in aclocal.m4 tries
int, size_t, unsigned, long, and "unsigned long"
until it gets one to compile.


 I can do debugging or testing of the configure scripts.  So someone with 
 a UNIX platform will need to verify what the fixes to the configure 
 scripts need to be.
 
 
 OpenVMS does not execute configure scripts.  They are harder to port 
 than the applications, and tend to generate incorrect results for the 
 OpenVMS platform even after they are ported.
 
 Instead, an OpenVMS DCL procedure is used to read the CONFIG.H.IN file 
 and uses it to search the system libraries to see what routines and 
 header definitions are present.


Is there some other syntax that the procedure accepts to allow passing a
value through?



 This works very well when all of the #undef lines are in a standard 
 format.
 
 -John
 [EMAIL PROTECTED]
 Personal Opinion Only


- Dave Dykstra




Re: (patch) memory leak in loadparm.c

2002-01-02 Thread Colin Walters

On Wed, 2002-01-02 at 17:29, Dave Dykstra wrote:
 Isn't there some solution that doesn't have to explicitly list every
 variable name?  I think that's asking for future bugs; just because there's
 an instruction in a comment doesn't mean people will remember to do what
 it says when they add a new variable.

I don't think there is a way to do it without fairly major surgery in
that code (though I would be happy to be proved wrong), and if I was
going to spend any amount of time on it, the only possible course of
action I see would be to trash the whole thing and rewrite it
sanely...which I personally don't have time to do, but if someone else
does, please do that instead of using my patch.





File system usage

2002-01-02 Thread Duane Meyer



This is a simple question: how much filesystem overhead is there with
this system?  Is it only as large as the largest file transferred, or
could you potentially (even if configured correctly) end up with double
what you started out with on the sending or receiving end?

The reason I need to be sure is that I have a filesystem that's literally
several hundred thousand files.  Complete, it's around 25-30GB.  These
are sparse files, so there's lots of room for compression; tar'd and
compressed it's 3-4GB.

I just need to be sure there will be enough room if no single file is
larger than about 200K and I have about 5GB of space left on the
partition.

It's just taking way too long to tar and compress the complete directory,
transfer, then uncompress and untar; the last run took around 24 hours
front to back.  I'm hoping to cut that to around 4-5 hours. :) (Probably
optimistic, but who knows!)

Anybody know of limitations on file count or size I might run into here?

Any help is appreciated for this rsync newbie.

Regards,
Duane Meyer


Re: 2.5.1pre3 - Bugs in configure script / config.h.in breaks build.

2002-01-02 Thread John E. Malmberg

Dave Dykstra wrote:

  On Tue, Jan 01, 2002 at 06:45:59PM -0600, John Malmberg wrote:
 
 Compaq C 6.5
 OpenVMS Alpha 7.3
 
  ...
 
 A second issue, is the line:
 
 #undef socklen_t
 
 It is not in the standard format for the other lines in the configure
 script.
 
 It would be helpful for it to be:
 
 #undef HAVE_SOCKLEN_T
 
 And then somewhere else, where the defintion is used, or in rsync.h
 
 #ifndef HAVE_SOCKLEN_T
 typedef size_t socklen_t;
 #endif
 
 
 
  That's not enough because it needs to figure out what value to use to
  define socklen_t; the current logic in aclocal.m4 tries
  int, size_t, unsigned, long, and "unsigned long"
  until it gets one to compile.


Depending on the compile options selected or defaulted, and on whether
standard headers are selected, a C compiler may not produce the expected
results from one of the test programs in a configure script.

One of the main reasons that the configure scripts do not work well off
of a UNIX platform is that some UNIX library functions have multiple
implementations for mapping to native operating system features.

There are a number of cases where a programmer needs to select at
compile time which behavior is correct, or whether the compiler should
conform to a new standard or a changed library-routine behavior, or be
backwards compatible.

So even if I were to obtain a UNIX-compatible shell for OpenVMS like
GNV, the configure results would likely still be misleading.

For an autoconfigure to be truly cross platform, it must be table
driven, so that a family of similar Operating systems like the *NIX
could share common scripts, but non-*NIX operating systems can supply
scripts that are specific to them.


---

After this I ran into the ALLOCA mess. :-(

With Compaq C, the ALLOCA function is __ALLOCA(X),

and the prototype is:

void * __ALLOCA(size_t x);

But I do not have an alloca.h header file.

So I had to rework both SYSTEM.H and POPT.C to get things to compile, as
it was assuming the absence of alloca.h implied no alloca function, and
if the compiler was not GCC and not on AIX, that there was a specific
prototype for the alloca() function.

So it appears that POPT.C needs to use #ifdef HAVE_ALLOCA instead of
#ifdef HAVE_ALLOCA_H, which of course needs something to properly set it.

I do not know what the correct fix for SYSTEM.H is; I am not sure that a
prototype should even be needed for a built-in function.


 
 I can do debugging or testing of the configure scripts.  So someone with
 a UNIX platform will need to verify what the fixes to the configure
 scripts need to be.
 
 
 OpenVMS does not execute configure scripts.  They are harder to port
 than the applications, and tend to generate incorrect results for the
 OpenVMS platform even after they are ported.
 
 Instead, an OpenVMS DCL procedure is used to read the CONFIG.H.IN file
 and uses it to search the system libraries to see what routines and
 header definitions are present.
 
 
 
  Is there some other syntax that the procedure accepts to allow passing a
  value through?

The procedure basically looks for symbols in the libraries, sort of
like using grep to look things up in the header files instead of
running the test programs.


If it does not get a match from reading the config.h.in file, it then
scans the configure. file to see if there is a simple assignment in it like:

symbol = n

If so, it then does a #define symbol n

The DCL procedure has some special cases coded into it.  I can place an
additional one for socklen_t, now that I know about it.  But it is still
better to use HAVE_SOCKLEN_T to indicate whether a substitution is needed.

Basically I will put in a special case that if it sees
 #undef HAVE_SOCKLEN_T, and it does not find it in the standard
header files, it will insert a #define HAVE_SOCKLEN_T 1 and then the
needed typedef for socklen_t.

The procedure is not perfect, but it can handle building most of the
config.h file.  As a last step, the generated config.h file has a
#include config_vms.h to load in a hand-edited file that does the fine
tuning.


I am now getting an almost clean compile, but I had to fix a lot of
char * definitions that either should have been unsigned char * or
void * in order for the build to complete.

I am still getting several diagnostics where the compiler is concerned
about being asked to put negative constants into unsigned variables.


Before I can start testing, I have to deal with the issue of OpenVMS not
currently having a fork() function.  I think that I can move one of the
processes to run as an AST thread, which has much less overhead than
the current implementation.  I do not yet know how much code I will have
to change to get that to work.

-John

[EMAIL PROTECTED]
Personal Opinion Only








Re: .plan to avoid unhappy users

2002-01-02 Thread Martin Pool

On 22 Dec 2001, Han [EMAIL PROTECTED] wrote:

 I am on a developers list for mandrake: cooker@ and the rsync-servers
 broke, which resulted in a lot of very unhappy people because their rsync
 directories got emptied.

Sorry about that...

rsync should never delete local files just because the server is
unreachable.  Is that what happened?


 # these are the packages that are no longer on the server.
 
 rm package-2.4.8
 rm nother1-4.5.6
 
 Now all the rsync user has to do is have a look at that list and run it
 through sh if he agrees.

You can get that behaviour by using --dry-run first to see if the
proposed modifications are reasonable.  If you discover any bugs that
cause --dry-run not to be accurate then please report them.
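A quick local illustration of that workflow (throwaway paths): preview the deletions with --dry-run, and only run the real transfer after checking the output:

```shell
mkdir -p /tmp/dry/src /tmp/dry/dest
touch /tmp/dry/src/pkg-2.0 /tmp/dry/dest/pkg-2.0 /tmp/dry/dest/pkg-1.0

# Preview: -n/--dry-run reports the proposed 'deleting pkg-1.0'
# without touching anything on disk.
rsync -rvn --delete /tmp/dry/src/ /tmp/dry/dest/

# Only after reviewing that output, run it for real:
rsync -r --delete /tmp/dry/src/ /tmp/dry/dest/
```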

-- 
Martin 

The daffodils are coming. Are you?
  linux.conf.au, February 2002, Brisbane, Australia
--- http://www.linux.org.au/conf




Re: rsync *Still* Copying All Files?

2002-01-02 Thread Martin Pool

On 20 Dec 2001, Mack, Daemian [EMAIL PROTECTED] wrote:
  The question is, why does it work?  Are you indeed copying between two
  NTFS filesystems, with rsync running under Windows & Cygwin on both
  sides?  I would have thought that would result in matching timestamp
  granularity on both sides, so rsync would always end up comparing the
  same values.
 
 Now that you've jogged my memory, I remember coming across a Knowledge Base
 article around a year ago that discussed this filesystem timestamp
 granularity.  I can't recall the reason for it, or the set of circumstances
 under which it's an issue, but maybe this article touches on the issue long
 enough to be of relevance:
 
   http://support.microsoft.com/default.aspx?scid=kb;EN-US;Q127830
 
 My hunch is that this is happening because I'm dealing with NTFS5 on Win2K
 and NTFS4 on NT4.

I think there are some NT APIs that only return time in 2-second
resolution.  Possibly Cygwin is using them?  (Why would it, though?)
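Where one side really does keep coarser timestamps, rsync's --modify-window option treats mtimes that differ by no more than the given number of seconds as equal; a local sketch (assumes GNU touch -d, throwaway paths):

```shell
# Two equal-sized files whose mtimes differ by exactly one second.
mkdir -p /tmp/mw/src /tmp/mw/dest
echo same > /tmp/mw/src/f
echo same > /tmp/mw/dest/f
touch -d '2002-01-02 12:00:00' /tmp/mw/src/f
touch -d '2002-01-02 12:00:01' /tmp/mw/dest/f

# With plain -t the one-second difference would make rsync re-send f
# (and rewrite dest's mtime); with --modify-window=1 the timestamps
# count as equal and f is left untouched.
rsync -rt --modify-window=1 /tmp/mw/src/ /tmp/mw/dest/
```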

-- 
Martin 

The daffodils are coming. Are you?
  linux.conf.au, February 2002, Brisbane, Australia
--- http://www.linux.org.au/conf




Re: .plan to avoid unhappy users

2002-01-02 Thread Mark Santcroos

On Thu, Jan 03, 2002 at 06:40:22PM +1100, Martin Pool wrote:
 You can get that behaviour by using --dry-run first to see if the
 proposed modifications are reasonable.  If you discover any bugs that
 cause --dry-run not to be accurate then please report them.

There are. I hope to come up with a patch Real Soon Now.

Mark


-- 
Mark Santcroos  RIPE Network Coordination Centre
http://www.ripe.net/home/mark/  New Projects Group/TTM




rsync+ tidyup (was Re: move rsync development tree to BitKeeper?)

2002-01-02 Thread Martin Pool

On  6 Dec 2001, Jos Backus [EMAIL PROTECTED] wrote:
 I will also pound a little bit more on the rsync+ bits. Two more small nits:
 
 rsync.1: -f, --read-batch=FILE   read batch file
 rsync.yo: -f, --read-batch=FILE   read batch file
 
 Here, FILE should be EXT, as it specifies the extension for the
 batch files.



 
 Another thought: maybe we should reserve -f and -F for something else and just
 stick with the long options? What do you think?

That sounds like a good idea, as long as not too many people have
started using them already.


-- 
Martin 

The daffodils are coming. Are you?
  linux.conf.au, February 2002, Brisbane, Australia
--- http://www.linux.org.au/conf