[Bug 8941] New: Solved problem with hard links and schg flag

2012-05-14 Thread samba-bugs
https://bugzilla.samba.org/show_bug.cgi?id=8941

           Summary: Solved problem with hard links and schg flag
           Product: rsync
           Version: 3.0.9
          Platform: All
        OS/Version: FreeBSD
            Status: NEW
          Severity: major
          Priority: P5
         Component: core
        AssignedTo: way...@samba.org
        ReportedBy: fr...@electromail.org
         QAContact: rsync...@samba.org


Created attachment 7559
  --> https://bugzilla.samba.org/attachment.cgi?id=7559
Patch for syscall.c

Hi!

Using rsync under FreeBSD with hard links and files that have the schg
flag set results in EPERM (Operation not permitted). This behavior can
be observed when rsyncing /usr/bin/.

The patch fileflags.diff tries to deal with this situation, but it
changes the flags of the parent directory only; it doesn't change the
flags of the files themselves.

do_link() in syscall.c has to be fixed. patch-syscall.c.txt is a patch
that has to be applied after fileflags.diff.
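
For reference, the underlying FreeBSD behavior is easy to reproduce by
hand (a hypothetical session; file names are made up):

touch demo
chflags schg demo
ln demo demo.lnk       # fails: ln: demo.lnk: Operation not permitted
chflags noschg demo    # clearing schg needs root and a permissive securelevel
ln demo demo.lnk       # now succeeds
chflags schg demo      # restore the flag

Clearing flags on the parent directory alone does not help here, because
link(2) checks the immutable flag on the source file itself.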

Best regards
Franz



Solved problem with hard links and schg flag under FreeBSD

2012-05-07 Thread Franz Schwartau

Hi!

Using rsync under FreeBSD with hard links and files that have the schg
flag set results in EPERM (Operation not permitted). This behavior can
be observed when rsyncing /usr/bin/.


The patch fileflags.diff tries to deal with this situation, but it
changes the flags of the parent directory only; it doesn't change the
flags of the files themselves.


do_link() in syscall.c has to be fixed. The attached
syscall-do_link.c.txt contains the complete function do_link().
patch-syscall.c.txt is a patch that has to be applied after
fileflags.diff.


Please have a look at the changes.

What is the official way of asking for inclusion in the rsync 
distribution? Reporting a bug via Bugzilla?


Best regards
Franz

#ifdef HAVE_LINK
int do_link(const char *fname1, const char *fname2)
{
	if (dry_run) return 0;
	RETURN_ERROR_IF_RO_OR_LO;
	if (link(fname1, fname2) == 0)
		return 0;

#ifdef SUPPORT_FORCE_CHANGE
	if (force_change && (errno == EPERM || errno == EACCES)) {
		char parent[MAXPATHLEN];
		int parent_flags;
		int saved_errno = errno;
		/* On FreeBSD, link(2) fails with EPERM when fname1 has an
		 * immutable flag (e.g. schg) set, so first try clearing the
		 * flags on the source file itself. */
		int file_flags = make_mutable(fname1, NULL, NO_FFLAGS, force_change);
		if (file_flags) {
			int ret = link(fname1, fname2);
			undo_make_mutable(fname1, file_flags);
			if (ret == 0)
				return 0;
		}
		/* Fall back to making the destination's parent directory
		 * mutable, as fileflags.diff already does. */
		parent_flags = make_parentdir_mutable(fname2, force_change, parent, sizeof parent);
		if (parent_flags) {
			int ret = link(fname1, fname2);
			undo_make_mutable(parent, parent_flags);
			if (ret == 0)
				return 0;
		}
		errno = saved_errno;
	}
#endif

	return -1;
}
#endif
--- syscall.c.orig  2012-05-07 16:30:28.0 +0200
+++ syscall.c   2012-05-07 16:30:44.0 +0200
@@ -114,8 +114,16 @@
 #ifdef SUPPORT_FORCE_CHANGE
 	if (force_change && (errno == EPERM || errno == EACCES)) {
 		char parent[MAXPATHLEN];
+		int parent_flags;
 		int saved_errno = errno;
-		int parent_flags = make_parentdir_mutable(fname2, force_change, parent, sizeof parent);
+		int file_flags = make_mutable(fname1, NULL, NO_FFLAGS, force_change);
+		if (file_flags) {
+			int ret = link(fname1, fname2);
+			undo_make_mutable(fname1, file_flags);
+			if (ret == 0)
+				return 0;
+		}
+		parent_flags = make_parentdir_mutable(fname2, force_change, parent, sizeof parent);
 		if (parent_flags) {
 			int ret = link(fname1, fname2);
 			undo_make_mutable(parent, parent_flags);
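
For anyone who wants to try this, the application order would presumably
be something like the following (paths and patch levels are assumptions,
not stated in the report):

cd rsync-3.0.9
patch -p1 <patches/fileflags.diff    # from the matching rsync-patches tarball
patch <patch-syscall.c.txt           # this patch, on top of fileflags.diff
./configure && make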

Re: Problem with hard links

2007-10-05 Thread limule pika
On 9/28/07, Matt McCutchen [EMAIL PROTECTED] wrote:

 Fabian's suggestion to use the CVS rsync with incremental recursion is
 good; that will be an improvement.  However, rsync still has to
 remember all files in S1 that had multiple hard links in case they
 show up again in S2.  If remembering the contents of even one of the
 directories makes rsync run out of memory, you'll have to do something
 different.


Thanks for your reply. I think there are too many files in S1 ...



 Not in the general case, but if the hard links are between
 corresponding files (e.g., S1/path/to/X and S2/path/to/X; often the
 case in incremental backups), you can simply use --link-dest on the
 second run, like this:

 rsync options P/S1/ remote:P/S1/
 rsync options --link-dest=../S1/ P/S2/ remote:P/S2/



I'm using rsync to generate a copy of a list of backups generated by
BackupPC, and unfortunately the structures of S1 and S2 are not the same
at all ...

Re: Problem with hard links

2007-10-05 Thread Matt McCutchen
On 10/5/07, limule pika [EMAIL PROTECTED] wrote:
 On 9/28/07, Matt McCutchen [EMAIL PROTECTED] wrote:
  [...]  If remembering the contents of even one of the
  directories makes rsync run out of memory, you'll have to do something
  different.

 Thanks for your reply. I think there are too many files in S1 ...

 I'm using rsync to generate a copy of a list of backups generated by
 BackupPC, and unfortunately the structures of S1 and S2 are not the same
 at all ...

Then, unfortunately, there is no good way to preserve the hard links
without adding more memory to the system.

Matt


Re: Problem with hard links

2007-09-28 Thread Matt McCutchen
Fabian's suggestion to use the CVS rsync with incremental recursion is
good; that will be an improvement.  However, rsync still has to
remember all files in S1 that had multiple hard links in case they
show up again in S2.  If remembering the contents of even one of the
directories makes rsync run out of memory, you'll have to do something
different.

On 9/28/07, limule pika [EMAIL PROTECTED] wrote:
 Is there a solution to keep the hard links between S2 and S1 when running
 two separate commands?

Not in the general case, but if the hard links are between
corresponding files (e.g., S1/path/to/X and S2/path/to/X; often the
case in incremental backups), you can simply use --link-dest on the
second run, like this:

rsync options P/S1/ remote:P/S1/
rsync options --link-dest=../S1/ P/S2/ remote:P/S2/
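
For instance, with the flags from the original command (illustrative;
adjust to your own option set):

rsync --recursive --hard-links --links --perms --times -e ssh P/S1/ remote:P/S1/
rsync --recursive --hard-links --links --perms --times -e ssh \
    --link-dest=../S1/ P/S2/ remote:P/S2/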

(Note the ../S1/, because basis directory paths are interpreted
relative to the destination directory.)  If you do this and use the
incremental recursion mode, rsync will remember only up to a few
thousand files at a time and won't run out of memory.  You can even do
the copy in a single pass if you like: create a directory P/basis
containing a symlink S2 -> ../S1, and then run something like:

rsync options --link-dest=basis/ P/ remote:P/
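
Spelled out, that single-pass setup might look like this (a sketch using
the same illustrative flags as above):

mkdir -p P/basis
ln -s ../S1 P/basis/S2    # basis/S2 resolves to the copied S1 on the destination
rsync --recursive --hard-links --links --perms --times -e ssh \
    --link-dest=basis/ P/ remote:P/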

Matt


Re: Problem with hard links

2007-09-28 Thread Fabian Cenedese
At 09:50 28.09.2007 +0200, limule pika wrote:
Hello,

I have a problem with rsync and hard links:

I have one folder, P, with two subfolders, S1 and S2.

S2 contains a lot of hard links to files stored in folder S1.

   P : - S1
       - S2 (S2 files are hard links to S1 files)

I would like to rsync the folder P to another computer, but each
subfolder, S1 (110 GB) and S2 (10 GB plus hard links to 100 GB of S1),
contains thousands upon thousands of files, and when I try to rsync the
folder P I get an out-of-memory error.

The command used is: rsync --recursive --hard-links -e ssh --stats --delete
--links --perms --times

So I tried to rsync the subfolders S1 and S2 with two separate rsync
commands (same arguments as above), but then the hard links between S2
and S1 are not preserved.

Is there a solution to keep the hard links between S2 and S1 when running
two separate commands?

I don't know the answer to this, but if possible you can use an rsync
built from CVS. The current development version uses an incremental file
list (in remote mode both rsync binaries have to support this). This
should save you from the memory problems, and you can do it in one step.

bye  Fabi




Problem with hard links

2007-09-28 Thread limule pika
Hello,

I have a problem with rsync and hard links:

I have one folder, P, with two subfolders, S1 and S2.

S2 contains a lot of hard links to files stored in folder S1.

   P : - S1
       - S2 (S2 files are hard links to S1 files)

I would like to rsync the folder P to another computer, but each
subfolder, S1 (110 GB) and S2 (10 GB plus hard links to 100 GB of S1),
contains thousands upon thousands of files, and when I try to rsync the
folder P I get an out-of-memory error.

The command used is: rsync --recursive --hard-links -e ssh --stats --delete
--links --perms --times

So I tried to rsync the subfolders S1 and S2 with two separate rsync
commands (same arguments as above), but then the hard links between S2
and S1 are not preserved.

Is there a solution to keep the hard links between S2 and S1 when running
two separate commands?

Thank you,
Limulezzz