On Aug 18, 2008, at 10:39 PM, Josh Nisly wrote:

Andrew Ferguson wrote:
On Aug 2, 2008, at 6:55 AM, kartweel wrote:
I'm backing up from Windows to Linux over ssh using the new 1.2.0 on both client and server. Despite my efforts and fiddling with clocks, it stores a diff for every file even when there are no changes. It isn't a big deal, but I have some large files which take several minutes to generate diffs when there is no need. I have tried syncing clocks. I've tried the --no-acls option with no difference.

Any ideas or any ways to try and further narrow down what the problem is?

The command I am using from windows is:

rdiff-backup -v8 --remote-schema "bin/ssh -C -l backup -i sshkey -o
\"StrictHostKeyChecking no\" %s rdiff-backup --server" --no-acls
--print-statistics C:\temp\test 192.168.2.2::test8

Thanks everyone.


Thanks for the bug report. If you add the "--no-hard-links" option to rdiff-backup, then the problem goes away.

Josh, does Windows even support hardlinks properly? The problem is that rdiff-backup uses inode numbers to keep track of hardlinks, and since the inode numbers are all zero on Windows, rdiff-backup believes the file has changed. (IIRC) The relevant function is Hardlink.rorp_eq(src_rorp, dest_rorp), at Hardlink.py:86.
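For anyone following along: hardlink detection generally works by grouping paths on the (device, inode) pair reported by os.stat(). Here's a minimal sketch (not rdiff-backup's actual code; group_hardlinks is a made-up helper) of why all-zero inodes wreck that bookkeeping:

```python
import os
from collections import defaultdict

def group_hardlinks(stats):
    """Group paths that share a (device, inode) pair -- the standard
    way to detect files that are hardlinks of one another."""
    groups = defaultdict(list)
    for path, (dev, ino) in stats.items():
        groups[(dev, ino)].append(path)
    return dict(groups)

# On a POSIX filesystem, each file has a distinct inode unless it is
# genuinely hardlinked:
posix = {"a.txt": (1, 101), "b.txt": (1, 102), "b-link.txt": (1, 102)}
print(group_hardlinks(posix))
# {(1, 101): ['a.txt'], (1, 102): ['b.txt', 'b-link.txt']}

# On Windows (with the Python of that era), os.stat() reported
# st_ino == 0 for every file, so unrelated files collapse into one
# bogus "hardlink group", and the source-vs-mirror comparison can
# never agree -- rdiff-backup then assumes the file changed:
windows = {"a.txt": (0, 0), "b.txt": (0, 0)}
print(group_hardlinks(windows))
# {(0, 0): ['a.txt', 'b.txt']}
```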

I'll be away next week, so no rush on the patch.

Sorry for not getting back sooner. Since Windows has no support for hardlinks, I think we should check against os.name in fs_abilities.py and completely ignore hardlinks (equivalent to --no-hard-links) if os.name == "nt". I can submit a patch, but it might be a while, since I'm pretty swamped with other things right now.
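The proposed check would amount to something like this (a rough sketch with a hypothetical helper name; the real change belongs in fs_abilities.py):

```python
import os

def hardlinks_usable():
    # Hypothetical helper: treat Windows as having no usable hardlink
    # metadata, which has the same effect as passing --no-hard-links.
    # os.name is "nt" on Windows and "posix" on Linux/Unix.
    return os.name != "nt"
```

With that in place, the backup code would skip the inode-based comparison entirely whenever the source or target filesystem is on Windows.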



Fixed in CVS. The --no-hard-links behavior is now enabled by default when the backup source or restore target is Windows.


Andrew



_______________________________________________
rdiff-backup-users mailing list at rdiff-backup-users@nongnu.org
http://lists.nongnu.org/mailman/listinfo/rdiff-backup-users
Wiki URL: http://rdiff-backup.solutionsfirst.com.au/index.php/RdiffBackupWiki
