Harald Amtmann wrote at about 19:29:07 +0100 on Monday, December 7, 2009:

> So, for anyone who cares (doesn't seem to be anyone on this list who
> noticed), I found this post from 2006 stating and analyzing my exact
> problem:
You are assuming something that is not true...

>
> http://www.topology.org/linux/backuppc.html
> On this site, search for "Design flaw: Avoidable re-transmission of
> massive amounts of data."
>
> For future reference and archiving, I quote here in full:
>
> "2006-6-7: During the last week while using BackupPC in earnest, I
> have noticed a very serious design flaw which it totally avoidable by
> making a small change to the software. First I will describe the flaw
> with an example. .... details snipped....
>
> The design flaw here is crystal clear. Consider a single file
> home1/xyz.txt. The authors has designed the BackupPC system so that
> the file home1/xyz.txt is sent in full from client1 to server1 unless
> .... details snipped....
>
> The cure for this design flaw is very easy indeed, and it would save
> me several days of saturated LAN bandwidth when I make back-ups. It's
> very sad that the authors did not design the software correctly. Here
> is how the software design flaw can be fixed.

This is an open source project -- rather than repetitively talking
about "serious design flaws" in a very workable piece of software (to
which I believe you have contributed nothing), and instead of talking
about how "sad" it is that the authors didn't correct it, why don't you
stop complaining and code a better version? I'm sure that if you
produce a demonstrably better version and test it under a range of
use-cases to validate its robustness, people would be more than happy
to use your fix for this "serious" design flaw. And you win a bigger
bonus if you do this all using tar or rsync, without requiring any
client software or any other remotely executed commands...

> The above design concept would make BackupPC much more efficient even
> under normal circumstances where the variable $Conf{RsyncShareName}
> is unchanging.
> At present, rsyncd will only refrain from sending a file if it is
> present in the same path in the same module in a previous full
> back-up. If server1 already has the same identical file in any other
> location, the file is sent by rsyncd and then discarded after it
> arrives.

It sounds like you know what you want to do, so start coding and stop
complaining...

> If the above serious design flaw is not fixed, it will not do much
> harm to people whose files are rarely changing and rarely moving. But
> if, for example, you move a directory tree from one place to another,
> BackupPC will re-send the whole lot across the LAN, and then it will
> discard the files when they arrive on the BackupPC server. This will
> keep on happening until after you have made a full back-up of the
> files in the new location."

No one is stopping you from fixing this "serious design flaw", which
obviously is not keeping the bulk of us users up at night worrying. And
for the record, I don't necessarily disagree with you that there are
things that can be improved, but your attitude is going to get you less
than nowhere. Also, the coders are hardly stupid, and there are good
reasons for the various tradeoffs they have made that you would be wise
to try to understand before disparaging them and their software.

------------------------------------------------------------------------------
_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
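P.S. For anyone following along, the behavior being argued over is easy
to model. This is a toy sketch, not BackupPC's actual (Perl) code, and
all names in it are illustrative: it contrasts the current behavior
(full file sent, then deduplicated against the pool on arrival) with
the hash-first exchange the quoted site is asking for. A real hash-first
protocol would also have to guard against hash collisions by comparing
file contents, which this sketch omits.

```python
import hashlib


class Pool:
    """Toy content-addressed pool: each unique file body is stored once."""

    def __init__(self):
        self.by_hash = {}        # digest -> file body
        self.bytes_received = 0  # bytes actually sent over the "wire"

    def digest(self, data: bytes) -> str:
        return hashlib.md5(data).hexdigest()

    def receive_full(self, data: bytes) -> str:
        # Behavior under discussion: the client sends the whole file;
        # the server hashes it on arrival and discards the copy if the
        # pool already holds identical content.
        self.bytes_received += len(data)
        h = self.digest(data)
        self.by_hash.setdefault(h, data)
        return h

    def receive_hash_first(self, data: bytes) -> str:
        # Proposed alternative: exchange the hash first and transfer
        # the body only when the pool does not already contain it.
        h = self.digest(data)
        if h not in self.by_hash:
            self.bytes_received += len(data)
            self.by_hash[h] = data
        return h


body = b"x" * 1000

pool = Pool()
pool.receive_full(body)   # first copy: 1000 bytes on the wire
pool.receive_full(body)   # "moved" file: another 1000 bytes, then discarded
wasted = pool.bytes_received   # 2000 bytes transferred, 1000 stored

pool2 = Pool()
pool2.receive_hash_first(body)
pool2.receive_hash_first(body)  # hash matches; body is not re-sent
saved = pool2.bytes_received    # 1000 bytes transferred, 1000 stored
```

In both cases the pool ends up holding exactly one copy of the data;
the only difference is how many bytes crossed the LAN to get there.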