I have a client where about 100 GB of data has been moved from one directory to another -- otherwise it's exactly the same.
As I understand it, since the data has been moved, BackupPC 3 will transfer all of that data again and only discard it once it realizes it is already in the pool; it does not skip the transfer of a file even though its checksum matches an existing file in the pool. I am using the rsync transfer method.

Is there a workaround to prevent all 100 GB from being transferred again?

Regards,
Raman
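(One workaround sometimes suggested for this situation is to mirror the client-side move inside the most recent backup tree on the server before the next run, so rsync finds the files at matching paths. The sketch below only illustrates the renames on a throwaway mock tree -- the TopDir, host name, share name, and backup number are all made up; substitute your real ones. BackupPC 3 stores each backup under $TopDir/pc/HOST/N/ with every path name "mangled" by an f prefix, so a directory old/ in the client share /data appears as fdata/fold on the server.)

```shell
#!/bin/sh
# Sketch only: all names below are assumptions, not real paths.
TOPDIR=$(mktemp -d)                            # stand-in for /var/lib/backuppc
mkdir -p "$TOPDIR/pc/myclient/123/fdata/fold"  # mock last filled backup

# Mirror the client-side move (old/ -> new/) inside that backup tree so
# the next rsync run sees the files at their new, matching paths:
cd "$TOPDIR/pc/myclient/123/fdata"
mv fold fnew

ls    # prints "fnew"
```

With the real tree, the same mv on the server-side copy should let the next rsync backup match the moved files by path instead of re-sending them.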
