I've been running rdiff-backup for two years, and now desperately need to restructure things (this is with version 1.1.15 on Ubuntu). I've tried SplitRdiffBackup, but it was taking far too long and really wasn't doing what I wanted. I then wrote a script that would restore each increment in turn for the necessary paths, then build a new archive using --current-time, but again it was taking far too long. I've looked at archfs, but computationally speaking it would be doing the same thing as my script.
It looks like I'm going to have to roll my own solution -- one that applies the rdiffs in a more intelligent fashion. At least for this first phase I'm working with files from a single directory, which keeps it simple.

My plan is to have a seeding script that restores back to the oldest increment by applying the rdiffs directly using patch, saving an intermediate version of the files every ten increments. A second script would then restore to each increment (adding it to the new archive) by applying rdiffs to the closest intermediate version. This method should run about 65 times faster than the brute-force approach, while eating about 200GB for all the intermediate copies. If I had a couple of TB of spare disk space it could be done a lot faster and more simply, but I don't.

Has anyone done anything like this before? Are there any problems or gotchas with applying the rdiffs directly, rather than restoring through rdiff-backup? Are there any alternatives that I have missed? I tried searching the list archives, but it was hard to find good search terms.

Thanks,
Alan
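P.S. In case it helps the discussion, here is roughly what I have in mind for the seeding script -- an untested sketch that shells out to librsync's "rdiff patch", walking one file's increments from newest to oldest and keeping a checkpoint copy every ten steps. It assumes every increment is a plain gzipped rdiff delta (.diff.gz); .snapshot.gz, .missing, and .dir increments would need special-casing, which I've left out.

#!/usr/bin/env python
# Sketch: walk a single file's increments newest-first, applying each
# reverse delta with `rdiff patch`, checkpointing every N steps.
# Assumes all increments are plain .diff.gz (gzipped rdiff deltas).

import glob
import gzip
import os
import shutil
import subprocess
import tempfile

CHECKPOINT_EVERY = 10

def restore_chain(mirror_file, increments_glob, checkpoint_dir):
    # mirror_file: the current version of the file in the mirror.
    # increments_glob: e.g. 'rdiff-backup-data/increments/myfile.*.diff.gz'
    # Increment names embed ISO timestamps, so a reverse lexical sort
    # walks them newest-first (assuming consistent timezone offsets).
    deltas = sorted(glob.glob(increments_glob), reverse=True)

    current = os.path.join(checkpoint_dir, 'work')
    shutil.copyfile(mirror_file, current)

    for i, delta_gz in enumerate(deltas, start=1):
        # rdiff wants an uncompressed delta, so gunzip to a temp file.
        with gzip.open(delta_gz, 'rb') as f:
            delta = f.read()
        fd, delta_path = tempfile.mkstemp()
        os.write(fd, delta)
        os.close(fd)

        # Reverse delta: older_version = patch(newer_version, delta).
        older = current + '.older'
        subprocess.check_call(['rdiff', 'patch', current, delta_path, older])
        os.remove(delta_path)
        os.rename(older, current)

        # Keep an intermediate copy so the second script can start
        # from the nearest checkpoint instead of the mirror.
        if i % CHECKPOINT_EVERY == 0:
            shutil.copyfile(current,
                            os.path.join(checkpoint_dir, 'checkpoint.%d' % i))

The second script would then do the same patch loop forward from whichever checkpoint is closest to the target increment, feeding each restored version into the new archive.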
