Hi,

We currently use a very basic Perl script (which I inherited) to run rsync with 40 "incrementals" from a main server to a backup server. The backup server has ~1.5 times the space that is in use on the main server (not enough for two full copies). The current live data is ~4.6TB, and although both servers are on gigabit, a full run of rsync over ssh takes days. The sheer number of files means that just the "receiving file list" portion of an rsync run takes ~30 minutes...
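
For context, each run essentially boils down to one rsync call over ssh, something along these lines (I'm reconstructing the flags from memory rather than quoting the Perl script, and the source path here is just a placeholder):

    # pull the live data over ssh, hard-linking unchanged files against
    # the most recent increment so only changed files are transferred
    rsync -aH --numeric-ids --delete -e ssh \
          --link-dest=/RAID/backups/current \
          main-server:/data/ /RAID/backups/$(date +%Y%m%d)/

    # (the script then rotates "current" and prunes back to 40 increments)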
I've had a look at Dirvish, and in combination with the Dirvish-Status.sh script it looks like a big step up from what we currently have, with significantly better reporting on how much data was transferred in each run, etc. However, doing a "dirvish --vault main-server --init" would mean dumping all of the existing backups (I _really_ don't want to be without a copy) and then waiting ~3 days for the first backup to finish before I have a full copy again.

I tried to "cheat": I initially set up default.conf to copy only one small subdirectory on the main server; after that had finished, I changed default.conf back to the full file set, went into <date>/tree and did an "rm -rf *", then did a "cp -alR /RAID/backups/current/* /RAID/Snapshots/main-server/<date>/tree/", which created a pure "hard link" backup set (the exact sequence is in the P.S. below). But when I then ran "dirvish --vault main-server", after the initial "receiving file list" it started transferring files rather than creating another hard-linked <date>/tree set.

I'd welcome any clues as to where I might have gone wrong, or a better/simpler/easier way of doing this.

Cheers
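
P.S. For reference, this is roughly the sequence I ran when trying to seed the vault (reconstructed from my shell history, so treat it as approximate):

    # 1. default.conf initially points at one small subdirectory only
    dirvish --vault main-server --init

    # 2. widen default.conf back to the full file set, then replace the
    #    small initial tree with hard links to the existing backups
    cd /RAID/Snapshots/main-server/<date>/tree/
    rm -rf *
    cp -alR /RAID/backups/current/* /RAID/Snapshots/main-server/<date>/tree/

    # 3. run a normal image, expecting it to hard-link against the
    #    seeded tree -- instead it started transferring files
    dirvish --vault main-server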
