Peter Walter wrote:
>> But because of the nature of the problem, rsync and cp can't do a
>> small piece at a time - and their views of the inodes in question are
>> only valid as long as none change before the run completes. A tool
>> that relinks individual files can plug away at it for as long as it
>> takes - although I suppose it has to be aware of the nightly run. You
>> probably have the whole blackout window with not much else to do
>> anyway.
>
> Les,
>
> Perhaps I should rephrase my issue in the context of how backuppc
> already works. My understanding is that, if it were not for the
> hardlinks, rsync transfers to another server would be more feasible;
> that processing the hardlinks requires significant CPU and memory
> resources, and that access times are very slow compared to processing
> ordinary files. Is my understanding correct? If so, then what I would
> think of doing is (a) shutting down backuppc, (b) creating a "dump"
> file containing the hardlink metadata, (c) backing up the pooled files
> and the dump file using rsync, and (d) restarting backuppc. I really
> don't need a live, working copy of the backuppc file system - just a
> way to recreate it from a backup if necessary, using an "undump"
> program that recreates the hardlinks from the dump file. Is this
> approach feasible?
First, consider the simple way that does work: unmount the partition
holding the archive and do an image copy - dd the partition (or, if
there is only one partition on the drive, the whole raw disk) to
something with enough space to hold it. If the disk is mostly empty, it
might be worth figuring out what clonezilla uses to copy images while
saving only the used space - probably partimage. This will run about as
fast as your disks (and network, if you use one) will go, since it is a
single pass across the drive. The downside is that the partition has to
be unmounted, or at least unchanging, during the copy, and you have to
restore to a same-size partition to use it - although resizing
afterwards is possible if the new location has more space. There's a
sketch of this below.

If you really have to go the file-by-file route, rsync or cp have to do
the entire thing at once, because they must have access to the entire
filename list and the corresponding inode numbers to identify the
matches that are linked. If you don't have the RAM to hold that (and
you haven't mentioned the scale of your setup yet - it might help to
know whether you have an enormous archive or are just short on
memory...), you might try the BackupPC_tarPCCopy approach. Basically,
you copy the pool without any links, then generate something resembling
that dumpfile of hardlink metadata you wanted, in a form that GNU tar
will understand - rough commands below. Some info here:
http://backuppc.wiki.sourceforge.net/move_backup_data

But if your archive is huge, expect that to take a day or a few to
complete when you try to restore it - the disk head will be bouncing
all over the place.
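For the image-copy route, a minimal sketch - the device name /dev/sdb1,
the target host, and the paths here are all made-up examples, and the
init script location varies by distro:

    # stop backuppc and unmount so the image is consistent
    /etc/init.d/backuppc stop
    umount /dev/sdb1

    # copy the partition to an image file on a spare disk...
    dd if=/dev/sdb1 of=/mnt/spare/backuppc.img bs=64k

    # ...or stream it straight to another machine over ssh
    dd if=/dev/sdb1 bs=64k | ssh otherhost 'dd of=/data/backuppc.img bs=64k'

Restoring is the same dd with if= and of= swapped, onto a partition at
least as large as the original.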
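If you do have the memory for it, the file-by-file copy is a single
rsync run - but note that -H, which preserves the hardlinks, is exactly
what forces rsync to hold the whole filename/inode map at once (the
data directory below is a guess; yours may differ):

    # -a keeps permissions/ownership/times, -H rebuilds the hardlinks
    rsync -aH /var/lib/backuppc/ otherhost:/var/lib/backuppc/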
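And for the BackupPC_tarPCCopy route, a rough sketch of the procedure
that wiki page describes - treat the paths as assumptions (they vary by
distro), and it assumes the data directory sits at the same place on
both machines so the link targets in the tar stream resolve:

    # 1. copy the pool itself first - plain files, so no hardlink
    #    bookkeeping is needed yet
    rsync -a /var/lib/backuppc/cpool/ otherhost:/var/lib/backuppc/cpool/

    # 2. BackupPC_tarPCCopy writes a tar stream in which each pooled
    #    file is a hardlink entry pointing into the pool by absolute
    #    path, so extract on the far side with -P
    /usr/share/backuppc/bin/BackupPC_tarPCCopy /var/lib/backuppc/pc \
        | ssh otherhost 'cd /var/lib/backuppc && mkdir -p pc && cd pc && tar xPf -'

That second step is where the day-or-more estimate comes from on a big
archive: making millions of links means metadata updates scattered all
over the disk, which is what keeps the head bouncing.

--
Les Mikesell
lesmikes...@gmail.com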