Les Mikesell <[EMAIL PROTECTED]> wrote on 01/26/2007 12:00:18 PM:

> Timothy J. Massey wrote:
> > We now want to add a new host, which happens to be running the exact
> > same operating system. They're not mirror images of each other, but
> > they are naturally going to share a large number of common files.
> > Let's assume that the new server contains 1GB worth of files, and
> > that 90% of the files are identical to files on the existing host.
>
> I think there is a quick-fix here by doing a 'cp -a' of an existing
> similar host directory to the new one before the first backup run.
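Les's quick fix might look something like the sketch below. This assumes the default BackupPC layout of $TopDir/pc/<hostname>; "oldhost" and "newhost" are hypothetical names, and a scratch directory stands in for the real $TopDir so the commands are safe to try anywhere:

```shell
# Sketch of priming a new host from an existing, similar one, assuming
# the default BackupPC layout of $TopDir/pc/<hostname>.  "oldhost" and
# "newhost" are made-up names; a scratch directory stands in for the
# real $TopDir so this is harmless to run.
TOPDIR=$(mktemp -d)
mkdir -p "$TOPDIR/pc/oldhost/0"              # pretend backup number 0 exists
echo "config" > "$TOPDIR/pc/oldhost/0/fetc"  # a stand-in backed-up file

# Copy the existing host's directory under the new host's name before
# the first backup run; the first rsync backup of the new host then
# only needs to transfer the files that differ.
cp -a "$TOPDIR/pc/oldhost" "$TOPDIR/pc/newhost"

ls "$TOPDIR/pc/newhost/0"                    # the primed starting point
```

One caveat: cp -a preserves hard links only among the files it copies, so the primed copy will not share storage with the pool — the saving is in network transfer on the first backup, not in disk space. (cp -al, which hard-links instead of copying, would keep the pool sharing, at the cost of blurring which host owns the files.)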
That is an interesting solution, and it would work for rsync (assuming
my speculation is correct).

My current favorite solution for priming a remote server (because of
the difficulty of moving a BackupPC pool from one system to another,
and because of my desire not to move my backup servers) is to copy all
of the remote server's files to a removable hard drive and attach it
to a system that is local to BackupPC. I then create the new backup
host and point it at the computer with the removable hard drive, which
I have set up with the same rsyncd.conf settings. Once a full backup
has completed, I just reconfigure the backup job to point to the right
server. This is somewhat similar in spirit, if not in detail.

> As a feature request it
> might be nice to have a scripted 'prime-new-host' that does the grunge
> work for you and had a way to distinguish the faked starting point from
> a real backup.

I would love to see this abstracted a little more into a "copy-host"
feature that could copy a host to a new host, either within the same
pool or to a different pool. After reading about how the NewFiles file
works, it doesn't seem we would even have to worry about preserving
hard links, as long as NewFiles were written to record *all* files as
new: the BackupPC_link process would resolve the issue for us. I don't
have time right now, but that is how I think I will attempt to move a
host from one pool to another: let BackupPC_link take care of it. All
I'll have to do is walk the tree, copying files and adding them to
NewFiles. It's hard on BackupPC_link, but I can live with that.

> > For rsync and rsyncd, BackupPC_dump handles the transfer directly:
> > there is no program like BackupPC_tarExtract to handle hashing and
> > pooling. It seems that BackupPC is depending on rsync to handle these
> > details completely on its own.
> > However, while I can see in the code
> > where the transfer is started, I can't find the code that is actually
> > *doing* the transfer.
>
> That should be in the File::RsyncP perl module that gets installed in
> your perl system library area. Do a 'locate RsyncP.pm' to find it.

I will look for it. Thank you.

Tim Massey

_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/