Timothy J. Massey wrote:
> > I think there is a quick-fix here by doing a 'cp -a' of an existing
> > similar host directory to the new one before the first backup run.
>
> That is an interesting solution.  That would work for rsync (assuming 
> my speculation is correct).
>
Yes, it isn't necessary for the other methods, since they are going to 
transfer an entire full backup of all the files anyway and can then 
compute their own pool matches.
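
For the archives, here's a minimal sketch of that priming step.  The 
TopDir path and the host names below are placeholders; adjust them for 
whatever your install actually uses:

    # Seed the new host's directory from a similar existing one so the
    # first rsync run can match against these files instead of pulling
    # everything over the network.
    cd /var/lib/backuppc/pc   # TopDir/pc; location varies by install
    cp -a oldhost newhost     # full copy of the data
    # or, much cheaper on disk, hard-link instead of copying:
    # cp -al oldhost newhost

A real 'prime-new-host' script would also want to mark the seeded 
directory somehow, per the point below about distinguishing the faked 
starting point from a real backup.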

> > As a feature request it
> > might be nice to have a scripted 'prime-new-host' that does the grunt
> > work for you and has a way to distinguish the faked starting point from
> > a real backup.
>
> I would love to see this abstracted a little more into a "copy-host" 
> feature, that could copy a host to a new host, either within the same 
> pool or to a different pool.  After reading about how the NewFiles 
> file works, it doesn't seem like we would even have to worry about 
> preserving hardlinks if NewFiles were configured to write down *all* 
> files as new files:  the BackupPC_link process would resolve the 
> issues for us.
>
> I don't have time right now, but that is how I think I will attempt 
> to move a host from one pool to another:  let BackupPC_link take care 
> of it.  All I'll have to do is walk the tree, copying files and 
> adding them to NewFiles.  It's hard on BackupPC_link, but I can live 
> with that.
>

When copying within the pool, all you have to do is preserve the 
existing links, which cp -l would do, since they already point to the 
correct pool files (cp -a would copy the data rather than link it).  
When copying to a new server, a utility could be a little more 
intelligent: if it tracked the inodes of the source files, then when 
it saw a duplicate it could send the filename of the first copy 
instead of the data.

I think Craig has done some work on a utility to copy an entire 
archive, but I believe that requires the entire original pool to be 
copied first, which might produce hash collisions if you move to a 
different existing server and is overkill if you only want to move 
some of the hosts.  I'd like to see something that could move the 
per-pc directory for one or several hosts at a time, relinking the 
first instance of each file the hard way and then linking later 
matches in the same run to that first one, without having to copy 
anything first.  I guess that wouldn't help with a remote move, where 
you'd like to avoid the first copy too.  Maybe a special version of 
rsync at both ends could do the hash computations to avoid copying 
anything already in the pool.
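
As a rough sketch of that inode-tracking idea (GNU find assumed, 
whitespace-free paths assumed, and 'somehost' is just a placeholder):

    # On the source server: print COPY for the first path seen on each
    # inode and LINK (back to that first path) for every later one, so
    # each file's data only crosses the wire once.
    cd /var/lib/backuppc/pc
    find somehost -type f -printf '%i %p\n' |
      awk '{ if ($1 in first)
               printf "LINK %s %s\n", first[$1], $2
             else {
               first[$1] = $2
               printf "COPY %s\n", $2
             }
           }'

The receiving side would transfer the data for each COPY line and 
recreate each LINK line with ln; relinking those files into the new 
server's pool is the part that still needs the "hard way" pass 
described above.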

-- 
  Les Mikesell
   [EMAIL PROTECTED]