Timothy J. Massey wrote:
> We now want to add a new host, which happens to be running the exact 
> same operating system.  They're not mirror images of each other, but 
> they are naturally going to share a large number of common files.  Let's 
> assume that the new server contains 1GB worth of files, and that 90% of 
> the files are identical to files on the existing host.
>
>   
I think there is a quick fix here: do a 'cp -a' of an existing similar 
host's directory to the new one before the first backup run.  Since all 
of the backed-up files are already links to the pool and the -a option 
will preserve that, the copy won't take much additional space.  It will 
get several things wrong, like keeping the old backup count as your new 
starting point and showing a history that doesn't really exist, which 
may or may not bother you enough to fix.  The fix would be to copy the 
highest-numbered full backup directory from the source to '0' in the 
new entry and adjust the 'backups' file accordingly (rough sketch 
below).  I usually do copies like this by cd'ing into the source 
directory and running 'cp -a . /path/to/target' so I don't have to 
worry about whether the syntax is going to create a new directory with 
the source name or not.  As a feature request it might be nice to have 
a scripted 'prime-new-host' that does the grunt work for you and has a 
way to distinguish the faked starting point from a real backup.
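
Something like this, assuming the usual pc/<host>/<number> layout under 
your data directory (the paths, host names and backup number are made 
up; adjust them to your install, and crib the 'backups' line format 
from an existing host):

   # fake up a backup number 0 for the new host from a similar old one
   cd /var/lib/backuppc/pc/oldhost/42       # highest-numbered full backup
   mkdir -p /var/lib/backuppc/pc/newhost/0
   cp -a . /var/lib/backuppc/pc/newhost/0   # -a preserves attributes
                                            # (and, per above, the links)
   # then hand-edit /var/lib/backuppc/pc/newhost/backups to hold one
   # entry for backup 0, using a line from oldhost's file as a template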

> The documentation has very little in the way of details as to *how* the 
> data is transferred from the host to BackupPC, or how that data is 
> actually checked against the pool.  In the case of tar-based transfers 
> (tar and smb), it mentions that this function is performed by 
> BackupPC_tarExtract.  An examination of the source points to 
> BackupPC::PoolWrite, and the comments there explain it very nicely.
>
>   
The pool filename is a hash of some amount of the file's data, which 
gives a quick check that rules out most of the candidates; then I think 
a full pass over the file is done to check for collisions on that first 
hash (toy sketch below).  In the rsync case, you are running a stock 
rsync program on the other end that won't do this for you.
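
Purely as a toy illustration of that idea (this is not BackupPC's real 
hash or pool layout; the BackupPC::PoolWrite code mentioned above has 
the actual details), the logic is roughly:

   file=/path/to/incoming/file
   pool=/var/lib/backuppc/cpool             # example path only
   # hash the first chunk (size picked arbitrarily here) of the file
   # to get a candidate pool name
   hash=$(head -c 131072 "$file" | md5sum | awk '{print $1}')
   if [ -e "$pool/$hash" ]; then
       # candidate exists: full compare to rule out a hash collision
       if cmp -s "$file" "$pool/$hash"; then
           echo "duplicate: would hard-link to the existing pool file"
       else
           echo "collision: would store it under a variant of $hash"
       fi
   else
       echo "new content: would add it to the pool as $hash"
   fi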

> For rsync and rsyncd, BackupPC_dump handles the transfer directly: 
> there is no program like BackupPC_tarExtract to handle hashing and 
> pooling.  It seems that BackupPC is depending on rsync to handle these 
> details completely on its own.  However, while I can see in the code 
> where the transfer is started, I can't find the code that is actually 
> *doing* the transfer.
>   
That should be in the File::RsyncP perl module that gets installed in 
your perl system library area.  Do a 'locate RsyncP.pm' to find it.
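
If the locate database isn't current, you can also ask perl where it 
loads the module from:

   perl -MFile::RsyncP -e 'print $INC{"File/RsyncP.pm"}, "\n"'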

-- 
  Les Mikesell
   [EMAIL PROTECTED]

