Les,

As I read your answer, I see that the solution I proposed is far from easily 
doable (even though it looks appealing) with the current release of BackupPC.

But maybe BackupPC could be enhanced to support a slave server.

I mean: during the nightly job, new files are identified.
For each such file, the MD5 sum could be sent to the slave side, and then:
if the file already exists on the other side, a hardlink is created on the 
slave; if it is a new file, its contents are transferred to the slave.
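To make the per-file protocol concrete, here is a minimal Python sketch. The function name, the flat one-file-per-digest pool layout, and the use of a local directory to stand in for the slave are all assumptions for illustration; real BackupPC pools use compression, partial-file hashes, and a nested cpool directory layout, and a real implementation would talk to the slave over the network.

```python
import hashlib
import os
import shutil

def sync_to_slave(src_file, dest_path, slave_pool):
    """Sketch of the proposed per-file step: hash the new file, then on
    the slave either hardlink from its pool (content already present)
    or receive the full content once. Here the "slave" is just a local
    directory tree standing in for the remote server."""
    with open(src_file, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()  # the pool key sent to the slave
    pool_entry = os.path.join(slave_pool, digest)
    if os.path.exists(pool_entry):
        # The slave already has identical content: only a hardlink is needed,
        # so no file data crosses the network.
        os.link(pool_entry, dest_path)
        return "hardlink"
    # Genuinely new content: transfer it once, then pool it via a hardlink.
    shutil.copyfile(src_file, pool_entry)
    os.link(pool_entry, dest_path)
    return "transfer"
```

The second backup of identical content then costs only the MD5 exchange and a hardlink, which is the network saving the proposal is after.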

Regarding re-syncing two out-of-sync BackupPC servers, the operation would 
only require (if I understand how BackupPC works):
- sending the cpool directory (using rsync, to handle a non-empty directory 
on the slave);
- scanning the pc directory and sending the MD5 sums to the slave so it can 
recreate the correct hardlinks.
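The second resync step above could look roughly like the following Python sketch. The function names and the flat digest-named pool are assumptions for the example; the point is that only small (path, MD5) records cross the wire, while the file data itself arrives once via the cpool rsync.

```python
import hashlib
import os

def enumerate_pc_links(pc_dir):
    """On the master: walk the pc tree and yield (relative path, MD5)
    pairs. Only these small records need to be sent to the slave."""
    for root, _dirs, files in os.walk(pc_dir):
        for name in files:
            full = os.path.join(root, name)
            with open(full, "rb") as f:
                digest = hashlib.md5(f.read()).hexdigest()
            yield os.path.relpath(full, pc_dir), digest

def rebuild_pc_tree(records, slave_pc, slave_pool):
    """On the slave: recreate each backup file as a hardlink into the
    already-rsynced pool, so no file contents are retransmitted."""
    for rel, digest in records:
        dest = os.path.join(slave_pc, rel)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        os.link(os.path.join(slave_pool, digest), dest)
```

Because the slave rebuilds the links itself from the digests, rsync never has to discover and match hardlinks, which is exactly the expensive part of `rsync -H`.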

IMHO, this technique would avoid having to re-enumerate hardlinks when 
sending files.

Having two BackupPC servers each act as the other's slave would be a really 
efficient way to have safe backups while remaining optimal in terms of 
network and disk usage.

I'm not a coder, and I wish I could code this, but at least I'm sharing the 
idea in the hope that it is useful and that someone skilled and interested 
enough will implement it (provided that, after deeper study, it proves 
feasible).

Best regards,

Olivier.

On Tuesday, 23 August 2005 at 17:34, you wrote:
> On Tue, 2005-08-23 at 09:15, [EMAIL PROTECTED] wrote:
> > then each night we do an "rsync-2.6.6 -H" (on /var/lib/backuppc/pc)
> > between the 2 sites.
>
> First, whether this is practical or not depends on the number of files
> with hard links that rsync has to traverse.  The technique used to
> find the matching links is not efficient and slows as the number
> increases.   Also, if you only sync the pc directory you'll get
> duplicate (but working) copies since the links that tie everything into
> cpool won't be followed.  If you include cpool in the same run with some
> machines being backed up at each location you will hit conflicting
> hashed filenames in cpool which may or may not break things.
>
> > Can this config work. If not why?
> > If it can work, can we have the benefit to optimise redundant files
> > between the 2 sites?
>
> If you have to use this technique it would probably be safest to rsync
> only the individual pc directories backed up locally to the other site
> and put up with the fact that the redundant files will be stored as
> separate copies.
>
> I'd consider it much safer to simply let each backuppc server back up
> all six machines separately.  That way the pools are managed correctly
> at each location and you don't have to worry immediately if something
> goes wrong with one of the servers or you want to replace the hardware
> or software at one or the other locations.  If you do not have time or
> bandwidth for two runs to complete you might consider using only one
> live server and making an image copy of its archive partition to an
> external drive that is rotated offsite.

--
        Olivier LAHAYE
        Motorola Labs IT manager
        Saclay, FRANCE


_______________________________________________
BackupPC-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/