Joe Casadonte wrote:
> I'm using BackupPC 3.0.0beta3, if it matters.
> 
> What is the difference between a Full and Incremental backup when
> Pooling is involved?  In a non-pooling backup scheme, a full backup
> backs /everything/ up, whether or not it's changed since the last full
> backup.  This way, even if File A has not changed, it's still backed
> up, and if a restore is needed and the current full backup is corrupt,
> you can always go to the previous one.  With BackupPC, as I understand
> it, there will be only one copy of the file regardless of how many
> full backups have occurred, and if that one file is corrupt, you're
> kind of up the creek.  Now, it's true that if the file is corrupt it
> will likely be caught by BackupPC and not be linked to, because it's
> not identical.  But still, there is a time window where this could
> occur.
> 
> If my understanding of this is correct, what is the functional
> difference between a Full and Incremental backup?  It's more
> curiosity, really; I love BackupPC and think it's great!

It's been a while since I've looked into these issues, so my understanding could 
be mistaken or out of date. My understanding is that pooling is independent of 
the backup method.  Due to pooling, there is little practical difference between 
full and incremental backups with respect to the end result.  The main effect, 
as you describe, is that file corruption *may* be caught and corrected. 
Unfortunately this only addresses corruption of the archive, not corruption on 
the host.  On the contrary, corruption on the host is compounded by full backups, 
which silently supersede the uncorrupted backups and may not be caught before 
the uncorrupted backups expire.

I agree BackupPC is a great tool which has served me well for several years, but 
your questions raise these still-unsolved issues for me.  The last time I 
checked I didn't see any validation of archives in BackupPC.  This is compounded 
by the problem of backing up the BackupPC archives themselves, caused by the huge 
number of hardlinks, which AFAIK is still unsolved and which is not a BackupPC 
problem per se, but more likely a Linux and/or rsync problem.

I would prefer that the hashes used for pooling were also used to validate files 
during optional validity checks, which would address concerns about archive 
corruption.  Ideally there would be a facility for logging errors and manually 
or automatically correcting them, and these hashes would be the same as those 
used by rsync to minimize computation time, possibly in conjunction with rsync 
transfers.  The ultimate validity-checking feature would be the optional use of 
ECC data for error correction in archived copies of the BackupPC pool, where the 
corruption issue would be most acute.  (This would be a lower priority on my 
wish list.)
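
In the meantime, something along these lines can be approximated outside of 
BackupPC.  Below is a minimal sketch, not BackupPC's own hashing (as I recall, 
3.x pool filenames are derived from a *partial* MD5 of the contents), so it keeps 
its own manifest of full-file digests of the on-disk pool files.  The pool and 
manifest paths are assumptions; adjust for your installation.

#!/usr/bin/env python3
# Sketch: build and verify an independent checksum manifest for a BackupPC pool.
# Digests are of the on-disk (compressed) pool files, which is enough to detect
# bit rot.  Paths below are assumptions, not BackupPC defaults everywhere.
import hashlib
import os
import sys

POOL_DIR = "/var/lib/backuppc/cpool"      # assumption: local pool location
MANIFEST = "/var/lib/backuppc/pool.md5"   # assumption: where the manifest lives

def file_md5(path, bufsize=1 << 20):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(bufsize), b""):
            h.update(chunk)
    return h.hexdigest()

def build():
    with open(MANIFEST, "w") as out:
        for root, _dirs, files in os.walk(POOL_DIR):
            for name in files:
                path = os.path.join(root, name)
                out.write(f"{file_md5(path)}  {path}\n")

def verify():
    errors = 0
    with open(MANIFEST) as f:
        for line in f:
            digest, path = line.rstrip("\n").split("  ", 1)
            if not os.path.exists(path):
                continue  # file expired from the pool since the manifest was built
            if file_md5(path) != digest:
                errors += 1
                print(f"CORRUPT: {path}", file=sys.stderr)
    return errors

if __name__ == "__main__":
    sys.exit(verify() if sys.argv[1:] == ["verify"] else build())

One caveat: I believe BackupPC_nightly can renumber pool chain files when 
collisions expire, so a mismatch on a _N file is not necessarily corruption.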

What these problems mean in practice is that I have to employ an alternate 
backup scheme for file validation, using hashes, and two BackupPC servers to 
address the lack of archive backups.  This is more redundancy than I would 
otherwise require.  If and when BackupPC performs archive checks, and backing up 
archives becomes feasible, I would like to run a single BackupPC server and 
periodically rsync the archive to a remote server.
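
For what it's worth, the copy step I have in mind is roughly the following. 
rsync's -H option does preserve hardlinks, but it has to track every linked 
inode in memory, which is exactly why this falls over on large pools.  The host 
and paths here are hypothetical.

#!/usr/bin/env python3
# Sketch: push the BackupPC data directory to a remote server with rsync.
# -a archive mode, -H preserve hardlinks (memory-hungry on large pools),
# --numeric-ids keep uid/gid numbers, --delete drop files removed locally.
import subprocess

SRC = "/var/lib/backuppc/"                      # assumption: local data dir
DST = "offsite.example.com:/backup/backuppc/"   # hypothetical remote target

subprocess.run(
    ["rsync", "-aH", "--delete", "--numeric-ids", SRC, DST],
    check=True,
)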
