On Fri, 2006-06-09 at 16:13 +1000, Adam Goryachev wrote:
> I've been looking around for a new backup solution, and so far, backuppc
> looks like the best fit. However, after running through the install
> options, there were a lot of config choices designed to remove old full
> backup and old incremental backups. So I'm not so sure if this matches
> my needs anymore.
> 
> Basically, I am looking at building a backup server with around 1.6TB of
> storage space available, and using that to backup a single Linux server
> (well, actually two linux boxes) as a single full backup on day 0 and
> then making incremental backups daily for ever after that. This way, we
> can grab any version of any file since the original full backup.
>
> Can backuppc achieve this??

BackupPC collapses all copies of identical files into one pooled
copy, regardless of whether they appear in fulls or incrementals,
so there is no storage-space advantage in avoiding additional full
runs.  Incremental runs are always based on the last full, so
they grow increasingly large over time, and each transfer method
has quirks that are corrected only at the fulls - for example,
tar-based incrementals won't pick up old files that have moved
under a renamed directory.  You also probably want a scheme that
eventually expires the daily copies of growing log files, and of
things you intentionally deleted, keeping only the archived
copies and the full runs you've chosen to retain.  The default
scheme of weekly fulls might be fine if you just bump the number
of fulls to keep up to a huge value.  That way you'll have every
file for the last week (or longer if you want), plus weekly
snapshots going back forever.
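That scheme maps onto a handful of settings in BackupPC's config.pl.
A sketch of what that might look like - the values below are
illustrative, not recommendations:

```perl
# Illustrative config.pl settings (values are examples only)
$Conf{FullPeriod}  = 6.97;    # run a full roughly weekly (the default)
$Conf{IncrPeriod}  = 0.97;    # run an incremental roughly daily (the default)
$Conf{FullKeepCnt} = 1000;    # keep effectively all weekly fulls
$Conf{IncrKeepCnt} = 6;       # keep about a week's worth of incrementals
```

The large FullKeepCnt is what turns the default weekly-full schedule
into "weekly snapshots going back forever."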

> Based on our calculations, we are hoping the above will be sufficient to
> backup all data for at least 5 years, with the original estimate at
> around 22 years.

Since Backuppc compresses the files, you might double that, depending
on how compressible the files are.

>  We factored in some allowance for data requirements to
> expand, and hence brought it back to 5 years. The idea is also that in
> 5 years we can just pull out the old HDD's and replace them with the
> then current size HDD which will probably be something like 2 - 5TB per
> drive...

It is a problem to copy a large BackupPC archive by normal means
because of all the hardlinks, but it can be done with an image copy
followed by growing the filesystem - or, if you start with LVM, you
can add more drives later.
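The image-copy approach might look like this - a sketch only, with
hypothetical device names, assuming the pool lives on its own ext3
partition and BackupPC is stopped first:

```
# /dev/sdX1 = old pool partition, /dev/sdY1 = partition on the new, larger disk
dd if=/dev/sdX1 of=/dev/sdY1 bs=4M    # raw image copy preserves all hardlinks
e2fsck -f /dev/sdY1                   # check the copy before resizing
resize2fs /dev/sdY1                   # grow the filesystem to fill the new disk
```

A raw copy sidesteps the hardlink problem entirely because it never
walks the directory tree - it copies the filesystem structures as-is.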

> Finally, we would like to take the latest 'merged' snapshot every month
> and burn that to DVD storage.

You can do that with the archive host feature, or roll your own with
a scripted run of BackupPC_tarCreate.
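A roll-your-own version might look something like this - a sketch
only; the host name, share, output path, and install path are
assumptions for illustration:

```
# Dump the latest backup (-n -1) of host 'myhost', share '/', as a tar
# stream, split into roughly DVD-sized chunks for burning
/usr/share/backuppc/bin/BackupPC_tarCreate -h myhost -n -1 -s / . \
  | split -b 4400m - /tmp/myhost-archive.tar.
```

Each resulting chunk can then be burned to a DVD with your preferred
burning tool.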

-- 
   Les Mikesell
    [EMAIL PROTECTED]




_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/
