Andrew Sackville-West <[EMAIL PROTECTED]> wrote:
> For example, make multiple identical backups. Sprinkle them in various
> locations. On a periodic, routine basis, test those backups for
> possible corruption. If they're clean, make a new copy anyway to put in
> rotation, throwing away the old ones after so many periods. If you
> find a corrupt one, use one of your clean ones to reproduce it and
> start over.

        Here's what I do with my systems:

        I use backup2l to make incremental backups to a partition mounted at
/dump. These backups are then GPG-encrypted with the key of each server's
owner. They are then rsynced to a central repository on one of the servers,
and from there rsynced down to my home system.
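
        In cron-job form, the per-server side is roughly the sketch below.
The key ID, hostname, and archive glob are placeholders -- backup2l's actual
file names depend on the VOLNAME set in its conf:

    #!/bin/sh
    # Nightly per-server run, roughly. Key ID, hostname, and paths are
    # hypothetical; /dump is where backup2l drops its tarballs.
    backup2l -b                            # make the next incremental backup
    for f in /dump/all.*.tar.gz; do        # encrypt any archive not yet done
        [ -e "$f.gpg" ] || gpg --batch --encrypt \
            --recipient [email protected] --output "$f.gpg" "$f"
    done
    # Push the encrypted archives up to the central repo; the home machine
    # later pulls the whole repo down the same way.
    rsync -av /dump/*.gpg "repo.example.net:/srv/backup-repo/$(hostname)/"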

        So each server's backup data is always in three locations: its own
machine, the repo, and my home machine.

        When the /dump partition starts to get a bit full somewhere, I
create a DVD image of some of the tarballs and burn four copies. Two stay
at home, one goes to the friend who manages the repo, and one gets mailed
to a friend in Austria.
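
        The DVD step is nothing fancy -- roughly the following, though the
staging path, volume label, and burner device are whatever applies on the
machine in question, and genisoimage/growisofs are just one plausible tool
choice:

    #!/bin/sh
    # Pack a batch of encrypted tarballs into one ISO and burn four copies.
    # Staging directory, label, and /dev/sr0 are placeholders.
    genisoimage -r -J -V dump-archives -o /tmp/dump.iso /dump/to-burn/
    for i in 1 2 3 4; do
        printf 'Insert blank DVD %s and press Enter: ' "$i"; read _
        growisofs -dvd-compat -Z /dev/sr0=/tmp/dump.iso
    done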

        This system works well, but mainly because we have less than 300 GB
of data that needs to be backed up and our backup cycles are long -- a new
level-1 backup is generated maybe once every six months.
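
        The long cycle is mostly a matter of the level settings in
backup2l.conf. A fragment in that spirit, if I'm remembering the stock
variable names right -- the values here are illustrative, not our
production numbers:

    # Fragment of a backup2l.conf in that spirit; values are illustrative.
    VOLNAME="all"        # prefix for the archive file names
    MAX_LEVEL=3          # deepest incremental level allowed
    MAX_PER_LEVEL=8      # backups at each level before dropping a level
    MAX_FULL=2           # full (level-1) backups kept around
    GENERATIONS=1        # how many old generations to retain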

        If anyone wants to check out the backup2l.conf and associated files,
let me know and I'll send them to you off-list.

        Cheers,
                Tyler

