On Tue, Feb 26, 2013 at 5:26 PM, John R Pierce <pie...@hogranch.com> wrote:
>
> I don't have anywhere near that sort of uptime requirement, but when data
> starts spiralling out into the multi-terabytes with billions of file
> links, rsync is painfully slow.

Yes, the one problem with BackupPC is that the sheer number of hardlinks
it uses to de-dup the data makes its archive very hard to copy as
anything other than an image-type copy.
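
A quick way to see the scale of that problem is to compare file entries
against unique inodes under the pool (assuming the default
/var/lib/BackupPC top directory; adjust the path for your install).  The
gap between the two numbers is roughly what rsync -H has to track in
memory to preserve the links:

  # total file entries in the tree
  find /var/lib/BackupPC -type f | wc -l
  # unique inodes actually stored on disk
  find /var/lib/BackupPC -type f -printf '%i\n' | sort -u | wc -l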

> the use case is more like: if the primary backup server fails, I'd like
> to have the secondary backup server running within a few hours of
> futzing, with the existing backups available for recovery.
>
> maybe I should use BackupPC's archiving feature, but if I have to
> restore 20TB or whatever of files and links from an archive, that could
> well take the better part of a week.

The simple approach is to run a second, independent instance of
BackupPC, but if your backup window is small you may have trouble
fitting two sets of backup runs into it.

> the way I figure it, DRBD would give me a backup copy of the backup
> system that's ready for near-immediate use.  failover would be a manual
> process, but simple and quick (stop DRBD, mount the archive, start the
> standby BackupPC server).

That should work, but what happens if they ever get out of sync?  How
long will it take DRBD to catch up with something that size?
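
For what it's worth, the manual failover you describe would boil down to
something like this on the standby node (untested sketch; the resource
name r0, the /dev/drbd0 device, the mount point, and the init script
name are just assumptions for your setup):

  drbdadm primary r0                  # promote the standby's copy of the resource
  mount /dev/drbd0 /var/lib/BackupPC  # mount the replicated pool filesystem
  service backuppc start              # bring up the standby BackupPC server

That part is quick as long as the mirror is current, which is why the
resync question matters.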

-- 
   Les Mikesell
     lesmikes...@gmail.com