3:31pm, Andreas Micklei wrote:

> On Friday, 4 January 2008, Carl Wilhelm Soderstrom wrote:
>> On 01/03 09:46 , dan wrote:
>>> i doubt that dd will do much better than tar,
>>
>> With the backuppc data pool, dd'ing the disk partition is several times
>> faster than tar, because it avoids all the seeks needed to traverse the
>> hardlinks. That has been my experience, at least.
>>
>> I haven't tried 'dump'; but this may offer a comparable improvement.
>
> I have and it works really well. After reading some HOWTOs on dump/restore
> (Google is your friend) it is quite easy to use and fast enough for me. A
> little theory explains why it outperforms rsync or even tar when there are
> lots of hardlinks: dump works at the inode level, reading each inode once,
> so a file with many hardlinks is not visited once per link.
>
> Note that during dump the dumped filesystem should not be changed. It would
> be best to unmount it or mount it read-only. Here at my site this is not
> possible, so I just stop almost every daemon, including backuppc, before
> starting the backup to get a consistent dump. I let BackupPC do its work at
> night and dump the disk array of the BackupPC server to an external USB disk
> during the day. USB disks are swapped and stored off-site. This is probably
> the best backup solution you can get without buying expensive tape loaders
> or similar high-end equipment.
>
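For reference, that stop/dump/restart cycle might look roughly like the
sketch below. The init-script name, device node, mount point, and output
path are all assumptions for illustration; adjust them for your site.

```shell
#!/bin/sh
# Sketch: quiesce BackupPC, then take a level-0 dump of its filesystem.
# /dev/sdb1, /var/lib/backuppc and /mnt/usb are assumed names.
set -e

/etc/init.d/backuppc stop                  # stop the daemon so the pool is quiet
mount -o remount,ro /var/lib/backuppc      # read-only for a consistent dump

dump -0 -f /mnt/usb/backuppc.dump /dev/sdb1   # level-0 (full) dump of the fs

mount -o remount,rw /var/lib/backuppc
/etc/init.d/backuppc start
```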

If you are using lvm2 (which is pretty common, given the necessary 
single-filesystem size for backuppc), then you should be able to take a 
snapshot of the logical volume and back up from that instead, without 
stopping BackupPC for the whole dump.
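A minimal sketch of that approach, assuming a volume group "vg0" with a
logical volume "backuppc" (names and snapshot size are illustrative, not
taken from the thread):

```shell
#!/bin/sh
# Sketch: snapshot the BackupPC LV, dump the snapshot, then drop it.
set -e

# Snapshot needs spare extents in the VG; 5G of copy-on-write space assumed.
lvcreate --size 5G --snapshot --name backuppc-snap /dev/vg0/backuppc

# Dump the frozen snapshot while BackupPC keeps running on the origin LV.
dump -0 -f /mnt/usb/backuppc.dump /dev/vg0/backuppc-snap

lvremove -f /dev/vg0/backuppc-snap         # discard the snapshot when done
```

Stopping BackupPC briefly around the lvcreate still helps, so the snapshot
captures a pool that is not mid-write.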

Paul

_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List:    https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki:    http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/
