Vetch wrote:

>     It can do either, depending on whether you use the tar, smb, or rsync
>     transfer methods. 
> 
> The Rsync method presumably from your previous comment would check then 
> send...?

Yes - if a matching file exists in the previous backup, only the 
differences are sent.

>     I think rsync will do it as well as it can be done.  However, it is hard
>     to tell how much two different Exchange database dumps will have in
>     common.  Then there is the issue that you could reduce the size by
>     compressing the file but doing so will make the common parts impossible
>     to find from one version to another.  You can work around this by using
>     ssh compression or something like an openvpn tunnel with lzo compression
>     enabled, leaving the file uncompressed.
> 
> 
> I see - so you wouldn't compress the file, you'd compress the tunnel...
> Makes sense...

This takes some extra CPU work but otherwise it would be impossible to 
find the matching parts.
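As a sketch of the ssh-compression approach (host and paths here are hypothetical):

```shell
# "ssh -C" compresses the transport, so the dump file itself can stay
# uncompressed on both ends and rsync can still find matching blocks
# between one run and the next.
rsync -av -e "ssh -C" /var/dumps/exchange.bak backuppc@backupserver:/pool/
```

rsync's own -z option compresses the data stream for the same effect, limited to the rsync traffic itself.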

> Would it then still get compressed when stored at the other end?

Yes; in fact, the BackupPC side runs a Perl implementation of rsync that 
performs the comparison on the fly against the compressed copy (while 
presenting it as the uncompressed version to match the other end).

> So I would output a copy of the database to the same file name, and 
> rsync would just take the changes...
> I'll try it out...

Yes, although depending on the structure of the database dump and how 
much it changes from day to day, there may not be much in common.

> How well would that work for something like LVM snapshotting?
> I'm thinking of migrating my windows servers to Xen Virtual Machines on 
> LVM drives
> If I take a snapshot of the drive and then mount it somewhere, could I 
> get BackupPC to copy only the changed data as rsynch files?

Rsync will not work directly against devices, so you'd have to make a 
file copy first.  Also, when constructing the destination file after the 
differences are found, you need room for two complete copies, because 
the new version is built from a combination of chunks from the old one 
plus the transferred differences.  If I were going to try this, I'd 
probably dd the snapshot image and pipe it to split to break it into 
some number of chunks first, then back up the directory of chunks.  I'm 
not sure what a good chunk size would be, though.
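The dd-plus-split idea might look like this (the real LVM device path is hypothetical; a scratch file stands in for the snapshot so the commands run as-is, and 2M is just a demo chunk size):

```shell
# Stand-in for the snapshot device (e.g. /dev/vg0/exchange-snap):
dd if=/dev/zero of=/tmp/snapshot.img bs=1M count=8 2>/dev/null
mkdir -p /tmp/exchange-chunks
# Stream the image and cut it into fixed-size pieces.  Chunks that are
# unchanged between runs can then be matched or pooled individually.
dd if=/tmp/snapshot.img bs=1M 2>/dev/null | split -b 2M - /tmp/exchange-chunks/chunk.
ls /tmp/exchange-chunks
```

BackupPC would then back up /tmp/exchange-chunks like any other directory.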

> With regards to the storage - does it keep copies of all the versions of 
> the file that is backed up, with differences stored and are they 
> separated into chunks at that level, or are they stored as distinctive 
> files?

All files that are exactly identical are pooled into a single instance 
(so you might get lucky with the chunking approach if some parts are 
unchanged).  However, if there is any difference at all, they are stored 
as separate complete files.  Something like rdiff-backup might be better 
for huge files with small changes.

-- 
   Les Mikesell
    [EMAIL PROTECTED]


_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/
