On Thu, 2005-11-17 at 10:51, Carl Wilhelm Soderstrom wrote:
> On 11/16 10:42 , Les Mikesell wrote:
> > Even when rsync does a full, it just sends the filename list, then
> > exchanges block checksums over the files that already match.  This
> > may take a long time but it uses very little bandwidth for the
> > unchanged files.  
> 
> unfortunately, it seems that if the last (rsync) backup was an incremental,
> the next full (rsync) backup won't use those already transferred files. I
> moved 8GB of files over a T1 recently, but the transfer was done as an
> incremental backup. I realized that the next backup was going to be an
> incremental as well, and again likely transfer all those files; so I forced
> it to be a full backup with:
> 
> $Conf{FullPeriod} = .80;
> 
> (blackouts keep it from running more than once a day; but this also seems to
> force it to the head of the queue for backups, so it starts as soon as the
> blackouts end instead of waiting for others).
> 
> Unfortunately, it transferred all that data over again, taking another 40
> hours (!). I know it's not the data changing, because the most recent full
> backup only took 200 minutes.
> 
> I had thought that having a hash-locatable copy of the file in the cpool
> would keep an identical file from being transferred again. Can someone
> explain why this doesn't happen?

The remote rsync doesn't send the hash matching the BackupPC naming
scheme, so it can't identify matches that aren't in the previous
tree for that host.  Even if it did, the hashing isn't perfect, so
additional checks would be needed that a stock rsync at the other
end can't do.  It should be possible to merge the incrementals into
the set of files that are checked, but currently only the last full
is used.  I've gotten by pretty well by just starting a manual full
from the web interface before going home if I know there are big
changes on one of my remote hosts.
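
Something like this should also work from the shell if you'd rather
not use the web interface (untested sketch -- the argument order for
the "backup" message is from memory of the docs, so check it against
your version; "myhost" and the install path are placeholders):

    # run as the backuppc user; the trailing 1 requests a full
    su backuppc -c "/usr/local/BackupPC/bin/BackupPC_serverMesg \
        backup myhost myhost backuppc 1"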

> > And you can add the -C option to the ssh
> > command to save more.  
> 
> This option really should be mentioned in the notes in the config.pl file.
> For old machines with slow CPU and lots of bandwidth it might not be
> worthwhile; but for fast machines with slow links, it would be worth
> reminding us of it. :)

For slow CPUs it might also be worth adding -c blowfish to select
the more processor-friendly encryption.
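
For example, assuming you start from the stock RsyncClientCmd in
config.pl, the change would look something like this (sketch only --
merge the flags into whatever command you actually use):

    # add -C (compression) and -c blowfish (cheaper cipher) to ssh
    $Conf{RsyncClientCmd} = '$sshPath -q -x -C -c blowfish'
                          . ' -l root $host $rsyncPath $argList+';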

> In my copious spare time I should try to determine at which point this
> becomes a worthwhile option to employ.

I don't think it makes sense on a local network.  It probably always
makes sense on WANs slower than a few T1s.
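
A quick way to find out for a given link is to time a sample
transfer both ways (rough sketch; /tmp/sample.tar stands in for
some representative file on the remote host):

    # compare wall-clock time with and without compression
    time ssh remotehost 'cat /tmp/sample.tar' > /dev/null
    time ssh -C remotehost 'cat /tmp/sample.tar' > /dev/null
    # if the -C run wins, the link is the bottleneck;
    # if it loses, the CPU is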

> AFAIK, File::RsyncP does not allow compression. (someone please correct me
> if I'm wrong). I wonder if a command could be concocted which piped the
> rsync stream through a compression tool (bzip2? 7zip?) which offered better
> compression than ssh -C. 

Note that ssh also has a config file where you can specify
CompressionLevel from 1 to 9 (default is 6).  See man ssh_config.
You could also enable compression there if you want it for every
host.
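
For example (sketch only -- the host pattern is a placeholder, and
note that some OpenSSH versions only honor CompressionLevel for
protocol 1; see ssh_config(5)):

    # ~/.ssh/config for the backuppc user
    Host *.far-away.example.com
        Compression yes
        CompressionLevel 9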

-- 
   Les Mikesell
    [EMAIL PROTECTED]



