On Tue, 2007-01-09 at 17:11 -0500, Timothy J. Massey wrote:

> I don't use compression.  The fewer layers in my backup strategy the 
> better:  and disk space is cheap!  :) 

Yes, but the disk is the most likely place to get an error, and a
smaller compressed file has correspondingly less exposure to that
risk...  Fast CPUs and RAM are cheap too, but I make my desktop
machine do double duty since the backups all run at night.
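
To put a rough number on that intuition, here's a back-of-the-envelope
sketch in Python (the per-byte error rate and the 3:1 compression
ratio are made-up illustrative figures, not measurements):

    # Chance of hitting at least one unrecoverable error when
    # reading a file of a given size, assuming independent
    # per-byte errors (a simplification, of course).
    def p_read_error(size_bytes, per_byte_error_rate=1e-15):
        return 1 - (1 - per_byte_error_rate) ** size_bytes

    uncompressed = 10 * 1024**3          # a 10 GB pool file
    compressed = uncompressed // 3       # assume ~3:1 compression

    print(p_read_error(uncompressed))    # the larger exposure
    print(p_read_error(compressed))      # roughly a third of it

Same data, a third of the surface area exposed to the error rate.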

> (The fact that BackupPC mangles 
> every file name is bad enough...)   The CPU usage on the backup server 
> is pure rsync overhead.

The question is: is the CPU overhead of the rsync protocol more than
offset by all the data you never have to transfer again?  Looking at
MB/sec isn't really a good measure of what an incremental rsync run
is doing, since most of its work is deciding what *not* to send.
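
In case it isn't obvious why MB/sec misleads here: rsync's whole point
is to figure out which blocks the other side already has and skip
them.  A minimal sketch of that idea in Python (this does naive
fixed-offset block comparison, not the real rolling-checksum protocol,
so treat it as an illustration only):

    import hashlib

    BLOCK = 4096

    def block_sums(data):
        return [hashlib.md5(data[i:i+BLOCK]).digest()
                for i in range(0, len(data), BLOCK)]

    def delta(old, new):
        """Blocks of 'new' whose checksum differs from 'old'."""
        old_sums = block_sums(old)
        changed = []
        for i in range(0, len(new), BLOCK):
            j = i // BLOCK
            block = new[i:i+BLOCK]
            if j >= len(old_sums) \
               or hashlib.md5(block).digest() != old_sums[j]:
                changed.append((i, block))
        return changed

    # A one-byte change in an 8 MB file costs one 4 KB block,
    # not 8 MB on the wire:
    old = bytes(8 * 1024 * 1024)
    new = bytearray(old)
    new[123456] ^= 0xFF
    print(len(delta(old, bytes(new))), "block(s) to transfer")

A run that sends one block out of 2048 looks terrible measured in
MB/sec, and is exactly the behavior you want.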

> Might it help if the rsync protocol on the backup server were not 
> written in perl?  Or do I misunderstand, and it's in a compiled library? 

Yes, it would likely help on both the CPU and RAM side, although I'm
not sure how much of that the checksum-caching scheme already
recovers.
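
For what it's worth, the idea behind checksum caching is just to
spend a little disk space so the per-block digests don't have to be
recomputed (i.e. the whole pool file re-read) on every run.  A
hypothetical sketch of that pattern in Python -- the ".sums" side
file and its layout are inventions for illustration, not BackupPC's
actual on-disk format:

    import hashlib, os, pickle

    BLOCK = 4096

    def cached_block_sums(path):
        """Per-block MD5s, recomputed only when the file changes."""
        cache = path + ".sums"          # hypothetical side file
        if os.path.exists(cache) and \
           os.path.getmtime(cache) >= os.path.getmtime(path):
            with open(cache, "rb") as f:
                return pickle.load(f)   # cheap: skips the data read
        sums = []
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK)
                if not block:
                    break
                sums.append(hashlib.md5(block).digest())
        with open(cache, "wb") as f:
            pickle.dump(sums, f)
        return sums

How much that saves depends on how often the same file gets
checksummed across runs, which is exactly the part I'm unsure about.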

-- 
  Les Mikesell
   [EMAIL PROTECTED]


