Les Mikesell wrote:
> Don't you run several concurrent backups? Unless you limit it to the 
> number of CPUs in the server the high compression versions will still 
> be stealing cycles from the LAN backups.
I'm backing up 5 machines.  Only one is on the internet, and the amount 
of CPU time/sec the internet backup takes is very small.

For example, to compress a 5,861,382 byte mp3 file with bzip2 -9 takes 
3.3 seconds.  That's 1,776,176 bytes/sec.  The DSL line pumping the data 
to me is pushing 42,086 bytes/sec, and that includes ethernet/IP/ssh/ssh 
compression overhead.  (Hmm, now that I think about it, the real 
transfer rate could be higher because ssh is compressing, but even if it 
were 100 KB/sec of real data it would still be peanuts.)

Does that make the theoretical load on the CPU about 6%, if I got the 
math right?  100*1024/1776176 = 0.06.  Checking the current backup, 
yeah, it's about 2% of a CPU right now.
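For anyone who wants to redo the arithmetic, here is a quick sketch of the estimate above (the figures are the ones from this message; the 100 KB/sec line rate is the pessimistic assumption, not a measurement):

```python
# Back-of-the-envelope CPU-load estimate from the numbers in this post.
mp3_size = 5_861_382            # bytes in the sample mp3
bzip2_time = 3.3                # seconds for bzip2 -9 to compress it
compress_rate = mp3_size / bzip2_time   # ~1,776,176 bytes/sec of CPU throughput

line_rate = 100 * 1024          # assume ~100 KB/sec of real post-ssh data (pessimistic)

cpu_load = line_rate / compress_rate    # fraction of one CPU spent compressing
print(f"{cpu_load:.0%}")        # roughly 6%
```

So even at the pessimistic line rate, compressing the incoming internet backup costs on the order of 6% of one CPU, which matches the ~2% observed at the actual DSL rate.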

>> I am using ssh -C as well.  And see my other post about rsync 
>> --compress -- it is broken or something.
>
> It is just not supported by the perl rsync version built into backuppc.
>

Ah -- well, it fails quite silently =-).  I couldn't figure out why the 
same files kept getting transferred over and over again...

Rich


_______________________________________________
BackupPC-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/