Les Mikesell wrote:
That should already be happening. Rsync does transfer the entire
directory tree before starting and there is a certain amount of
memory overhead per file.  If you are short of RAM on the backuppc
server, this could make the process very slow. You might have
a big improvement if you can break it up into separate runs
for individual filesystems or directories.  I'm also not sure
how well it deals with sparse files.  For example if you have
an x86_64 Linux machine you might want to see if /var/log/lastlog
appears to be a terabyte in size and exclude it if it is.
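A quick way to check whether lastlog (or any file) is sparse is to compare its apparent size with its actual disk usage. The sketch below uses a throwaway demo file rather than lastlog itself; rsync also has a --sparse (-S) option, though whether BackupPC's rsync transport passes it through is a separate question.

```shell
# Create a sparse demo file: 1 GB apparent size, almost nothing on disk.
truncate -s 1G sparse_demo

# Apparent size (what rsync would try to transfer without sparse handling):
du -h --apparent-size sparse_demo

# Actual disk usage (much smaller for a sparse file):
du -h sparse_demo

# The same comparison works on the real file:
#   du -h --apparent-size /var/log/lastlog
#   du -h /var/log/lastlog

rm sparse_demo
```

If the apparent size is wildly larger than the disk usage, the file is sparse and is a good candidate for an exclude.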
I upgraded my BackupPC server from 256MB to 1GB, but backups are still taking a long time (over 24 hours for ~40GB using ssh/rsync). I think the problem is the client. The server creates an ssh connection to the client and runs rsync there. The client has 512MB RAM and 512MB swap, but it is a heavily used host (5-10 users, most of them running VNC, some of those running GNOME). rsync is taking about 25% of the CPU, and the swap space is almost completely used.
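For anyone diagnosing the same thing, a quick way to confirm the client is memory-bound is to watch swap usage and rsync's resident size while a backup is running (the exact ps output format varies by system):

```shell
# Overall memory and swap pressure on the client:
free -m

# Per-process memory for any running rsync instances
# (rss = resident KB, vsz = virtual KB); only useful mid-backup:
ps -o pid,rss,vsz,pcpu,args -C rsync
```

If rss keeps growing with the file count while swap fills, that points at rsync's per-file memory overhead rather than disk or network speed.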

So with that, I think I'll have to upgrade the RAM and increase the swap space if I can. I'm backing up the /home partition, which contains some large source repositories as well as user accounts, plus the /var partition (/var -> /home/var). I could break this down so they are backed up in separate runs, but that's another story for another day.
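For the record, one way to split this into separate runs is to define two BackupPC "host" entries that both alias the same real client, each backing up one share. This is only a sketch; the host names are made up, and the real per-host files live under the pc/ config directory:

```perl
# pc/myclient-home.pl -- hypothetical per-host config for the /home run
$Conf{ClientNameAlias}    = 'myclient';   # both entries point at the real host
$Conf{RsyncShareName}     = ['/home'];
$Conf{BackupFilesExclude} = { '/home' => ['/home/var'] };

# pc/myclient-var.pl -- hypothetical per-host config for the /var run
$Conf{ClientNameAlias} = 'myclient';
$Conf{RsyncShareName}  = ['/home/var'];
```

Each run then walks a smaller directory tree, which keeps rsync's per-file memory overhead down on the client.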

Cheers,
Brendan.



_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/backuppc-users
http://backuppc.sourceforge.net/
