Hello,

I'm having trouble backing up a server with 15 million files totaling
approx 100 GB.

The BackupPC machine has 1 GB of RAM.  I've tried switching from rsync to
tar, which helps, but the 1 GB still gets exhausted during the backup; the
machine then starts to thrash and the backup never finishes.

I've tried excluding what I can, but the total number of files still ends
up being too high.  One thing that did help was splitting the job into
many little pieces, but the administrative overhead of doing that is just
huge.
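
For reference, the split ends up looking roughly like this in the per-host
config file (the paths here are just placeholders for the real layout):

    $Conf{XferMethod} = 'tar';

    # One big tree broken into several smaller shares, so each tar run
    # only has to walk part of the file list.
    $Conf{TarShareName} = [
        '/data/part1',
        '/data/part2',
        '/data/part3',
    ];

    # Things that don't need to be kept ('*' means the list applies to
    # every share).
    $Conf{BackupFilesExclude} = {
        '*' => [ '/tmp', '/cache' ],
    };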

Is there a way to reduce the memory requirements even further?  Maybe some
little script could split the file transfer into smaller, more manageable
pieces behind the scenes?  Or maybe this is a question for the dev list.
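
What I'm imagining is something like the sketch below: a little helper that
asks the client for its top-level directories over ssh and prints a share
list to paste into the per-host config.  The host name, the /data path and
the ssh setup are all just placeholders, not something BackupPC ships with:

    #!/usr/bin/perl
    # Hypothetical helper: turn the client's top-level directories into a
    # $Conf{TarShareName} stanza, so the split doesn't have to be kept up
    # to date by hand.  Assumes passwordless ssh to the client.
    use strict;
    use warnings;

    my $host = shift @ARGV or die "usage: $0 <client-host>\n";

    # Ask the client for the directories directly under /data
    # (placeholder path).
    my @dirs = `ssh $host find /data -mindepth 1 -maxdepth 1 -type d`;
    chomp @dirs;

    # Print a stanza to paste into the host's config .pl file.
    print "\$Conf{TarShareName} = [\n";
    print "    '$_',\n" for @dirs;
    print "];\n";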

This is really annoying: I've been using BackupPC for dozens of systems,
but this is the first one that I'm not able to back up at all.  I should
also mention that I'm backing up a FreeBSD system to a Linux server, but I
doubt that affects the memory requirements in any way.

Thanks for your help,
Tomas


