With 370,000 files, rsync should use about 370,000 * 100 B ≈ 35 MB (+/- 10%) of memory on each side.
How fast is your CPU? Are you sure it can process the checksums fast
enough?
Are you compressing the rsync transfer, and if so, at what compression
level? rsync compression at level 3 is only slightly worse than level 9 but
uses a tiny fraction of the CPU time that level 9 does.
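As a concrete sketch (--compress-level is a standard rsync option; the host
and paths here are placeholders):

# -z enables compression; --compress-level=3 keeps the CPU cost low
rsync -az --compress-level=3 /path/to/TOP_DIR/ backupserver:/backups/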
Also, are you firing off both backups at the same time? Depending on your
hardware, you could be adding very large I/O delays from disk seeks.
I/O performance is a big factor in system load; slow I/O can give you a
two-digit system load while leaving plenty of RAM and CPU to spare.
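If overlapping jobs are the issue, one thing to try (this is the standard
BackupPC config variable; the value is just an illustration) is serializing
the backups:

# allow only one backup to run at a time so the jobs
# don't fight over the same spindles
$Conf{MaxBackups} = 1;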
I would definitely try to break down the shares a bit. You might also
consider not checksumming older files that are just in storage, and only
checksumming newer files that may be altered. Most systems will always
update the mtime of a file when it is altered, and unless you have something
special going on, the mtime is a reliable and easy check. Even if the
file was not actually changed but was opened and re-saved as-is, the mtime
will change. (Again, there are some special circumstances.)
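To illustrate with plain rsync (this is stock rsync behavior; paths are
placeholders): the default quick check compares only size and mtime, while
-c forces a full read and hash of every file on both sides:

# cheap: compares size + mtime only
rsync -av /path/to/TOP_DIR/ backupserver:/backups/
# expensive: -c/--checksum reads and hashes every file on both ends
rsync -avc /path/to/TOP_DIR/ backupserver:/backups/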
I think the most likely cause here is the checksumming eating up RAM as more
and more (and larger) files are checksummed in parallel, spilling over into
swap. Swap then adds I/O transactions to the system, which in the best
case adds work to the CPU and disk controller, and in the worst case the
swap is on the same physical drive as TOP_DIR. It then becomes a traffic
jam: every extra file takes more RAM, therefore more swap, which slows down
the existing checksum processes because they have to wait to write, which
keeps more in the pipe, which causes more memory to be used, which means
more swap, which means more I/O, and so on.
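A quick way to watch for that spiral while a backup runs (standard Linux
tools; the 1 is the sampling interval in seconds):

# si/so are pages swapped in/out per second; values that stay
# non-zero during the backup mean you're already in the spiral
vmstat 1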
Try just trusting mtime on the files and see what happens. 370,000 isn't
that much for backuppc or rsync; I don't see issues until around 1 million
files, and at that point I break the backup set down across separate
machines.
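If you do split the tree, a sketch of what that could look like in a
per-host BackupPC config ($Conf{RsyncShareName} is the standard variable;
the /data paths are placeholders for your hashed tree):

# one share per top-level hashed directory, so each rsync run
# only has to walk a fraction of the 370,000 files
$Conf{RsyncShareName} = ['/data/0', '/data/1', '/data/2', '/data/3',
                         '/data/4', '/data/5', '/data/6', '/data/7',
                         '/data/8', '/data/9'];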
On Tue, Nov 24, 2009 at 8:42 PM, GB <pse...@gmail.com> wrote:
> Thanks Chris. I will give it a shot and see if I can make it behave in any
> way... was hoping for a bit of a magic bullet, I suppose :)
>
>
> On Tue, Nov 24, 2009 at 10:41 PM, Chris Bennett <ch...@ceegeebee.com> wrote:
>
>> Hi,
>>
>> > Thanks for the reply. The data is, in fact, "all time" in the sense
>> > that it goes back years, but it's sorted by filename, rather than date;
>> > it's essentially equivalent to how BackupPC stores data in cpool/, i.e.
>> > the first 3 characters of the filename will generate 3 levels of
>> > subdirectories. The best I was able to do, to date, was to make 10
>> > shares, 1-9, and back up 10 separate backup trees. But that was before,
>> > when I had about 100k files... I tried this recently, and seem to have
>> > made it go under. So I guess I'd need to make TWO levels of shares, so
>> > 1/0-1/9, 2/0-2/9, etc. Then, maybe, once I go through the full loop,
>> > it'll be easier to perform future incrementals since the delta will be
>> > small.
>>
>> Yeah, I've been able to archive large pools of files that have aged,
>> so that backuppc doesn't have to consider such a large filelist. I'm
>> not too sure on the mechanics of backuppc and its overhead - e.g. how
>> much work backuppc does for a full versus an incremental, or how much
>> memory is consumed per considered file. I expect someone else can more
>> succinctly answer these kinds of questions to help you build a more
>> scalable configuration.
>>
>> > My BackupPC box doesn't swap too much, it doesn't behave like it's
>> > under massive load at all; but then again, I think my IO subsystem
>> > (Dell Perc6 + 4x WD Greens in RAID5) hopefully outperforms the speed
>> > of the link+any overhead :) I haven't tried stracing rsync on the
>> > remote server. Any suggestions on how to use it? I've never tried it
>> > before.
>>
>> Get the pid of your rsync process on the source of the data.
>>
>> Then run something like:
>> # -s3000 prints up to 3000 characters per string in each system call
>> strace -p <pid> -s3000
>>
>> This will give you insight into the open/stat/read/close cycle that
>> rsync goes through when copying data. I would expect it to cycle faster
>> than you can read, although in cases where I've seen high swap activity,
>> you'll instead see batches of the cycle followed by pauses.
>>
>> Similarly, running:
>> vmstat 1
>>
>> in another console and looking at the bi/bo columns that represent
>> blocks in/out helps you to know whether swap is being heavily used.
>>
>> Good luck and let me know if you find a good solution to your problem.
>>
>> Regards,
>>
>> Chris Bennett
>> cgb
>>