"Ryan Bloom" <[EMAIL PROTECTED]> writes: > This is the point of pools. The idea is that you should hit a steady > state quickly. Basically, one request goes through, and it allocates > all of the memory out of pools. The next time that same request is > sent, it should use the same amount of memory. For all other requests, > it should either use a little more or a little less memory, but at some > point you will get a request that uses more memory than any other > request, and that is how large your pool will be forever, which means > that you will no longer allocate memory. > > If your pools are growing too large, then you most likely need to split > the allocation into multiple sub-pools, so that the memory is returned > and can be used by later operations.
I guess I wasn't clear enough. The point is, even if I split the
allocation into subpools and destroy them, the memory consumption grows
steadily. If you run my test program, you'll see how its process size
grows monotonically. If the pool policy is to cache all allocated memory
chunks and reuse them, it should stabilize at some point, as you say.
However, it doesn't, and I guess there's some bug or problem in the code.

I know you're all busy, but could you spare a moment to actually
experiment with my test code? You just need to save it and run

  gcc `apr-config --cflags` `apr-config --libs` test.c `apr-config --ldflags` -lapr

to compile it, then run

  ./a.out DIRNAME

and watch how the process size grows with top or whatever tool you like.
The best place to see the symptom is a directory with many shallow
subdirectories; the APR source directory is probably fine for observing it.

-- 
Yoshiki Hayashi
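[The original test.c is not included in this message. The following is only
a rough sketch of the pattern being described, under the assumption that the
test does a recursive directory walk where each directory gets its own
subpool that is destroyed on the way back up; names such as walk_dir are
illustrative, not from the actual test program.]

/* Hypothetical reconstruction of the described test: walk DIRNAME,
 * giving each directory its own subpool and destroying it when the
 * directory has been processed.  If the allocator reused freed chunks,
 * the process size should level off instead of growing monotonically. */
#include <stdio.h>
#include <string.h>
#include <apr_general.h>
#include <apr_pools.h>
#include <apr_file_info.h>
#include <apr_strings.h>

static void walk_dir(const char *path, apr_pool_t *parent)
{
    apr_pool_t *subpool;
    apr_dir_t *dir;
    apr_finfo_t finfo;

    /* One subpool per directory; destroyed before returning. */
    apr_pool_create(&subpool, parent);

    if (apr_dir_open(&dir, path, subpool) == APR_SUCCESS) {
        while (apr_dir_read(&finfo,
                            APR_FINFO_DIRENT | APR_FINFO_TYPE | APR_FINFO_NAME,
                            dir) == APR_SUCCESS) {
            if (finfo.filetype == APR_DIR
                && strcmp(finfo.name, ".") != 0
                && strcmp(finfo.name, "..") != 0) {
                char *child = apr_pstrcat(subpool, path, "/", finfo.name, NULL);
                walk_dir(child, subpool);
            }
        }
        apr_dir_close(dir);
    }

    apr_pool_destroy(subpool);
}

int main(int argc, char **argv)
{
    apr_pool_t *pool;

    if (argc < 2) {
        fprintf(stderr, "usage: %s DIRNAME\n", argv[0]);
        return 1;
    }

    apr_initialize();
    apr_pool_create(&pool, NULL);

    /* Repeat the walk indefinitely so growth (or stabilization) can be
     * watched with top; interrupt with Ctrl-C.  This loop structure is
     * an assumption, not taken from the original test.c. */
    for (;;)
        walk_dir(argv[1], pool);

    return 0; /* not reached */
}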
