On 10/16/07, Mark Maunder <[EMAIL PROTECTED]> wrote:
> My mod_perl app works with some fairly
> large data structures and AFAIK perl doesn't like to free memory back
> to the OS once it's allocated it, so the processes tend to grow for
> the first few hours of the server being up, and then they plateau and
> grow about 1 meg per day (a slow leak I think).

Were you sharing these structures between threads explicitly?  If not,
they should not be any bigger with processes.

> I brought up my server with prefork and only 150 children.

Why so many children?  Most busy mod_perl servers run more like 20-50
processes, with a separate front-end proxy server.  I suspect you
didn't have anywhere near that many concurrent requests.  If you're only
serving 40 reqs/sec, you probably don't need more than 20 or so
processes.
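
For illustration, a prefork section sized along those lines might look
roughly like this (the numbers are guesses for a ~40 req/sec backend
behind a proxy, not tuned values for your load):

  <IfModule prefork.c>
      StartServers         10
      MinSpareServers       5
      MaxSpareServers      10
      # Hard cap on mod_perl processes; recycling children
      # helps contain slow per-process growth.
      MaxClients           20
      MaxRequestsPerChild 500
  </IfModule>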

> I'm back on worker and I have a full 250 threads with much lower memory usage.

But how many active perl interpreters do you have?  I'm guessing a lot
fewer than 250.  You can't run 250 perl interpreters in 2GB of memory.
What did you set PerlInterpStart and PerlInterpMax to?  By the way,
you probably should use PerlInterpMaxRequests rather than
MaxRequestsPerChild when running in worker mode.
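
Something like this (illustrative values, assuming mod_perl 2 on the
worker MPM) keeps the interpreter pool far smaller than the thread
count:

  <IfModule worker.c>
      StartServers          2
      MaxClients          250
      ThreadsPerChild      25
  </IfModule>

  # The interpreter pool is per child process; threads serving
  # requests that never touch perl don't grab an interpreter at all.
  PerlInterpStart        3
  PerlInterpMax         10
  PerlInterpMaxRequests 2000

Each child clones PerlInterpStart interpreters at startup and never
grows past PerlInterpMax, so the expensive perl memory is bounded no
matter how many threads you run.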

> When i was running with prefork, each process was 29 Megs and there
> were 150 of them. That's 4.3 Gigs and my box only has 2 Gigs so
> apparently copy-on-write was in effect and some of that was shared.

The simplest way to compare memory usage is usually to check the
output of /usr/bin/free and see how much real memory is actually in
use.
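
For instance (sample output from running free -m on a Linux box; your
numbers will differ):

  $ free -m
               total       used       free     shared    buffers     cached
  Mem:          2026       1980         46          0        120        600
  -/+ buffers/cache:       1260        766
  Swap:         1023          5       1018

The "-/+ buffers/cache" line is the one to watch: it shows what your
processes actually hold once the kernel's reclaimable cache is
subtracted out.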

- Perrin
