You could use Apache2::SizeLimit ("because size does matter"), which evaluates the size of Apache httpd processes when they complete HTTP requests and kills those that have grown too large. (Note that Apache2::SizeLimit can only be used with non-threaded MPMs, such as prefork.) Since it operates at the end of a request, SizeLimit has the advantage that it doesn't interrupt request processing, and the disadvantage that it won't stop a process from becoming oversized while it is handling a request. To reduce the overhead of Apache2::SizeLimit, it can be configured to check the size only intermittently by setting the parameter CHECK_EVERY_N_REQUESTS. These parameters can be configured in a <Perl> section in httpd.conf, or in a Perl start-up file.
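For example, a minimal httpd.conf sketch (the limit and interval below are illustrative values, not recommendations; Apache2::SizeLimit sizes are in KB):

    <Perl>
    use Apache2::SizeLimit;

    # Sizes are in KB; kill a child that exceeds ~300 MB after a request.
    $Apache2::SizeLimit::MAX_PROCESS_SIZE       = 300_000;

    # Only check the size on every 5th request to reduce overhead.
    $Apache2::SizeLimit::CHECK_EVERY_N_REQUESTS = 5;
    </Perl>

    PerlCleanupHandler Apache2::SizeLimit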

That way, if your script allocates too much memory, the process will be killed after it finishes handling the request. The MPM will start another process if necessary.

BR
A

On Mar 16, 2010, at 9:30 AM, William T wrote:

On Mon, Mar 15, 2010 at 11:26 PM, Pavel Georgiev <pa...@3tera.com> wrote:
I have a Perl script running under mod_perl that needs to write a large amount of data to the client, possibly over a long period. The behavior I observe is that once I print and flush something, the buffer memory is not reclaimed even though I rflush (I know this memory can't be reclaimed by the OS).

Is that how mod_perl operates, and is there a way I can force it to periodically free the buffer memory, so that I can reuse it for new buffers instead of taking more from the OS?

That is how Perl operates. mod_perl is just Perl embedded in the Apache process.
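A tiny sketch of why this happens (the exact behavior depends on your malloc, so treat this as an illustration):

    # Perl keeps freed memory in its own allocator for reuse;
    # it does not normally hand it back to the OS.
    my $big = 'x' x (100 * 1024 * 1024);  # process grows by ~100 MB
    undef $big;                           # Perl can now reuse this memory,
                                          # but the process size stays large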

You have a few options:
 * Buy more memory. :)
 * Delegate resource-intensive work to a different process (I would NOT suggest forking a child in Apache).
 * Tie the buffer to a file on disk, or a database object, that can be explicitly reclaimed.
 * Create a buffer object of a fixed size and loop (see the sketch after this list).
 * Use compression on the data stream that you read into a buffer.
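For the fixed-size buffer option, a minimal mod_perl 2 sketch (stream_file, the file-backed source, and the 64 KB chunk size are all assumptions for illustration):

    use strict;
    use warnings;

    # Hypothetical helper: stream a file in fixed-size chunks so Perl
    # reuses one scalar as the buffer instead of accumulating the whole
    # response in memory.
    sub stream_file {
        my ($r, $path) = @_;          # $r is the Apache2::RequestRec object
        open my $fh, '<:raw', $path or die "open $path: $!";
        my $buf;
        while (read($fh, $buf, 64 * 1024)) {  # read() reuses $buf's storage
            $r->print($buf);
            $r->rflush;               # send this chunk to the client now
        }
        close $fh;
        return;
    }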

You could also architect your system to mitigate resource usage if serving large data is not a common operation:
 * Proxy those requests to a different server that is optimized to handle large responses (see the config sketch after this list).
 * Handle the large data transfers with plain CGI rather than mod_perl.
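For the proxy option, an httpd.conf sketch (streamer.internal and the /bigdata/ path are hypothetical; requires mod_proxy and mod_proxy_http):

    # Route only the heavy download URLs to a dedicated backend.
    ProxyPass        /bigdata/ http://streamer.internal:8081/bigdata/
    ProxyPassReverse /bigdata/ http://streamer.internal:8081/bigdata/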

I'm sure there are other options as well.

-wjt


Arthur P. Goldberg, PhD

Research Scientist in Bioinformatics
Plant Systems Biology Laboratory
www.virtualplant.org

Visiting Academic
Computer Science Department
Courant Institute of Mathematical Sciences
www.cs.nyu.edu/artg

a...@cs.nyu.edu
New York University
212 995-4918
Coruzzi Lab
8th Floor Silver Building
1009 Silver Center
100 Washington Sq East
New York NY 10003-6688


