Perrin Harkins wrote:
> Stas Bekman wrote:
> 
>> Moreover, the memory doesn't get unshared when the parent's pages are
>> paged out; it's the reporting tools that report the wrong information
>> and of course mislead the size-limiting modules, which then start
>> killing the processes.
> 
> 
> Apache::SizeLimit just reads /proc on Linux.  Is that going to report a 
> shared page as an unshared page if it has been swapped out?

That's what people report. Try the code here:
http://marc.theaimsgroup.com/?l=apache-modperl&m=101667859909389&w=2
to reproduce the phenomenon in a few easy steps.
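As far as I understand it, the problem is visible right in /proc: the
"shared" field of /proc/<pid>/statm counts only *resident* shared pages,
so pages that get swapped out silently drop out of the shared figure.
Here is a minimal Python sketch (not the actual Apache::SizeLimit code,
which is Perl) of reading those fields on Linux:

```python
# Sketch: read size/resident/shared for a process from /proc/<pid>/statm,
# the same Linux interface Apache::SizeLimit consults. The first three
# fields are, in pages: total program size, resident set size, and
# resident shared pages. Note that "shared" excludes shared pages that
# have been swapped out, which is exactly the misreporting discussed above.
import os

def statm(pid="self"):
    with open(f"/proc/{pid}/statm") as fh:
        size, resident, shared = map(int, fh.read().split()[:3])
    page_kb = os.sysconf("SC_PAGE_SIZE") // 1024  # convert pages to KB
    return {"size_kb": size * page_kb,
            "resident_kb": resident * page_kb,
            "shared_kb": shared * page_kb}

print(statm())
```

A size-limiting module computing "unshared = resident - shared" from
these numbers will therefore see unshared memory jump under swap
pressure even though nothing was actually copied-on-write.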

> Of course you can avoid these issues if you tune your machine not to 
> swap.  The trick is, you really have to tune it for the worst case, i.e. 
> look at the memory usage while beating it to a pulp with httperf or 
> http_load and tune for that.  That will result in MaxClients and memory 
> limit settings that underutilize the machine when things aren't so busy. 
>  At one point I was thinking of trying to dynamically adjust memory 
> limits to allow processes to get much bigger when things are slow on the 
> machine (giving better performance for the people who are on at that 
> time), but I never thought of a good way to do it.

This can be done in the following way: move the variable that controls
the limit into shared memory. Then either run a special monitor process
that adjusts this variable, or let each child process do that in its
cleanup stage.

To dynamically change MaxClients, one needs to send the server a HUP
signal (i.e. restart it).
__________________________________________________________________
Stas Bekman            JAm_pH ------> Just Another mod_perl Hacker
http://stason.org/     mod_perl Guide ---> http://perl.apache.org
mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com
