> Stas Bekman wrote:
>
> > Moreover the memory doesn't get unshared when the parent pages are
> > paged out; it's the reporting tools that report the wrong
> > information and of course mislead the size-limiting modules,
> > which start killing the processes.
>
> Apache::SizeLimit just reads /proc on Linux. Is that going to report a
> shared page as an unshared page if it has been swapped out?
>
> Of course you can avoid these issues if you tune your machine not to
> swap. The trick is, you really have to tune it for the worst case, i.e.
> look at the memory usage while beating it to a pulp with httperf or
> http_load and tune for that. That will result in MaxClients and memory
> limit settings that underutilize the machine when things aren't so busy.
> At one point I was thinking of trying to dynamically adjust memory
> limits to allow processes to get much bigger when things are slow on the
> machine (giving better performance for the people who are on at that
> time), but I never thought of a good way to do it.
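For readers following along: the "/proc" reading mentioned above boils down to parsing `/proc/<pid>/statm`, whose fields (per proc(5)) are page counts for size, resident, shared, text, lib, data, and dt. Below is a minimal sketch of that arithmetic in Python rather than the module's actual Perl; the sample line and the 4 KiB page size are illustrative assumptions, not output from a real process.

```python
# Sketch of the /proc/<pid>/statm arithmetic a size-limiting module
# performs on Linux. All statm fields are counts of pages, per proc(5).
# NOTE: the sample line and 4 KiB page size below are assumptions for
# illustration, not real process data.

PAGE_SIZE_KB = 4  # common x86 default; real code should ask the kernel


def statm_to_kb(statm_line: str) -> dict:
    """Convert a statm line into sizes in KiB.

    proc(5) documents the fields as:
        size resident shared text lib data dt
    'shared' counts resident pages shared with other processes (e.g.
    copy-on-write pages still shared with the parent after fork).
    """
    fields = statm_line.split()
    size, resident, shared = (int(f) for f in fields[:3])
    return {
        "size_kb": size * PAGE_SIZE_KB,
        "resident_kb": resident * PAGE_SIZE_KB,
        "shared_kb": shared * PAGE_SIZE_KB,
        # What a size limiter treats as this child's own cost. If shared
        # pages get swapped out, 'shared' shrinks, so this number appears
        # to grow even though nothing was actually copied -- the
        # misleading report discussed in the thread.
        "unshared_kb": (resident - shared) * PAGE_SIZE_KB,
    }


sample = "5000 4000 3000 500 0 1500 0"  # hypothetical statm line, in pages
print(statm_to_kb(sample))
```

The last comment is the crux of the thread: the kernel didn't unshare anything, but a limit computed from `resident - shared` jumps when shared pages are paged out.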
Ooh... neat idea, but then that leads to a logical set of questions:

Is MaxClients something that can be changed at runtime? If not, would it
be possible to see about patches to set this? :-)

L8r,
Rob

#!/usr/bin/perl -w
use Disclaimer qw/:standard/;
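One way to sketch the dynamic-limit idea from the quoted message: interpolate each child's memory cap between a generous idle value and a conservative busy value as load rises. Everything here is hypothetical — the function name, the limits, and the `busy_load` threshold are made up for illustration; a real policy would pick them from the httperf/http_load tuning described above.

```python
# Hedged sketch of the "dynamic memory limit" idea: let children grow
# larger when the machine is idle, and clamp the limit down as load
# climbs. All numbers below are invented placeholders.

def dynamic_limit_kb(load_avg: float,
                     busy_limit_kb: int = 20_000,
                     idle_limit_kb: int = 60_000,
                     busy_load: float = 8.0) -> int:
    """Linearly interpolate between the idle and busy limits.

    load_avg: the machine's 1-minute load average (e.g. os.getloadavg()).
    At load 0 the limit is idle_limit_kb; at busy_load or above it is
    busy_limit_kb.
    """
    frac = min(max(load_avg / busy_load, 0.0), 1.0)
    return round(idle_limit_kb - frac * (idle_limit_kb - busy_limit_kb))


print(dynamic_limit_kb(0.0))  # idle machine -> 60000
print(dynamic_limit_kb(8.0))  # saturated   -> 20000
```

A supervisor could recompute this periodically and push the new cap to the children; the hard part (as the quoted message notes) is choosing a policy that doesn't thrash when load oscillates.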