On Thu, 11 Jan 2001 [EMAIL PROTECTED] wrote:
> > I think you're making this much harder than it needs to be. It's this
> > simple:
> > MaxClients 30
> > PerlFixupHandler Apache::SizeLimit
> > <Perl>
> > use Apache::SizeLimit;
> > # sizes are in KB
> > $Apache::SizeLimit::MAX_PROCESS_SIZE = 30000;
> > $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 5;
> > </Perl>
> This is just like telling an ISP that they can only have 60-ish dial-in
> lines for modems because that could theoretically fill their T1, even
> though they would probably hardly ever hit 50% if they only had 60 modems
> on a T1.
>
> The idea that any process going over 30 megs should be killed is probably
> safe. The solution, though, is only really valid if our normal process is
> 29 megs. Otherwise we are limiting each system to something lower than it
> could produce.
It's a compromise. Running a few fewer processes than you could is better
than risking swapping, because your service is basically gone once you hit
swap. (Note that this is different from your ISP example, because the ISP
could still provide some service, albeit with reduced bandwidth, after
maxing out its T1.) The numbers I put in there were random, but on a real
system you would adjust them according to your expected process size.
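To make that concrete (the numbers here are made up for illustration, not a
recommendation): if you can spare roughly 512MB of real memory for Apache
and your children level off around 25MB apiece, then 512 / 25 gives you
about 20 children, so you'd set something like

    # hypothetical: 512MB budget / ~25MB per child => ~20 children
    MaxClients 20
    PerlFixupHandler Apache::SizeLimit
    <Perl>
        use Apache::SizeLimit;
        # size is in KB; set it well above the normal ~25MB footprint
        # so it only catches genuine runaways
        $Apache::SizeLimit::MAX_PROCESS_SIZE       = 30000;
        $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 5;
    </Perl>

with SizeLimit acting as a safety net, not as the primary control.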
Even a carefully coded system will leak over time, and I think killing off
children after 1MB of growth would probably be too quick on the draw.
Child processes have a significant startup cost, and there's a sweet spot
between how big you let the processes get and how many you run, which you
have to find by testing with a load tool. It's different for different
applications.
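For example, one way to amortize the startup cost while still keeping leaks
in check (again, the numbers are only a sketch to try under load, not a
recipe):

    # recycle each child after a fixed number of requests so slow
    # leaks never accumulate for long, and only check the size every
    # few requests so the check itself stays cheap
    MaxRequestsPerChild 500
    PerlFixupHandler Apache::SizeLimit
    <Perl>
        use Apache::SizeLimit;
        $Apache::SizeLimit::MAX_PROCESS_SIZE       = 30000;  # KB, backstop
        $Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 10;
    </Perl>

Then run your load tool against a few different MaxClients and size-limit
combinations and watch free memory and requests per second to see where the
knee is.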
It would be nice if Apache could dynamically decide how many processes to
run at any given moment based on available memory and how busy the server
is, but in practice the best thing I've found is to tune it for the worst
case scenario. If you can survive that, the rest is cake. Then you can
get on to really hard things, like scaling your database.
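Apache 1.3 will adjust the number of idle children on the fly (via
MinSpareServers/MaxSpareServers), but it knows nothing about memory and
only works within the MaxClients ceiling, so that ceiling is still the
thing you size for the worst case. Roughly this shape, with the real
numbers coming out of your own testing:

    # the ceiling is sized for worst-case memory use; the spare-server
    # settings just control how quickly Apache ramps children up and
    # down between idle and busy periods
    MaxClients      30
    StartServers     5
    MinSpareServers  5
    MaxSpareServers 15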
- Perrin