Paul Querna wrote:

> This is traditionally called the 'Thundering Herd' problem.

> When you have N worker processes, all N of them are awoken for an accept()'able new client. Unlike the prefork MPM, N is usually a smaller number in Event, because you don't need that many event threads relative to the number of worker threads.

I'll buy that. You only need two processes to deal with third-party module reliability worries, so it's not much of a herd. If you have reliable code and we scale well enough to handle the traffic, nothing stops you from using one process.

> I also reason that on a busy server, the place you most likely want to put the Event MPM, you will have many more non-listener sockets to deal with, and those will fire more often than new clients are connecting,

Sure, assuming that there are typically multiple HTTP requests per connection, or big-file/slow-network combinations to trigger Brian's async write logic.

> meaning you will already be coming out of the _poll() with 'real' events. So the 'cost' of being put into the run queue isn't a 'waste', as it is in the prefork MPM, where you would just go back into _poll() without having done anything.

I'll buy that too. Once the poll pops, there is a loop to consume all of the poll events in one go. Judging by the comments in worker.c::listener_thread, I think Manoj meant to do that in worker's ancestor but never got a round tuit.

Greg
