On Fri, Apr 26, 2002 at 11:32:19AM -0400, Paul J. Reder wrote:
> In my tests, this patch allows existing worker threads to continue
> processing requests while the new threads are started.
> 
> In the previous code the server would pause while new threads were
> being created. The new threads started accepting work immediately,
> causing the existing threads to starve even though there is only a
> small (but growing) number of new threads.
> 
> This patch allows the server to maintain a higher level of responsiveness
> during the ramp up time.

I don't quite understand what you are saying here. AIUI the worker MPM
creates all threads as soon as it is started, and as an optimization it
creates the listener thread as soon as there is at least one worker
thread available. By delaying the startup of the listener thread we're
merely increasing the amount of time it takes to start a new child and
start accepting connections. Please correct me if I'm missing something.

The reason I think you were seeing a pause while new threads were being
created, as Jeff points out, is that our listener thread was able to
accept far more connections than we had (or would soon have) available
workers. In the worst case, since we create the listener as soon as
there is 1 worker, it is possible to have a queue filled with
ap_threads_per_child accept()ed connections and only 1 worker. As soon
as the next worker is created the listener is able to accept() yet
another connection and stuff it into the queue.
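For illustration, here is a minimal sketch of one way to gate the
listener so it never accept()s more connections than there are idle
workers. This is not the actual worker MPM code; the names
idle_workers, worker_becomes_idle, and listener_wait_for_idler are
made up for the example:

/* Sketch only: bound listener accepts by the number of idle workers. */
#include <pthread.h>

static int idle_workers = 0;            /* workers blocked on the queue */
static pthread_mutex_t idle_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  idle_cond = PTHREAD_COND_INITIALIZER;

/* Called by a worker just before it blocks waiting for a connection. */
static void worker_becomes_idle(void)
{
    pthread_mutex_lock(&idle_lock);
    idle_workers++;
    pthread_cond_signal(&idle_cond);
    pthread_mutex_unlock(&idle_lock);
}

/* Called by the listener before each accept(): block until at least
 * one worker is idle, then reserve it.  This caps the queue depth at
 * the number of idle workers instead of ap_threads_per_child. */
static void listener_wait_for_idler(void)
{
    pthread_mutex_lock(&idle_lock);
    while (idle_workers == 0) {
        pthread_cond_wait(&idle_cond, &idle_lock);
    }
    idle_workers--;
    pthread_mutex_unlock(&idle_lock);
}

With something like this in place, the queue never gets ahead of the
workers during startup, so the existing workers keep getting fed.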

And I think I've just realized something else. Since the scoreboard
is not updated until a worker thread pulls the connection off of the
queue, the parent is not going to create another child in accordance
with how many connections are accept()ed. Because each child's queue
can hold up to ThreadsPerChild accept()ed connections on top of the
ThreadsPerChild connections its workers are already handling, we are
able to accept up to 2*ThreadsPerChild*number_of_children connections
while the parent will only count us as having 1/2 that amount of
concurrency, and therefore will not match the demand. This is another
bug in the worker MPM that would be fixed if we prevented the listener
from accepting more connections than workers.
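
To put rough (purely illustrative) numbers on it, say ThreadsPerChild
is 25 and there are 4 children: the listeners could accept up to
2 * 25 * 4 = 200 connections, while the scoreboard would only show the
100 connections that workers have actually pulled off the queues, so
the parent sees half the real demand and spawns no new children.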

-aaron
