Luca,

thanks for the long mail. I am looking forward to reading the answers to your 
questions, as I have *assumptions* about them but am not certain myself.

> Am 01.02.2016 um 10:17 schrieb Luca Toscano <toscano.l...@gmail.com>:
> 
> Hi Apache Devs!
> 
> I am trying to understand if https://httpd.apache.org/docs/2.4/mod/event.html 
> could use some documentation improvements or if I am the only one not getting 
> the whole picture correctly. 
> 
> I'd like to write a summary of my understanding of the module to get some 
> feedback:
> 
> - mod_event reuses mod_worker's architecture using forking processes that in 
> turn create several threads (controlled using the ThreadsPerChild directive). 
> A special thread for each process is called the listener (like in worker) and 
> it keeps track of the connections/sockets "assigned" to its parent.
> 
> - mod_event's listener thread is smarter than its sibling in mod_worker, 
> since it keeps a list of sockets that are: in keep-alive, only flushing data 
> to the client (after the output chain has finished processing the response), 
> or carrying a complete new request/response to handle. 

My read: the listener tracks sockets not only in keepalive, but also in 
timeout, depending on the connection state.

> - when a socket changes its state, the listener checks for free workers in 
> its thread pool and assigns either some "small" work (like handling 
> keep-alives or flushing data) or a whole new request to handle. 

"changes its state" -> raises an event. If the event needs processing by a 
worker, the listener hands the connection to a free one.
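The dispatch described above can be illustrated with a toy poll loop (Python 
here purely for illustration; none of this is httpd code, and all names are 
mine): a socket raises a readiness event, the poller reports it, and at that 
point a real listener would queue the connection to a worker thread.

```python
# Toy illustration (not httpd code): a poller reports readiness events
# on registered sockets; a real listener would hand the connection to a
# free worker thread at this point.
import selectors
import socket

sel = selectors.DefaultSelector()
a, b = socket.socketpair()
a.setblocking(False)
sel.register(a, selectors.EVENT_READ)

b.sendall(b"hello")                  # the peer writes -> 'a' raises an event

events = sel.select(timeout=1)
for key, mask in events:
    data = key.fileobj.recv(1024)    # here a worker would take over
    print(data)                      # b'hello'
```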

> - mod_ssl and mod_deflate are examples of filters that need to act on the 
> whole response, so a worker gets stuck flushing data to slow clients rather 
> than giving the socket back to the listener earlier and doing different work.

Hmm, I am not sure I understand your point. Every part of the process of 
generating the bytes sent out on the socket is involved here. The crucial 
difference between the worker and the event MPM is:
- worker keeps the response state on the stack
- event keeps the response state on the heap
which means that calls into response processing 
- on worker, may only return when everything is done
- on event, may return whenever a READ/WRITE event or a TIMEOUT/KEEPALIVE is 
needed. 

In that way, writing a response with event is "stutter stepping" it: write 
until EWOULDBLOCK, return, get queued, wait for the event, write until 
EWOULDBLOCK, and so on.

(my understanding)
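That stutter stepping can be sketched like this (a hedged Python toy, not the 
actual mpm_event code): write until the kernel signals EWOULDBLOCK, hand back 
the unsent remainder, and wait for the next WRITE event before continuing.

```python
import socket

def write_some(sock, buf):
    """Write until EWOULDBLOCK/EAGAIN, then hand back the unsent rest;
    the caller would re-arm the socket for a WRITE event and retry later."""
    while buf:
        try:
            sent = sock.send(buf)
            buf = buf[sent:]
        except BlockingIOError:       # EWOULDBLOCK: kernel buffer is full
            break
    return buf

a, b = socket.socketpair()
a.setblocking(False)
payload = b"x" * 10_000_000           # far larger than the socket buffer
rest = write_some(a, payload)
print(0 < len(rest) < len(payload))   # True: progress made, more pending
```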

> - AsyncRequestWorkerFactor is used to regulate the number of requests that a 
> single process/thread block can handle, recalculating the value periodically 
> from the idle threads/workers available. When the workers are maxed out, the 
> keep-alive sockets/connections are closed to free some space.
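On this point: per the mpm_event documentation, the number of concurrent 
connections a process will accept is roughly ThreadsPerChild + 
(AsyncRequestWorkerFactor * number of idle workers), recomputed as workers go 
busy/idle. A hedged arithmetic sketch (the function name is made up):

```python
# Hedged sketch of the documented per-process connection limit for
# mpm_event; the function name is mine, for illustration only.
def max_connections(threads_per_child, async_factor, idle_workers):
    return threads_per_child + async_factor * idle_workers

# With the defaults (ThreadsPerChild 25, AsyncRequestWorkerFactor 2)
# and all workers idle:
print(max_connections(25, 2, 25))    # 75
# With no idle workers, the limit falls back to ThreadsPerChild:
print(max_connections(25, 2, 0))     # 25
```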
> 
> 
> If my understanding is correct (I doubt it but let's assume this) then I have 
> the following questions:
> 
> - Would it be worth adding more info to the "how it works" section? A first 
> read may lead the user to think that the listener thread is the one doing 
> the actual work, rather than the workers, leaving them a bit puzzled when 
> reading the AsyncRequestWorkerFactor section.
+1

> - Would it make sense to expand the summary to add more info about the fact 
> that most of worker's directives need to be used? The "how it works" section 
> dominates, and a reader is more likely to read it first and skip the 
> summary, in my opinion. 
+1

> - An interesting question was filed a while ago in the comments: "This 
> documentation does not make it clear whether the event MPM at least allows 
> for keepalives on SSL connections to conserve a thread.  Does it require the 
> use of a thread while transmitting only, or also while the kept-alive SSL 
> connection is idle?" - If my understanding is correct, the answer should be 
> that a slow client can still block a worker for a keep alive response due to 
> mod_ssl requirements, but that idle times are managed by the listener. 
There is some ongoing work in trunk in this regard...

> - The summary talks about "supporting threads" and, given that 
> AsyncRequestWorkerFactor is added to ThreadsPerChild, it raises the question 
> of how many of them are created at startup. Conversely, is this a way to 
> say: the number of threads for each process is ThreadsPerChild, but since 
> they now also perform small bursts of work (like keep-alive housekeeping and 
> flushing data to clients), the total number of connections allowed should be 
> higher to make room for all these connection/socket states?
> 
> Apologies for the loooong email, hope that what I've written makes sense! If 
> not, I'll start reading again! :)
> 
> Luca
