At 02:10 AM 10/16/2002, Bojan Smojver wrote:

>Coming back to what William is proposing, the three pool approach... I'm
>guessing that with Response put in-between Connection and Request, the Response will 
>store responses for multiple requests, right?

No, I'm suggesting a single response pool for each request.  As soon as
the request is processed, the request pool disappears (although some
body data may still be sitting in brigades, or set aside within filters).
As soon as the body for -that- request has been completely flushed, that
response pool will disappear.

Since this isn't too clear from the names, perhaps 'connection', 'request'
and an outer 'handler' pool would make things clearer?  In that case, the
handler pool would disappear as soon as the response body had been
constructed, and the 'request' pool would disappear once it was flushed.

> Then, once the core-filter is
>done with it, the Response will be notified, it will finalise the processing
>(e.g. log all the requests) and destroy itself. Sounds good, but maybe we should just 
>keep the requests around until they have been dealt with?

Consider pipelining.  You can't start keeping 25 requests hanging around
for 1 page + 24 images now, can you?  The memory footprint would soar
through the roof.

>BTW, is there a way in 2.0 to force every request to be written to the network
>in order? Maybe I should focus on making an option like that in order to make it
>work in 2.0?

How do you mean?  Everything to the network is already serialized.  What is
out of order today?  The date stamps in the access logs?  That is because it
takes different amounts of time to handle different sorts of requests,
depending on availability of pages in the kernel disk cache, etc.

Bill
