Re: http/1.1 pipelined request processing order

2002-12-02 Thread Bill Barker

Michael Yates [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]...
 Hi all,

 From some testing I have done it appears Tomcat ensures that pipelined
 requests (HTTP/1.1) are handled in order by only handing off request #2
 after request #1 has completely finished processing. This adds quite a
 delay in processing a sequence of lengthy requests.

 Say 2 requests arrive in an HTTP/1.1 pipeline very close together, and each
 request takes 10 seconds to process.
 The behavior I have seen is that:
 * Request 1 is handed to the servlet and allowed to process
 * Response 1 is written out on the wire
 * Request 2 is handed to the servlet to process
 * Response 2 is written out on the wire.

 This takes a total of just over 20 seconds.

 However, if the client had NOT used pipelining (which should be more
 efficient) and had opened two connections to the server, then request 1 and
 request 2 would both have been processed in a total (start-to-end) time of
 just over 10 seconds - although using more sockets and more packets.

 Is there a way Tomcat can be configured so that requests are handed to the
 servlet for processing as soon as they arrive (obviously on separate
 threads)?

 Has anyone written any custom code to ensure the responses going back out
 on the wire are in the same order as the requests coming in (as is required
 by HTTP/1.1)?

 If this functionality isn't currently implemented in Tomcat 4, where would
 be the best place (in the code) to go about adding this for our custom
 solution?

It isn't currently implemented in Tomcat 4.  The place where this is handled
is j-t-c/http11/src/java/org/apache/coyote/http11/Http11Processor.java.
However, I have serious doubts that you can make it work better (at least
without knowing many details about your webapp).  Your traffic cop will
end up blocking most of your threads until it is their turn to write.  It
would also have to handle error conditions that would force the rest of the
requests to stop processing.  For this reason, Apache/httpd works much the
same way that Tomcat does.
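
To make the cost concrete, here is a minimal sketch of such a traffic cop
(nothing from the Coyote code - the class name and methods are made up for
illustration): each request takes a ticket as it is read off the connection,
worker threads process in parallel, and each worker blocks until every
earlier response has gone out on the wire.

import java.io.IOException;
import java.io.OutputStream;

/*
 * Hypothetical per-connection sequencer, for illustration only.  Requests
 * take a ticket in arrival order; responses may only be written in ticket
 * order, so later workers park until earlier ones have written.
 */
public class ResponseSequencer {

    private long nextTicket = 0;    // next ticket handed to an arriving request
    private long nextToWrite = 0;   // ticket currently allowed to write

    /** Called on the connection's read side as each pipelined request arrives. */
    public synchronized long takeTicket() {
        return nextTicket++;
    }

    /** Blocks the worker until it is this ticket's turn, then writes the response. */
    public void writeInOrder(long ticket, OutputStream out, byte[] response)
            throws IOException, InterruptedException {
        synchronized (this) {
            while (ticket != nextToWrite) {
                wait();             // the "blocked until it is their turn" cost
            }
        }
        // Only the thread whose ticket equals nextToWrite gets here, so the
        // actual write is effectively single-threaded and order is preserved.
        out.write(response);
        out.flush();
        synchronized (this) {
            nextToWrite++;
            notifyAll();            // wake the worker holding the next ticket
        }
    }
}

Note that if an earlier request errors out and never writes, every worker
queued behind it hangs - exactly the error handling mentioned above.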

That being said, if you do manage to solve this (and are willing to donate
the code to Apache), I'd be very interested in seeing it.



 Regards,
 Michael










http/1.1 pipelined request processing order

2002-12-01 Thread Michael Yates
Hi all,

From some testing I have done it appears Tomcat ensures that pipelined
requests (HTTP/1.1) are handled in order by only handing off request #2
after request #1 has completely finished processing. This adds quite a delay
in processing a sequence of lengthy requests.

Say 2 requests arrive in an HTTP/1.1 pipeline very close together, and each
request takes 10 seconds to process.
The behavior I have seen is that:
* Request 1 is handed to the servlet and allowed to process 
* Response 1 is written out on the wire
* Request 2 is handed to the servlet to process
* Response 2 is written out on the wire.

This takes a total of just over 20 seconds. 

However, if the client had NOT used pipelining (which should be more
efficient) and had opened two connections to the server, then request 1 and
request 2 would both have been processed in a total (start-to-end) time of
just over 10 seconds - although using more sockets and more packets.
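
(As a rough illustration of this kind of test - the host, port and request
paths below are placeholders, not the actual setup - a client pipelines two
requests simply by writing them back-to-back on one socket before reading
anything:)

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

// Illustrative pipelining test: both requests hit the wire before any
// response is read, so the server sees them pipelined on a single connection.
public class PipelineTest {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("localhost", 8080);
        try {
            OutputStream out = socket.getOutputStream();
            String requests =
                    "GET /slow/one HTTP/1.1\r\nHost: localhost\r\n\r\n"
                  + "GET /slow/two HTTP/1.1\r\nHost: localhost\r\n"
                  + "Connection: close\r\n\r\n";  // lets the read loop finish
            out.write(requests.getBytes("ISO-8859-1"));
            out.flush();

            // Responses come back strictly in request order; with the current
            // behavior response 2 only starts after response 1 has completed.
            InputStream in = socket.getInputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                System.out.write(buf, 0, n);
            }
            System.out.flush();
        } finally {
            socket.close();
        }
    }
}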

Is there a way Tomcat can be configured so that requests are handed to the
servlet for processing as soon as they arrive (obviously on separate
threads)?

Has anyone written any custom code to ensure the responses going back out on
the wire are in the same order as the requests coming in (as is required by
HTTP/1.1)?

If this functionality isn't currently implemented in Tomcat 4, where would be
the best place (in the code) to go about adding this for our custom
solution?

Regards,
Michael