Re: Detecting terminal HTTP chunk

2007-03-05 Thread Pid

Peter Kennard wrote:

At 23:07 3/4/2007, you wrote:
But since you can't send the response without finishing the reading of 
the input stream - the entire question doesn't seem to make sense.


If the input pipe is slow (i.e. a cellphone with a slow pipe) and you are 
sending a transaction where the first part of it initiates a big 
operation (like a database lookup), the lookup can be happening while the 
rest of the data is still being read in.  I.e. it is useful to be able to 
read input in small chunks as it comes in.  And the client can be tuned 
to chunk appropriately for the transaction(s).
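
A minimal sketch of that idea, assuming nothing beyond a plain blocking servlet InputStream (the helper names parseLookupKey, expensiveLookup and consume are invented for illustration): read the leading bytes of the body, hand the expensive lookup to a background thread, and keep draining the rest of the upload while it runs.

import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class EarlyStartServlet extends HttpServlet {
    private final ExecutorService pool = Executors.newCachedThreadPool();

    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        InputStream in = req.getInputStream();

        // Read just the leading part of the body; the container has already
        // removed any chunked transfer-coding.
        byte[] head = new byte[512];
        int n = in.read(head);                         // blocks until some data has arrived

        // Kick off the big operation (e.g. a database lookup) right away.
        final String key = parseLookupKey(head, n);    // hypothetical
        Future<String> lookup = pool.submit(new Callable<String>() {
            public String call() throws Exception {
                return expensiveLookup(key);           // hypothetical
            }
        });

        // Keep reading the rest of the body while the lookup runs.
        byte[] buf = new byte[4096];
        while ((n = in.read(buf)) != -1) {
            consume(buf, n);                           // hypothetical
        }

        try {
            resp.getWriter().println(lookup.get());
        } catch (Exception e) {
            throw new ServletException(e);
        }
    }

    // Placeholder helpers so the sketch is self-contained.
    private String parseLookupKey(byte[] b, int len) { return new String(b, 0, Math.max(len, 0)); }
    private String expensiveLookup(String key)       { return "result for " + key; }
    private void consume(byte[] b, int len)          { /* process this piece */ }
}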


It's not really useful for Tomcat though, given that the server is 
designed to be a Servlet Container rather than a multipurpose network 
application.


Tomcat mainly handles two cases: 1) read headers, THEN send response, 
e.g. GET; 2) read headers and process body, THEN send response, e.g. POST.


available() may work for this depending on buffering scheme of tomcat's 
protocol handler.


On writing the reply, if you call flushBuffer() it will dispatch 
whatever is in the buffer (HTTP chunks as IP packets) to the client even 
if input reading is incomplete.  Doing so, if you can, will reduce round 
trip latency and the time your socket is consumed.  A gross example 
would be a transaction to process a large file and return it to the 
client.  If the processing was serial then the client could be receiving 
the return file even before it had finished sending the source file.
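
A minimal sketch of that pattern, assuming only the standard servlet API: each piece of the body is written back (a real handler would transform it) and flushBuffer() pushes the committed output toward the client while the upload is still in progress. Whether the bytes actually leave early depends on the connector's buffering and any proxies in between.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class StreamingEchoServlet extends HttpServlet {
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("application/octet-stream");
        InputStream in = req.getInputStream();
        OutputStream out = resp.getOutputStream();

        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);   // a real handler would transform the data here
            resp.flushBuffer();     // send whatever is buffered to the client now
        }
    }
}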


It seems the servlet API was not upgraded to handle incremental chunks 
in a flexible, general manner when chunking was added in HTTP 1.1.  This is 
irrespective of how chunks may be juggled by any proxy or other front 
end.  I am simply dealing with how you *can* handle them on the receiving 
end.


Why would the servlet API need to do that, when chunking is something 
that happens during the response rather than the request?


Your analysis is from the point of view of someone who's (if you'll 
forgive the analogy) trying to force a car to work like a canoe.



Given that, I'd suggest that if your app client is sending a large 
amount of data that can be processed in discrete chunks, you might as 
well split it (the request) into a series of separate smaller requests.


If you've got control of the client you could set an X-Header that 
indicates its position in the series, and another that groups them.


At least then you gain some robustness and your server can indicate to 
the client if it's missing one of the series.
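
A minimal sketch of the server side of that scheme; the header names (X-Series-Id, X-Series-Index, X-Series-Count) are invented here, and any agreed-upon names would do:

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SeriesServlet extends HttpServlet {
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String seriesId = req.getHeader("X-Series-Id");     // groups the pieces
        String index    = req.getHeader("X-Series-Index");  // position within the series
        String count    = req.getHeader("X-Series-Count");  // expected total

        if (seriesId == null || index == null) {
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "missing series headers");
            return;
        }

        // Store or process this piece keyed by (seriesId, index); when a gap
        // is detected the server can tell the client which index to resend.
        resp.getWriter().println("received " + seriesId + " #" + index + " of " + count);
    }
}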



Having said all that, though, I'd have started from scratch or built a 
web service as I'm not sure what I'd really be gaining by using Tomcat.



p





PK







Re: Detecting terminal HTTP chunk

2007-03-05 Thread Peter Kennard

At 04:53 3/5/2007, you wrote:

Peter Kennard wrote:

At 23:07 3/4/2007, you wrote:
But since you can't send the response without finishing the 
reading of the input stream - the entire question doesn't seem to make sense.
If the input pipe is slow (i.e. a cellphone with a slow pipe) and you 
are sending a transaction where the first part of it initiates a 
big operation (like a database lookup), the lookup can be happening 
while the rest of the data is still being read in.  I.e. it is 
useful to be able to read input in small chunks as it comes in.  And 
the client can be tuned to chunk appropriately for the transaction(s).



It's not really useful for Tomcat though, given that the server is 
designed to be a Servlet Container rather than a multipurpose 
network application.


Yes - it's a matter of what is *considered* useful in the context.
HTTP is being forced to evolve.

Tomcat mainly handles two cases: 1) read headers, THEN send 
response, e.g. GET; 2) read headers and process body, THEN send 
response, e.g. POST.


I will let it lie after this rant - I have enough info for now, and I 
thank the list participants for their answers :)


Right - Tomcat was initially wholly designed around the HTTP 1.0 single-IP-connection 
hit as a paradigm and disavowed the feature of having a 
bidirectional connection.  In essence HTTP made UDP out of TCP, and 
in the process may have inadvertently scuttled IPV2.  This was a 
major flaw in the initial HTTP conception, and as the IETF and W3C 
move forward things like Transfer-Encoding: chunked have been 
added, which along with the evolution of HTTP proxies and load 
balancers has led to the absurdity of what one might call HTTP-URL-addressed 
high-speed routers as a major part of the infrastructure.



available() may work for this depending on buffering scheme of tomcat's protocol handler.

...
how chunks may be juggled by any proxy or other front end. I am 
simply dealing with how you *can* handle them on the receiving end.


Why would the servlet API need to do that, when chunking is 
something that happens during the response rather than the request?


It *can* happen both ways as defined by the W3C, just current browsers 
don't support it.  Tomcat HAS [I might say partial] support for it in 
the HTTP 1.1 connector now, because it was required to build 
practical proxy routers and load balancers.  There are resources I 
consider a valuable part of the Tomcat infrastructure that I want to 
leverage for this project.


Your analysis is from the point of view of someone who's (if you'll 
forgive the analogy) trying to force a car to work like a canoe.


In some sense yes, but then almost all uses of HTTP are doing this to 
one extent or another, because initially HTTP was so shallowly 
conceived.  I would say I'm trying to make a Mississippi riverboat 
into a hydrofoil and keep the ballroom :)


Given that, I'd suggest that if your app client is sending a large 
amount of data that can be processed in discrete chunks, you might 
as well split it (the request) into a series of separate smaller requests.


Actually in this case it is a small amount of data over a slow pipe, 
but in a lot of small pieces, and in this case a hit becomes high 
overhead.  The HTTP hacked-on solution to a similar problem, the 
many-hit hyperlinked page, is to use keep-alive and to duplicate 
really fat HTTP headers for hundreds of multiplexed hits coming 
back from complexly linked-up pages of frames and tables, and 
have this integrated into the socket handling of both browsers and servers.


If you've got control of the client you could set an X-Header that 
indicates its position in the series, and another that groups them.


Understood; yes, a session ID to maintain state if that is needed.

At least then you gain some robustness and your server can indicate 
to the client if it's missing one of the series.


Yes, I understand these issues *all too* well, which is why I want to do this ;^

Having said all that, though, I'd have started from scratch or built 
a web service as I'm not sure what I'd really be gaining by using Tomcat.


Actually a whole lot, and the people involved should be proud it is 
so useful, robust, and documented.  Even if I have to write a custom 
protocol handler for the front end for some ports.


Tomcat has already built a very good, administrable server-side 
application structure, packaging scheme, remote deployment system, 
developer administration and access scheme, application manager web UI, 
and a standards-based, remote, platform-independent UI system, i.e. 
HTTP 1.1 support + JSP + libraries for use on web desktops.  And 
if I remain AJP compliant internally I can take advantage of load 
balancers and a lot of other off-the-shelf stuff.  This includes 
all the O'Reilly books and documentation, and the pool of available 
personnel who know how to use it, etc. (and this list, BTW).  I can 
hire people and, with an API supported by a subclass of Servlet, 
integrate them in with IDE support, Eclipse plugins ...

Re: Detecting terminal HTTP chunk

2007-03-05 Thread Peter Kennard

Typo, if anyone read it: I meant IPv6 :)
PK





Re: Detecting terminal HTTP chunk

2007-03-04 Thread Peter Kennard


I guess the general form of this question is: with HTTP 1.1 chunked 
input, how do I read a chunk at a time? That requires knowing the 
length of the chunk before calling read(), so that if I attempt to read 
more than the length of the chunk I can still process what has arrived 
immediately instead of waiting for subsequent input (in the final-chunk 
case there is no subsequent input).


That is, without having to put a second form of chunking inside the 
HTTP chunking?
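
For what it's worth, a minimal sketch of how this usually ends up being handled with the plain servlet API: the container strips the chunked transfer-coding, so read() simply hands back whatever bytes have arrived (in whatever chunk sizes the client used) and returns -1 once the terminal zero-length chunk has been seen. The handlePiece helper is invented for the example.

import java.io.IOException;
import java.io.InputStream;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class PieceAtATimeServlet extends HttpServlet {
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        InputStream in = req.getInputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            // 'n' bytes are here now; process them immediately instead of
            // waiting for the rest of the body. The chunk boundaries
            // themselves are not visible at this level.
            handlePiece(buf, n);            // hypothetical, application-specific
        }
        // Falling out of the loop is the only signal that the last chunk arrived.
        resp.getWriter().println("done");
    }

    private void handlePiece(byte[] b, int len) { /* process this piece */ }
}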


PK






Re: Detecting terminal HTTP chunk

2007-03-04 Thread Tim Funk
The servlet API does not expose these details. At best you have the 
InputStream to read from. (And use available() if you want to try to 
read without blocking, though due to buffering that probably won't work anyway.)


But since you can't send the response without finishing the reading of 
the input stream - the entire question doesn't seem to make sense.


-Tim

Peter Kennard wrote:


I guess the general form of this question is: with HTTP 1.1 chunked 
input, how do I read a chunk at a time? That requires knowing the 
length of the chunk before calling read(), so that if I attempt to read 
more than the length of the chunk I can still process what has arrived 
immediately instead of waiting for subsequent input (in the final-chunk 
case there is no subsequent input).


That is, without having to put a second form of chunking inside the 
HTTP chunking?





Re: Detecting terminal HTTP chunk

2007-03-04 Thread Peter Kennard
If available() is accurately supported I guess that does part of the 
job, but it still doesn't let you know that the last chunk you read was 
the last one.  It is wholly dependent on the higher levels reading an 
end tag, which seems like a design mistake, instead of getting an 
end-of-file or end-of-data exception.
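
A small sketch of the available()-based variant discussed here (the class and method names are invented): only as many bytes as the stream reports as already buffered are requested, so the read should not block; but a return of 0 cannot distinguish "nothing buffered yet" from "the terminal chunk has arrived", and only read() returning -1 marks the end.

import java.io.IOException;
import java.io.InputStream;

public final class AvailableReader {
    private AvailableReader() {}

    // Returns the number of bytes copied into buf; 0 means nothing was
    // buffered at the moment of the call (which may or may not be the end).
    public static int readBuffered(InputStream in, byte[] buf) throws IOException {
        int ready = in.available();
        if (ready <= 0) {
            return 0;
        }
        return in.read(buf, 0, Math.min(ready, buf.length));
    }
}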


At 14:21 3/4/2007, you wrote:
The servlet API does not expose these details. At best you have the 
InputStream to read from. (And use available() if you want to try to 
read without blocking, though due to buffering that probably won't work anyway.)


But since you can't send the response without finishing the reading 
of the input stream - the entire question doesn't seem to make sense.


-Tim

Peter Kennard wrote:
I guess the general form of this question is: with HTTP 1.1 chunked 
input, how do I read a chunk at a time? That requires knowing the 
length of the chunk before calling read(), so that if I attempt to 
read more than the length of the chunk I can still process what has 
arrived immediately instead of waiting for subsequent input (in the 
final-chunk case there is no subsequent input).
That is, without having to put a second form of chunking inside 
the HTTP chunking?





Re: Detecting terminal HTTP chunk

2007-03-04 Thread Peter Kennard

At 23:07 3/4/2007, you wrote:
But since you can't send the response without finishing the reading 
of the input stream - the entire question doesn't seem to make sense.


If the input pipe is slow (i.e. a cellphone with a slow pipe) and you are 
sending a transaction where the first part of it initiates a big 
operation (like a database lookup), the lookup can be happening while 
the rest of the data is still being read in.  I.e. it is useful to be 
able to read input in small chunks as it comes in.  And the client can 
be tuned to chunk appropriately for the transaction(s).


available() may work for this depending on buffering scheme of 
tomcat's protocol handler.


On writing the reply, if you call flushBuffer() it will dispatch 
whatever is in the buffer (HTTP chunks as IP packets) to the client 
even if input reading is incomplete.  Doing so, if you can, will 
reduce round trip latency and the time your socket is consumed.  A 
gross example would be a transaction to process a large file and 
return it to the client.  If the processing was serial then the 
client could be receiving the return file even before it had finished 
sending the source file.


It seems the servlet API was not upgraded to handle incremental 
chunks in a flexible, general manner when chunking was added in 
HTTP 1.1.  This is irrespective of how chunks may be juggled by any 
proxy or other front end.  I am simply dealing with how you *can* 
handle them on the receiving end.


PK



