On Sun, 2010-06-20 at 23:44 -0700, Supun Kamburugamuva wrote:
> We have a requirement for streaming content from a client to a server.
> 
> We have two I/O reactors, one on the client side and one on the server side.
> 
> We are going to use a constant shared buffer between the client-side I/O
> reactor and the server-side I/O reactor.
> 
> Now let's say we have a fast client and a slow server. Based on the state of
> the shared buffer, we disable the I/O operations of the reactors as appropriate.
> 
> For example, if the client has data, we enable output of the server I/O
> reactor using IOControl.requestOutput() etc. When the server writes the
> data and the buffer is free, we again enable input of the client I/O
> reactor via IOControl.
> 
> If the client is fast, we will disable its input most of the time so that we
> don't read anything until the server is ready.
> 
> We are wondering whether this is a correct approach for implementing
> streaming. It would be great to have your ideas.
> 

Yes, this sounds reasonable. Conceptually the pattern is fairly
straightforward. If incoming data cannot be processed, one should disable
input events until resources free up. If outgoing data is not yet
available, one should disable output events until it is ready.

There is an example of an ultra-simple HTTP reverse proxy that
demonstrates the basic I/O throttling technique for asynchronous HTTP
connections:

http://svn.apache.org/repos/asf/httpcomponents/httpcore/trunk/httpcore-nio/src/examples/org/apache/http/examples/nio/NHttpReverseProxy.java
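
To make the pattern a bit more concrete, below is a minimal sketch of such a
shared buffer, loosely following the approach used in that example. It assumes
HttpCore NIO 4.x (org.apache.http.nio.IOControl, ContentDecoder,
ContentEncoder); the SharedBuffer class and its method names are made up for
illustration, and real code would also need to deal with end-of-stream and
connection shutdown.

import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.http.nio.ContentDecoder;
import org.apache.http.nio.ContentEncoder;
import org.apache.http.nio.IOControl;

/**
 * Hypothetical buffer shared between the client-side and the server-side
 * I/O reactors. A fast producer is throttled by suspending its input
 * events whenever the buffer is full; the consumer is woken up by
 * requesting output events whenever data becomes available.
 */
class SharedBuffer {

    private final ByteBuffer buffer;
    private IOControl producerIOControl; // connection we read from
    private IOControl consumerIOControl; // connection we write to

    SharedBuffer(int capacity) {
        this.buffer = ByteBuffer.allocate(capacity);
    }

    /** Invoked from the producer's inputReady() callback. */
    synchronized void consumeInput(
            ContentDecoder decoder, IOControl ioctrl) throws IOException {
        this.producerIOControl = ioctrl;
        decoder.read(this.buffer);
        if (!this.buffer.hasRemaining()) {
            // Buffer is full: stop reading from the fast producer
            ioctrl.suspendInput();
        }
        if (this.buffer.position() > 0 && this.consumerIOControl != null) {
            // Data is available: resume output events on the consumer
            this.consumerIOControl.requestOutput();
        }
    }

    /** Invoked from the consumer's outputReady() callback. */
    synchronized void produceOutput(
            ContentEncoder encoder, IOControl ioctrl) throws IOException {
        this.consumerIOControl = ioctrl;
        this.buffer.flip();
        encoder.write(this.buffer);
        this.buffer.compact();
        if (this.buffer.position() == 0) {
            // Nothing buffered: stop generating output events
            ioctrl.suspendOutput();
        }
        if (this.buffer.hasRemaining() && this.producerIOControl != null) {
            // Space has freed up: resume reading from the producer
            this.producerIOControl.requestInput();
        }
    }
}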

Hope this helps

Cheers

Oleg

