On 2011-08-03 13:08, Oleg Kalnichevski wrote:
>
>>>
>>> Yes, you do. One thing I do not understand, though. Why don't you simply
>>> use two connections if you really need to process two messages
>>> concurrently?
>>>
>>> Oleg
>>>
>>>
>>
>> I'm implementing a load balancer.
>>
>> Requests from browsers are first put on a common queue by producer
>> tasks. Requests can be part of an HTTP pipeline. Consumer tasks then take
>> them from the queue to send them to available web servers from a pool.
>> Requests from the same pipeline can be sent to different servers.
>>
>> A similar processing happens for responses. In case a server crashes,
>> the requests sent to it that did not get a response are reallocated to
>> the other consumer tasks to be resent to the other servers.
>>
>> Hence, processing is decoupled between the client tasks and the server
>> tasks and is thus asynchronous. I need to preserve the requests to
>> resend them in case of server failure and also need to preserve
>> responses to guarantee the right delivery order of HTTP pipelines.
>> Responses for the same pipeline can arrive at the load balancer out of
>> order.
>>
>
> This still does not explain why you want to read two requests from the
> same connection at the _same_ time. Anyway, if you want to be able to
> repeat requests there is no way around buffering content in memory or on
> disk.
>
> Oleg
>
Even if requests were not repeated, requests in a pipeline can be forwarded to the servers as soon as they arrive on the connection from the browser. Hence, I want to put them on the working queue as soon as they are available. I cannot figure out how to do this without consuming them from the connection and preserving them in buffered copies that can be put on the queue.
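To make the idea concrete, here is a rough sketch of the kind of buffering I have in mind. It is written against the blocking-side HttpCore 4.x message classes purely for readability; in the actual NIO handler the body would be accumulated from the content decoder callbacks instead. The RequestWorkItem class and the method names are just placeholders of mine, not anything from the library:

import java.io.IOException;
import java.util.concurrent.BlockingQueue;

import org.apache.http.HttpEntity;
import org.apache.http.HttpEntityEnclosingRequest;
import org.apache.http.HttpRequest;
import org.apache.http.RequestLine;
import org.apache.http.entity.ByteArrayEntity;
import org.apache.http.message.BasicHttpEntityEnclosingRequest;
import org.apache.http.message.BasicHttpRequest;
import org.apache.http.util.EntityUtils;

/**
 * Placeholder work item: a self-contained, repeatable copy of one pipelined
 * request, tagged with the browser connection and its position in the
 * pipeline so responses can be delivered back in the right order.
 */
class RequestWorkItem {
    final Object clientConnectionId;   // identifies the browser connection
    final int pipelinePosition;        // position within that pipeline
    final HttpRequest bufferedRequest; // fully buffered, safe to resend

    RequestWorkItem(Object clientConnectionId, int pipelinePosition, HttpRequest bufferedRequest) {
        this.clientConnectionId = clientConnectionId;
        this.pipelinePosition = pipelinePosition;
        this.bufferedRequest = bufferedRequest;
    }
}

class RequestBuffering {

    /**
     * Copy an incoming request into a detached, repeatable form: the request
     * line and headers are cloned, and any entity is drained into a byte
     * array so the copy no longer depends on the browser connection.
     */
    static HttpRequest bufferRequest(HttpRequest incoming) throws IOException {
        RequestLine line = incoming.getRequestLine();
        HttpRequest copy;
        if (incoming instanceof HttpEntityEnclosingRequest) {
            BasicHttpEntityEnclosingRequest withBody =
                    new BasicHttpEntityEnclosingRequest(line.getMethod(), line.getUri());
            HttpEntity entity = ((HttpEntityEnclosingRequest) incoming).getEntity();
            if (entity != null) {
                withBody.setEntity(new ByteArrayEntity(EntityUtils.toByteArray(entity)));
            }
            copy = withBody;
        } else {
            copy = new BasicHttpRequest(line.getMethod(), line.getUri());
        }
        copy.setHeaders(incoming.getAllHeaders());
        return copy;
    }

    /**
     * Producer side: buffer each request as soon as it has been read from the
     * browser connection and hand it to the shared queue; consumer tasks take
     * items off the queue and forward them to whichever server is available.
     */
    static void enqueue(BlockingQueue<RequestWorkItem> workQueue,
                        Object clientConnectionId,
                        int pipelinePosition,
                        HttpRequest incoming) throws IOException, InterruptedException {
        HttpRequest buffered = bufferRequest(incoming);
        workQueue.put(new RequestWorkItem(clientConnectionId, pipelinePosition, buffered));
        // If a server dies before answering, the same work item can simply be
        // put back on the queue and resent, since the copy is self-contained.
    }
}

The point is that once the copy is detached from the connection, the consumer tasks can send it to any server and re-queue it unchanged if that server fails, and the (connection id, pipeline position) pair is what the response side would use to restore the delivery order.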