Thanks for creating the issue.  Please note that there were actually two 
issues discovered.  The client flow issue was logged, but the server 
request parsing issue wasn't.

The rest of this post just walks through my thought process, so that the 
group can benefit from an analysis of this problem and a discussion of its 
solution as a case study of larger stream processing problems and 
solutions.

The more I dig into this issue, the more I understand streams.  Still, the 
more I dig, the more it appears you've got a problem on your hands with 
this type of flow.

My initial thought was to implement this system as a combination of two 
streams, your outgoing request stream and your incoming response stream, 
each processing its elements at full speed.  Now, I'm still learning the 
lingo and don't want to unintentionally mix my metaphors, but essentially 
I'd expect to give something a Source[HttpRequest] and get a 
Source[HttpResponse] back, each with the possibility of independent 
supply/demand rates.
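
Very roughly, I picture that shape like this (a minimal sketch, assuming 
current akka-http naming and an Akka version where the ActorSystem 
provides the materializer; the exact calls are illustrative, not the 
original code):

  import akka.actor.ActorSystem
  import akka.http.scaladsl.Http
  import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
  import akka.stream.scaladsl.{Sink, Source}

  implicit val system: ActorSystem = ActorSystem("pipelining-sketch")

  // Outgoing request stream: produced at whatever rate the caller likes.
  val requests: Source[HttpRequest, _] =
    Source(List("/a", "/b", "/c")).map(uri => HttpRequest(uri = uri))

  // Incoming response stream: the connection flow turns requests into
  // responses, each side running at its own supply/demand rate, mediated
  // by backpressure.
  val responses: Source[HttpResponse, _] =
    requests.via(Http().outgoingConnection("example.com"))

  responses.runWith(Sink.foreach[HttpResponse](r => println(r.status)))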

This might work in some ways, but it presents other challenges.  Based on 
what I *think* the original flow is trying to do, there appears to be a 
fundamental problem.  My (maybe naive) solution treats the request and 
response streams as independent (from the point of view of the client); 
stream ordering is (supposed to be) guaranteed by HTTP/1.1 pipelining. 
What the original flow attempts to do is "remember" the HttpMethod of the 
original request and join it back up with the response.  I'm guessing you 
want to do this for pipeline error detection as well as auto-recovery of 
recoverable requests: you need the original request's method to know 
whether or not a response failure is auto-recoverable (GET typically is, 
POST isn't, etc.).
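
If I'm reading the intent right, that "remember the method" flow amounts 
to something like this (a minimal sketch; rememberMethod and the 
connection parameter are my own names, not anything from the original 
code):

  import akka.NotUsed
  import akka.http.scaladsl.model.{HttpMethod, HttpRequest, HttpResponse}
  import akka.stream.FlowShape
  import akka.stream.scaladsl.{Broadcast, Flow, GraphDSL, Zip}

  // Broadcast each request, keep its method on a side channel, and zip
  // the method back up with the response that HTTP/1.1 pipelining
  // delivers in the same order.
  def rememberMethod(connection: Flow[HttpRequest, HttpResponse, _])
      : Flow[HttpRequest, (HttpMethod, HttpResponse), NotUsed] =
    Flow.fromGraph(GraphDSL.create() { implicit b =>
      import GraphDSL.Implicits._

      val bcast = b.add(Broadcast[HttpRequest](2))
      val zip   = b.add(Zip[HttpMethod, HttpResponse]())

      bcast.out(0).map(_.method) ~> zip.in0   // the "remembered" methods
      bcast.out(1) ~> connection ~> zip.in1   // actual request/response leg

      FlowShape(bcast.in, zip.out)
    })

To actually pipeline, that first branch needs a buffer in front of 
zip.in0, and that buffer is exactly the state whose size depends on the 
difference between the two rates.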

Unfortunately, now we're caught between two infinite streams, and the 
amount of state stored depends on the difference between the two rates.  
This is a problem for a bounded-memory system.  We have some tools for 
dealing with rate differences.  My broken solution introduces a dropping 
buffer.  A better broken solution might be to conflate the httpMethod 
stream and collapse like methods (recoverable, non-recoverable) together.  
This somewhat reduces the problem, but doesn't solve it: in order to 
deterministically match requests with responses we'd need to conflate and 
still track the overall shape of the conflated httpMethod stream, i.e. 
(Rs, NRs, Rs, NRs, Rs), where R = recoverable and NR = non-recoverable.
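
Sketching that conflation idea (conflateWithSeed is the operator I have in 
mind; isRecoverable and the run-length encoding are my own assumptions, 
not anything from the original flow):

  import akka.NotUsed
  import akka.http.scaladsl.model.{HttpMethod, HttpMethods}
  import akka.stream.scaladsl.Flow

  // My assumption: only idempotent methods are safe to auto-retry.
  def isRecoverable(m: HttpMethod): Boolean =
    m == HttpMethods.GET || m == HttpMethods.HEAD

  // Conflate the remembered methods down to the stream's "shape": a
  // run-length encoding of recoverable (true) vs non-recoverable (false)
  // requests, e.g. GET, GET, POST, GET => Vector((true,2),(false,1),(true,1)).
  // The Vector of runs is itself unbounded, which is the remaining problem.
  val methodShape: Flow[HttpMethod, Vector[(Boolean, Int)], NotUsed] =
    Flow[HttpMethod]
      .map(isRecoverable)
      .conflateWithSeed(r => Vector((r, 1))) { (runs, r) =>
        runs.last match {
          case (`r`, n) => runs.init :+ ((r, n + 1))   // extend current run
          case _        => runs :+ ((r, 1))            // start a new run
        }
      }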

Still, we have an issue where the potential rate of change in the 
conflated httpMethod stream's shape can exceed our buffer, and now we're 
back to our original problem.  The rate difference is significantly 
better, but not good enough to run in constant space.  I'm not sure it's 
possible to get this into constant space, because we can't reduce the 
problem down to a pure conflation, we can't drop elements, and we don't 
want to error out.

At this point, it's probably acceptable to rate-limit the request flow 
while the conflated httpMethod stream (merge or zip) works to drain 
responses.

This means that if we want the client to be able to associate responses 
with requests and do nice things for us, we need to accept the possibility 
that it will limit the rate of accepted requests to keep the memory 
required for that association bounded.
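
Concretely, that rate limiting can just be an ordinary backpressuring 
buffer in front of the connection (a sketch; the size 32 is arbitrary):

  import akka.http.scaladsl.model.HttpRequest
  import akka.stream.OverflowStrategy
  import akka.stream.scaladsl.Flow

  // A small backpressuring buffer bounds how many in-flight requests we
  // still have to remember methods for; once it fills, upstream request
  // production slows down instead of our bookkeeping growing without limit.
  val boundedRequests: Flow[HttpRequest, HttpRequest, _] =
    Flow[HttpRequest].buffer(32, OverflowStrategy.backpressure)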

If, on the other hand, we didn't want the client to perform pipeline error 
detection or request auto-retry, we should be able to use the 
two-independent-streams model.  Even in this model, we'd have appropriate 
backpressure: the server will never sink requests faster than it can 
handle, nor will it produce responses faster than it can handle.  The two 
things are in fact related, but that relationship isn't visible to the 
client; it simply manifests itself as a rate differential in our two 
otherwise independent streams.
