On 05/20/2013 10:59 PM, Amos Jeffries wrote:
> Used "we cannot service any more of the pipeline" instead. To make it
> clear that some has been serviced, but no more will regardless of how
> far down the pipeline we are.

At that point, we do not know (and do not need to know) whether any
request has been serviced: connOkToAddRequest() is called for the very
first request (as well as for subsequent requests).

>>> +    const bool result = (conn->getConcurrentRequestCount() < 1);
>>> +    if (!result)
>>> +        debugs(33, 3, conn->clientConnection << " persistence
>>> disabled. Concurrent requests discarded.");

>> We do not know whether we discarded something or not. Drop the second
>> sentence.

> We do know that.

You are right, my bad. I missed the positive notYetUsed precondition.
Please note that it does not imply that we are parsing a second request
though. connOkToAddRequest() is called for the first request as well.
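For the record, here is how I read the quoted check in context. This is
only a sketch: the free-function signature and the comments are my
assumptions, not quotes from the patch.

    // Sketch only. connOkToAddRequest() is reached for the very first
    // request on a connection as well as for later pipelined requests,
    // so a positive notYetUsed at the call site only means that
    // unparsed bytes remain in the read buffer; it does not mean that
    // a second request is being parsed.
    static bool
    connOkToAddRequest(ConnStateData *conn)
    {
        const bool result = (conn->getConcurrentRequestCount() < 1);
        if (!result)
            debugs(33, 3, conn->clientConnection << " persistence disabled. Concurrent requests discarded.");
        return result;
    }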
> I've decided to move it up into the
> config consistency checks next to the pipeline vs auth checks.

Yes, that is the right place for the configuration consistency check.

> I've added a loop to scan for Connection:close (and
> Proxy-Connection:close), and CONNECT method as blockers on further
> pipeline reads. There may be other conditions we find later.

The new loop is problematic, on several levels:

1) We should either

  * assume (or ensure) that when we are called a connection is
persistent, or

  * not assume that when getCurrentContext() is NULL the connection is
persistent.

I recommend the first approach because it is simpler, does not add new
state, and avoids code duplication. This is the approach used before
your changes AFAICT. However, if you disagree, then the
"getCurrentContext() is NULL implies that the connection is persistent"
assumption above should be removed, and the next two items will also
apply:

2) We should not implement the "connection is persistent after this
request" check inside the loop. It should be either a method of
ClientSocketContext() or a ConnStateData member, depending on whether
you remove the assumption discussed above.

3) The check itself raises a few red flags. For example, if the
previous request is CONNECT, we should not even be here. And even if
the previous request did not have Connection:close or equivalent, we
may still not be persistent because of many other conditions. We
probably need to take request->flags.proxyKeepalive into account here.
Look for code marked with the "check whether we should send keep-alive"
comment in clientReplyContext::buildReplyHeader().

In summary, I think the decision tree looks like this:

1. Please carefully consider removing all the new persistency checks
you have added. They are outside your patch scope, they may not be
needed at all, and implementing them correctly requires more work. When
a connection stops being persistent, it is closed, and the pipelining
code in question is not called (if you have reasons to doubt that last
assumption, let's fix the code to make it true).

2. If you insist on adding those new checks for some good reason that I
have missed, then I recommend adding a new ConnStateData::persistent_
field that the client-side code will maintain. Correct maintenance will
not be trivial, but this approach avoids looping over concurrent
requests (that may be all gone already) and duplicating persistency
checks for each request. A sketch of that approach follows.
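To make option 2 concrete, here is a minimal sketch of what I have in
mind. The member and method names are illustrative, and deciding
exactly where clearPersistence() gets called is precisely the
non-trivial maintenance work mentioned above:

    // Illustrative sketch only, not the real ConnStateData declaration.
    // The idea: one latched flag per connection, updated by client-side
    // code as each request/reply is processed, instead of re-deriving
    // persistence by looping over the concurrent requests.
    class ConnStateData
    {
    public:
        ConnStateData(): persistent_(true) {}

        /// whether we may keep reading pipelined requests
        bool isPersistent() const { return persistent_; }

        /// Latches persistence off; never flips back to true. Would be
        /// called wherever client-side code rules persistence out, for
        /// example near the "check whether we should send keep-alive"
        /// logic in clientReplyContext::buildReplyHeader() that
        /// computes request->flags.proxyKeepalive.
        void clearPersistence() { persistent_ = false; }

    private:
        bool persistent_; ///< starts true; cleared once, permanently
    };

With that in place, connOkToAddRequest() would boil down to something
like "conn->isPersistent() && conn->getConcurrentRequestCount() <
limit", with no per-request loop.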
> +    // XXX: check that pipeline queue length is no more than M seconds long already.
> +    // For long-delay queues we need to push back at the client via leaving things in TCP buffers.
> +    // By time M all queued objects are already at risk of getting client-timeout errors.

I do not think we should do this, at least not by default, so this is
not an XXX. I would remove that comment, but if you want to keep it,
please rephrase it as an optional feature. It would be rather difficult
for Squid to measure the time those requests spent in TCP buffers and
also predict whether the client still wants those requests to go ahead.
Some clients would not submit all requests at once and will not time
out submitted requests as long as they keep receiving response(s), for
example.

Thank you,

Alex.
