On 18/01/2013 2:34 a.m., Tsantilas Christos wrote:
Hi all,
   this patch removes the "repinning" code from squid.

A related discussion in squid-dev mailing list is here:
   http://www.mail-archive.com/squid-dev@squid-cache.org/msg18997.html


Detailed description of the patch follows:

Propagate pinned connection persistency and closures to the client.

Squid was trying hard to forward a request after pinned connection
failures because some of those failures were benign pconn races. That
meant re-pinning failed connections. After a few iterations to correctly
handle non-idempotent requests, the code appeared to work, but the final
design, with all its added complexity and related dangers, was deemed
inferior to the approach we use now.

Squid now simply propagates connection closures (including pconn races)
to the client. It is now the client's responsibility not to send
non-idempotent requests on idle persistent connections and to recover
from pconn races.

Squid also propagates HTTP connection persistency indicators from client
to server and back, to make the client's job feasible. Squid will send
Connection: close and will close the client connection if the pinned
server says so, even if Squid could still maintain a persistent
connection with the client.

These changes are not meant to affect regular (not pinned) transactions.

In access.log, one can detect requests that were not responded to (due
to race conditions on pinned connections) by searching for the
ERR_ZERO_SIZE_OBJECT %err_code combined with a TCP_MISS/000 status and
zero response bytes.

This is a Measurement Factory project.

Thank you. All I can see standing out are a few cosmetic polishes.

In src/client_side_reply.cc:

* "not sending to a closing on pinned zero reply " does not make any sense.
Did you mean "not sending more data after a pinned zero reply "? Although even that is a bit unclear. Since this is public log text, I think we had better clarify it a lot before this patch goes in.

* please remove HERE from the new/changed debugs lines in selectPeerForIntercepted().

Amos
