Luigi Gangitano wrote:
Hi guys,
I'm sorry to bother you here but a long thread on pipelining support is going 
on on debian-devel. You may wish to add some comments there. :-)

Thread start can be found here:

  http://lists.debian.org/debian-devel/2010/05/msg00494.html

Thanks,

L

Begin forwarded message:

Resent-From: debian-de...@lists.debian.org
From: Petter Reinholdtsen <p...@hungry.com>
Date: 17 May 2010 07:05:00 GMT+02:00
To: debian-de...@lists.debian.org
Subject: APT do not work with Squid as a proxy because of pipelining default

I am bothered by <URL: http://bugs.debian.org/565555 >, and the fact
that apt(-get,itude) does not work with Squid as a proxy.  I would very
much like to have apt work out of the box with Squid in Squeeze.  To
fix it one can either change Squid to work with pipelining the way APT
uses it, which the Squid maintainer and developers, according to the BTS
report, are unlikely to implement any time soon, or change the default
setting in apt for Acquire::http::Pipeline-Depth to zero (0).  I've
added a file like this in /etc/apt/apt.conf.d/ to solve it locally:

 Acquire::http::Pipeline-Depth "0";

My question to all of you is simple.  Should the APT default be
changed or Squid be changed?  Should the bug report be reassigned to
apt or stay as a bug with Squid?


Thanks Luigi, you may have to relay this back to the list. I can't seem to post a reply to the thread.


I looked at that Debian bug a while back, when first looking at optimizing the request parsing for Squid, with the thought of increasing the Squid threshold for pipelined requests as many are suggesting.


There are a few factors which have so far crushed the idea of solving it in Squid alone:

* Replies of unknown length have no end-of-data marker other than closing the client TCP connection.

* The behaviour of IIS and ISA servers on POST requests with authentication and the like, as outlined in our bug http://bugs.squid-cache.org/show_bug.cgi?id=2176, causes the same sort of problem as above, even if the connection could otherwise be kept alive.

This hits a fundamental flaw in pipelining which Robert Collins alluded to but did not explicitly state: closing the connection erases any chance of getting replies to the following pipelined requests. Apt is not alone in failing to re-try unsatisfied requests on a new connection.
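To make the flaw concrete, here is a self-contained sketch using a toy local server (everything below is illustrative, not Squid or apt code): the server answers only the first of two pipelined requests before closing, so a robust client must notice the close and re-issue the unanswered request on a fresh connection.

```python
import socket
import threading

def request(path):
    return ("GET %s HTTP/1.1\r\nHost: localhost\r\n\r\n" % path).encode()

def read_until_close(conn):
    data = b""
    while True:
        chunk = conn.recv(4096)
        if not chunk:              # peer closed the TCP link: end-of-data
            return data
        data += chunk

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(2)
port = srv.getsockname()[1]

def server():
    # Connection 1: both pipelined requests arrive, but only the first is
    # answered before the close-delimited reply forces the link shut.
    c, _ = srv.accept()
    buf = b""
    while buf.count(b"\r\n\r\n") < 2:
        buf += c.recv(4096)
    c.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n"
              b"Connection: close\r\n\r\nfirst")
    c.close()                      # the pipelined second request is lost
    # Connection 2: the client's retry, answered normally.
    c, _ = srv.accept()
    buf = b""
    while b"\r\n\r\n" not in buf:
        buf += c.recv(4096)
    c.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 6\r\n"
              b"Connection: close\r\n\r\nsecond")
    c.close()

threading.Thread(target=server, daemon=True).start()

# Pipeline two requests on one connection; only /a gets a reply.
c1 = socket.create_connection(("127.0.0.1", port))
c1.sendall(request("/a") + request("/b"))
first = read_until_close(c1)

# Re-issue the unsatisfied request on a fresh connection.
c2 = socket.create_connection(("127.0.0.1", port))
c2.sendall(request("/b"))
second = read_until_close(c2)

print(first.split(b"\r\n\r\n", 1)[1].decode())   # -> first
print(second.split(b"\r\n\r\n", 1)[1].decode())  # -> second
```

A client that simply reported "connection reset" for /b instead of retrying is exactly the failure mode apt exhibits.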

Reliable pipelining in Squid requires avoiding connection closure. HTTP/1.1 and chunked encoding show a lot of promise here, but still require a lot of work to get going.
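A minimal sketch of why chunked encoding helps (the helper names are my own, not Squid code): each chunk is prefixed by its size in hex and a zero-size chunk marks end-of-data, so a reply of unknown length no longer needs a connection close to delimit it.

```python
def chunk_encode(parts):
    # Frame each body piece as <hex size>CRLF<data>CRLF.
    out = b""
    for p in parts:
        out += b"%x\r\n" % len(p) + p + b"\r\n"
    return out + b"0\r\n\r\n"          # terminating zero-length chunk

def chunk_decode(data):
    body = b""
    while True:
        size_line, data = data.split(b"\r\n", 1)
        size = int(size_line, 16)
        if size == 0:
            return body                # end-of-data, connection stays open
        body += data[:size]
        data = data[size + 2:]         # skip the chunk's trailing CRLF

encoded = chunk_encode([b"Debian ", b"apt ", b"pipelining"])
print(chunk_decode(encoded).decode())  # -> Debian apt pipelining
```

(This omits chunk extensions and trailers, which the real transfer coding also allows.)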


As noted by David Kalnischkies in http://lists.debian.org/debian-devel/2010/05/msg00666.html, the Squid currently in Debian can trivially be configured to pipeline 2 requests concurrently, plus a few more requests held in the networking stack buffers, which Squid will read in once the first pipelined request is completed.
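For reference, that behaviour is a one-line squid.conf toggle (a boolean directive in the Squid releases of that era; my comment about its effect is a summary, not official wording):

```
# squid.conf: let Squid read the next request while the current one is
# still being serviced, i.e. handle 2 pipelined requests concurrently.
pipeline_prefetch on
```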

A good solution seems to me to involve fixes on both sides: lowering the default apt setting to a number where the pipeline timeouts/errors are less likely to occur (as noted by people around the web, 1-5 seems to work better than 10, and 0 or 1 works flawlessly for most), while we work on getting Squid doing more persistent connections and faster request handling.
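For instance, a conservative apt-side default could be shipped as a drop-in like the following (the filename is only an example):

```
// /etc/apt/apt.conf.d/99pipeline-depth  (example filename)
// Allow at most one request in flight ahead of the current reply.
Acquire::http::Pipeline-Depth "1";
```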

Amos
Squid Proxy Project
