Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-22 Thread Luigi Gangitano
Hi debianers,
I've contacted squid's upstream to help clarify some details in this thread,
and am now forwarding Amos' reply:

 Thanks Luigi, you may have to relay this back to the list. I can't seem to 
 post a reply to the thread.
 
 
 I looked at that Debian bug a while back when first looking at optimizing the 
 request parsing for Squid, with the thought of increasing the Squid threshold 
 for pipelined requests, as many are suggesting.
 
 
 There were a few factors which have so far crushed the idea of solving it in 
 Squid alone:
 
 * Replies of unknown length need end-of-data to be signalled by closing the 
 client TCP link.
 
 * The behaviour of IIS and ISA servers on POST requests with auth or such, as 
 outlined in our bug http://bugs.squid-cache.org/show_bug.cgi?id=2176, causes 
 the same sort of problem as above, even if the connection could otherwise be 
 kept alive.
 
 This hits a fundamental flaw in pipelining which Robert Collins alluded to 
 but did not explicitly state: that closing the connection will erase any 
 chance of getting replies to the following pipelined requests. Apt is not 
 alone in failing to retry unsatisfied requests via a new connection.
 
  Reliable pipelining in Squid requires evading the closing of connections. 
 HTTP/1.1 and chunked encoding show a lot of promise here but still require a 
 lot of work to get going.
 
 
 As noted by David Kalnischkies in 
 http://lists.debian.org/debian-devel/2010/05/msg00666.html the Squid 
 currently in Debian can be configured trivially to pipeline 2 requests 
 concurrently, plus a few more requests in the networking stack buffers which 
 will be read in by Squid once the first pipelined request is completed.
 
 A good solution seems to me to involve fixes on both sides: altering the 
 default apt configuration down to a number where the pipeline timeouts/errors 
 are less likely to occur (as noted by people around the web, 1-5 seems to work 
 better than 10, and 0 or 1 works flawlessly for most), while we work on 
 getting Squid to do more persistent connections and faster request handling.
 
 Amos
 Squid Proxy Project

Regards,

L

--
Luigi Gangitano -- lu...@debian.org -- gangit...@lugroma3.org
GPG: 1024D/924C0C26: 12F8 9C03 89D3 DB4A 9972  C24A F19B A618 924C 0C26





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-22 Thread Goswin von Brederlow
Amos Jeffries a...@treenet.co.nz writes:

 Strange that you should not know where the patch is, Goswin, since you
 were the first and only one to mention it in this thread.

 The answer is in the upstream bug report.
 http://bugs.squid-cache.org/show_bug.cgi?id=2624

Actually the answer is in the Debian bug report, as linked in the
initial mail:

http://lists.debian.org/debian-devel/2010/05/msg00494.html
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=56

  I searched a bit around, and discovered that this might be caused by a
  bug in the handling of If-Modified-Since in Squid, see URL:
  http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=499379  for a
  similar bug in squid3.  That issue is reported upstream at URL:
  http://bugs.squid-cache.org/show_bug.cgi?id=2624 , and there is a patch for
  version 3.0 which does not apply to version 2.7 in Lenny.

Sorry for the confusion.

MfG
Goswin





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-21 Thread Amos Jeffries
Strange that you should not know where the patch is, Goswin, since you 
were the first and only one to mention it in this thread.


The answer is in the upstream bug report.
http://bugs.squid-cache.org/show_bug.cgi?id=2624

It should be noted that the patch only affects the IMS replies apt gets back, 
so that the hash file will correctly be replaced and pass signature 
validation during a window of a cached but outdated hash or a key rollover.


Pipelining itself is not related.

The end results of the two failures, though, differ only by the presence 
or absence of a timeout in the message, so they can easily be confused by users.


Amos





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-20 Thread Pierre Habouzit
On Wed, May 19, 2010 at 03:28:00PM +0200, Bjørn Mork wrote:
 Pierre Habouzit madco...@madism.org writes:
  On Wed, May 19, 2010 at 10:42:55AM +0200, Bjørn Mork wrote:
 
  2) http proxy servers cannot always process pipelined requests due to
 the complexity this adds (complexity is always bad for security), and
 
  This is bullshit. It's *VERY* easy to support pipelining: parse one
  request at a time, and until you're done with a given request, you just
  stop watching the socket/file-descriptor for reading (IOW you let the
  consecutive request live in the kernel buffers).
 
 Yeah, you make it sound easy.

The fact is that it is. And I know it for a fact, having implemented
several custom HTTP interfaces in the past. It's just _that_ easy.

And since it's _that_ easy, when the broken proxy is free software, it
shall be fixed.

-- 
·O·  Pierre Habouzit
··Omadco...@debian.org
OOOhttp://www.madism.org





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-19 Thread Bjørn Mork
Petter Reinholdtsen p...@hungry.com writes:
 [Roger Lynn]
 But apt has been using pipelining for years. Why has this only just
 become a problem?

 It has been a problem in Debian Edu for years.  Just recently I
 figured out the cause and a workaround.

And FWIW I have experienced this problem for years too, but never
figured out why until this discussion came up.  And I do want to claim
more than common user knowledge of HTTP proxy servers.  Still, it never
occurred to me that my spurious apt problems could be caused by proxies.
And no, it's not just squid - I've been seeing exactly the same at work,
where the office network has some intercepting proxy solution from
Websense.

Anyway, this is definitely the type of problem that can and does exist for
years without necessarily causing a massive number of bug reports
against apt.  I still do not think that is an argument against fixing
it.

Can we please agree that in the real world
1) RFC1123 beats any other standard: Be liberal in what you accept, and
   conservative in what you send, and
2) http proxy servers cannot always process pipelined requests due to
   the complexity this adds (complexity is always bad for security), and
3) http clients cannot know whether their requests are proxied
?

The sum of these three points is that an HTTP client should never send
pipelined requests.



Bjørn





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-19 Thread Pierre Habouzit
On Wed, May 19, 2010 at 10:42:55AM +0200, Bjørn Mork wrote:

 2) http proxy servers cannot always process pipelined requests due to
the complexity this adds (complexity is always bad for security), and

This is bullshit. It's *VERY* easy to support pipelining: parse one
request at a time, and until you're done with a given request, you just
stop watching the socket/file-descriptor for reading (IOW you let the
consecutive request live in the kernel buffers).

Such an implementation of course doesn't benefit from the pipelining
wins, but it works just fine.
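
A minimal sketch of that scheme, in Python purely for illustration (this is
my own toy, not Squid's code, and the helper names are made up): serve one
request at a time, and while doing so simply don't poll the connection for
reading, so pipelined follow-ups wait in the buffers.

    import selectors
    import socket

    sel = selectors.DefaultSelector()
    leftover = {}   # conn -> bytes already read but not yet parsed

    def read_one_request(conn):
        # Return one request's header block, or None if the peer closed.
        # Toy parser: request bodies are ignored for brevity.
        buf = leftover.get(conn, b"")
        while b"\r\n\r\n" not in buf:
            chunk = conn.recv(4096)
            if not chunk:
                return None
            buf += chunk
        head, _, rest = buf.partition(b"\r\n\r\n")
        leftover[conn] = rest   # keep the start of the next pipelined request
        return head

    def handle(conn):
        sel.unregister(conn)    # stop watching while we serve this request
        while True:
            head = read_one_request(conn)
            if head is None:
                leftover.pop(conn, None)
                conn.close()
                return
            body = b"hello\n"
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s"
                         % (len(body), body))
            if b"\r\n\r\n" not in leftover.get(conn, b""):
                break           # nothing complete buffered; back to the loop
        sel.register(conn, selectors.EVENT_READ, handle)

    def accept(lsock):
        conn, _ = lsock.accept()
        sel.register(conn, selectors.EVENT_READ, handle)

    def main(port=8080):
        lsock = socket.socket()
        lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        lsock.bind(("127.0.0.1", port))
        lsock.listen()
        sel.register(lsock, selectors.EVENT_READ, accept)
        while True:
            for key, _ in sel.select():
                key.data(key.fileobj)

    if __name__ == "__main__":
        main()

(The sockets stay blocking, so a trickling client can stall this toy; a
real server would use non-blocking I/O. The point is only the
unregister/register dance and the leftover buffer that preserves pipelined
follow-up requests instead of discarding them.)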

In addition to that, multiplying TCP connections means more file
descriptors in use on the HTTP-server side, which is way inferior to
pipelining.

If squid fails when apt uses pipelining then squid is to be fixed.


I'd say that mentioning in apt's README.Debian that disabling
pipelining may help is fine, but disabling it by default is wrong.
Pipelining has been defined in the RFC since the nineties, ffs...

-- 
·O·  Pierre Habouzit
··Omadco...@debian.org
OOOhttp://www.madism.org





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-19 Thread Bjørn Mork
Pierre Habouzit madco...@madism.org writes:
 On Wed, May 19, 2010 at 10:42:55AM +0200, Bjørn Mork wrote:

 2) http proxy servers cannot always process pipelined requests due to
the complexity this adds (complexity is always bad for security), and

 This is bullshit. It's *VERY* easy to support pipelining: parse one
 request at a time, and until you're done with a given request, you just
 stop watching the socket/file-descriptor for reading (IOW you let the
 consecutive request live in the kernel buffers).

Yeah, you make it sound easy.  I'm sure those writing proxy servers are
just stupid.


Bjørn





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-19 Thread Goswin von Brederlow
Robert Collins robe...@robertcollins.net writes:

 Well, I don't know why something has 'suddenly' become a problem: it's
 been a known issue for years. The HTTP smuggling
 [http://www.watchfire.com/resources/HTTP-Request-Smuggling.pdf]
 attacks made that very obvious 5 years ago now.

Reading that, I don't think that is really a pipelining issue. You do not
need pipelining for it to work. The real problem is keep-alive. The
connection isn't destroyed after each request, so you can put multiple
requests into the stream and exploit different brokenness in different
parsers along the way.

Take the first example from the pdf
[http://www.cgisecurity.com/lib/HTTP-Request-Smuggling.pdf for non-stale
link]:

You send the first request to the proxy which it passes along:

POST http://SITE/foobar.html HTTP/1.1
Host: SITE
Connection: Keep-Alive
Content-Type: application/x-www-form-urlencoded
Content-Length: 0
Content-Length: 44
[CRLF]
GET /poison.html HTTP/1.1
Host: SITE
Bla: [space after the Bla:, but no CRLF]

As the paper says, the SunONE proxy uses the second Content-Length entry
while the SunONE web server uses the first. So the server replies with
foobar.html, and once done it starts reading the second request beginning
at the GET and stalls because that request is incomplete.

Then you send the second request to the proxy:

GET http://SITE/page_to_poison.html HTTP/1.1
Host: SITE
Connection: Keep-Alive
[CRLF]

And the server responds with poison.html while the proxy caches that
under page_to_poison.html.


Note that I did not pipeline the requests to the proxy, and still I
managed to poison it. I also didn't need the server to pipeline
responses, only to not fail if multiple requests are sent without
waiting for the first to complete - something that is required for
HTTP/1.1 conformance. Pipelining just makes more of those attacks
possible, namely those where the first parser sees multiple requests while
the second sees only one.

 http://en.wikipedia.org/wiki/HTTP_pipelining has a decent overview.

That nicely shows how much better pipelining is for performance.

 It's nice and interesting that some recent software has it on, but that
 is generally because the authors don't realise how broken it is,
 IMNSHO :).

 -Rob

I think you have failed to show that pipelining is broken. What seems
broken is Keep-Alive. Do you suggest we stop using Keep-Alive to
prevent broken parsers from being exploited? Make a full 3-way handshake
for every request?

MfG
Goswin





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-19 Thread Philipp Kern
On 2010-05-19, Goswin von Brederlow goswin-...@web.de wrote:
 Reading that, I don't think that is really a pipelining issue. You do not
 need pipelining for it to work. The real problem is keep-alive. The
 connection isn't destroyed after each request, so you can put multiple
 requests into the stream and exploit different brokenness in different
 parsers along the way.

Those are bugs in the servers that allow that output, though.

 I think you have failed to show that pipelining is broken. What seems
 broken is Keep-Alive. Do you suggest we stop using Keep-Alive to
 prevent broken parsers from being exploited? Make a full 3-way handshake
 for every request?

I think we would want keep-alive with a pipeline depth of 1 (i.e. send the
new request after the old one has been processed).  I'd rather think that,
if you avoid keep-alive, TCP slow start is more of a problem than the full
3-way handshake (which is annoying too).  Concurrent requests put an
unreasonable load onto the mirrors, so we should avoid that.

Kind regards,
Philipp Kern





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-19 Thread Goswin von Brederlow
Bjørn Mork bj...@mork.no writes:

 Petter Reinholdtsen p...@hungry.com writes:
 [Roger Lynn]
 But apt has been using pipelining for years. Why has this only just
 become a problem?

 It has been a problem in Debian Edu for years.  Just recently I
 figured out the cause and a workaround.

 And FWIW I have experienced this problem for years too, but never
 figured out why until this discussion came up.  And I do want to claim
 more than common user knowledge of HTTP proxy servers.  Still, it never
 occurred to me that my spurious apt problems could be caused by proxies.
 And no, it's not just squid - I've been seeing exactly the same at work,
 where the office network has some intercepting proxy solution from
 Websense.

 Anyway, this is definitely the type of problem that can and does exist for
 years without necessarily causing a massive number of bug reports
 against apt.  I still do not think that is an argument against fixing
 it.

 Can we please agree that in the real world
 1) RFC1123 beats any other standard: Be liberal in what you accept, and
conservative in what you send, and
 2) http proxy servers cannot always process pipelined requests due to
the complexity this adds (complexity is always bad for security), and
 3) http clients cannot know whether their requests are proxied
 ?

 The sum of these three points is that an HTTP client should never send
 pipelined requests.

Wrong, at least from arguments 2 and 3.

An HTTP/1.1 conforming server or proxy is free to process pipelined
requests serially one by one. The only requirement is that it does not
corrupt the second request by reading all available data into a buffer,
parsing the first request and then throwing away the buffer and thereby
discarding the subsequent requests in that buffer. It is perfectly fine
for the server to parse the first request, think, respond to the first
request and then continue to parse the second one.

Note that that behaviour in the server already gives a huge speed
increase: it cuts away the round-trip time between the last response and
the next request. For static web content the speedup of processing
pipelined requests truly in parallel is negligible anyway. Only dynamic
pages, where formulating the response to a request takes time, would
benefit from working on multiple responses on multiple cores. And those
cores are probably busy handling requests from other connections. So I
wouldn't expect servers to actually do parallel processing of pipelined
requests at all.


And your argument 1 applies perfectly to fixing squid, by the way. It
should accept pipelined requests, and then it can process them one by one
and send them on non-pipelined if it likes. It should NOT corrupt the
requests/responses.

So just from your argument 1 APT should default not to pipeline and
squid should be fixed.

MfG
Goswin





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-19 Thread Goswin von Brederlow
Bjørn Mork bj...@mork.no writes:

 Pierre Habouzit madco...@madism.org writes:
 On Wed, May 19, 2010 at 10:42:55AM +0200, Bjørn Mork wrote:

 2) http proxy servers cannot always process pipelined requests due to
the complexity this adds (complexity is always bad for security), and

 This is bullshit. It's *VERY* easy to support pipelining: parse one
 request at a time, and until you're done with a given request, you just
 stop watching the socket/file-descriptor for reading (IOW you let the
 consecutive request live in the kernel buffers).

Or user space buffers. You would not want to parse the requests by using
read(fd, buf, 1).

 Yeah, you make it sound easy.  I'm sure those writing proxy servers are
 just stupid.


 Bjørn

It is that easy. For a proxy, call it de-pipeline-isation. In a proxy
this behaviour would destroy the benefit of pipelining, but not the data.

The hard part is writing the proxy so that it still pipelines the
requests to the server. There you get the increased complexity, for
security and bandwidth limiting. But that is not required for HTTP/1.1
conformance and an actually working setup.

MfG
Goswin





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-19 Thread Eduard Bloch
#include <hallo.h>
* Robert Collins [Tue, May 18 2010, 02:02:59PM]:
 Given that pipelining is broken by design, that the HTTP WG has

And if not? Counter-example: it seems to work just fine with my
apt-cacher-ng proxy; at least, no bug reports related to that have appeared
for about a year now.

 increased the number of concurrent connections that are recommended,
 and removed the upper limit - no. I don't think that disabling
 pipelining hurts anyone - just use a couple more concurrent
 connections.

And here I disagree. Dealing with a dozen connections instead of having
the requests in the pipeline can require many more resources on the
proxy side.

Regards,
Eduard.





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-19 Thread Bjørn Mork
Goswin von Brederlow goswin-...@web.de writes:

 An HTTP/1.1 conforming server or proxy 

This is not the real world...

 is free to process pipelined
 requests serially one by one. The only requirement is that it does not
 corrupt the second request by reading all available data into a buffer,
 parsing the first request and then throwing away the buffer and thereby
 discarding the subsequent requests in that buffer. It is perfectly fine
 for the server to parse the first request, think, respond to the first
 request and then continue to parse the second one.

Yes, this can be done.  But you should ask yourself what proxies are
used for.  The serializing strategy will work, but it will make the
connection slower with a proxy than without.  That's not going to sell
many proxy servers.

 Note that that behaviour in the server already gives a huge speed
 increase: it cuts away the round-trip time between the last response and
 the next request. For static web content the speedup of processing
 pipelined requests truly in parallel is negligible anyway. Only dynamic
 pages, where formulating the response to a request takes time, would
 benefit from working on multiple responses on multiple cores. And those
 cores are probably busy handling requests from other connections. So I
 wouldn't expect servers to actually do parallel processing of pipelined
 requests at all.

This is true for a web server.  It is only true for a proxy server if it
can either forward a pipelined request or parallelize it.  That's where
we get the complexity.

If you keep your simple serial strategy, then a pipelined request will
be slower than a parallel one.

 And your argument 1 applies perfectly to fixing squid, by the way. It
 should accept pipelined requests, and then it can process them one by one
 and send them on non-pipelined if it likes. It should NOT corrupt the
 requests/responses.

Sure.  Squid should be fixed.  But I'm afraid fixing squid in Debian
won't fix all the http proxies around the world.

 So just from your argument 1 APT should default not to pipeline and
 squid should be fixed.

Good.


Bjørn





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-19 Thread David Kalnischkies
Hi all,

I don't want to interrupt your battles, so feel free to ignore me,
but I want to raise some questions (for you and me) nonetheless:

The notice in the apt.conf manpage about the - in the eyes of the writer
of that manpage section - broken squid version 2.0.2 was last changed
in 2004, so the issue isn't new. The manpage at least claims that this
squid version is also broken with respect to other cache-control settings.

I don't know a single bit about squid but a search for squid pipeline
turns up some documentation about a pipeline_prefetch setting:
 Available in: 3.1 2.7 3.HEAD 2.HEAD 3.0 2.6

 To boost the performance of pipelined requests to closer
 match that of a non-proxied environment Squid can try to fetch
 up to two requests in parallel from a pipeline.
http://www.squid-cache.org/Doc/config/pipeline_prefetch/

To somebody without knowledge this looks as if any
version in Debian should be able to handle a pipeline -
otherwise this setting wouldn't make much sense…

The default value for the APT option above is btw 10, and in apt
#413324 we have a claim that squid works well with a value of 5
or even 8 -- so maybe it is just a bug in handling too many
pipelined requests? Or something comparable to what happened
in #541428 regarding lighttpd and pipelining (timeout)?
(I am just shooting in the dark.)
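
For concreteness, here is how the two knobs under discussion might be set;
the directive name comes from the Squid documentation linked above and the
option name from the apt.conf manpage, but the file names and values are
illustrative guesses only:

    # /etc/squid/squid.conf (a boolean in the 2.6/2.7/3.0 series)
    pipeline_prefetch on

    // /etc/apt/apt.conf.d/90pipeline-depth (hypothetical file name)
    // several reports in this thread say 1-5 behaves better than the
    // default of 10 when a proxy sits on the path
    Acquire::http::Pipeline-Depth "5";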


Also, when we talk here about pipelines and their usage,
keep in mind that APT's HTTP usage is special compared to
an implementation and usage in a browser:
we have a trust chain available, so we should be on the safe
side security-wise; the number of Debian archives is limited
and most of them should be on a sane webserver
(if not, I would not have much trust in the archive…); and
especially on apt-get update we have either a lot of cache
hits (file has not changed) or a lot of very small files (Release,
Index and maybe pdiff) to transfer. New package updates come
from the same archive most of the time, and most packages are
relatively small, too, but having an upgrade including at least
500 packages is relatively common…

On the other hand, APT's HTTP client isn't as nice as it could be, in
the sense that it could fall back to non-pipelined mode, retry, or whatever
(and I wouldn't be too surprised if this turned out to be an APT bug).
As we all know, APT is a Debian-native tool and the base of a whole
bunch of other stuff, so besides ranting about its shortcomings we
could also work on patches, as the people with enough knowledge
to do this seem to be already around in this thread.


Thanks in advance and best regards,

David Kalnischkies


P.S. Sorry, Luigi Gangitano, for cc'ing, but I don't know whether you
follow the thread, and I mentioned squid too often in this mail not to
direct it your way.





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-19 Thread Daniel Burrows
On Wed, May 19, 2010 at 03:28:00PM +0200, Bjørn Mork bj...@mork.no was heard 
to say:
 Pierre Habouzit madco...@madism.org writes:
  This is bullshit. It's *VERY* easy to support pipelining: parse one
  request at a time, and until you're done with a given request, you just
  stop watching the socket/file-descriptor for reading (IOW you let the
  consecutive request live in the kernel buffers).
 
 Yeah, you make it sound easy.  I'm sure those writing proxy servers are
 just stupid.

  There are lots of easy features that my own software doesn't support,
and I don't think I'm particularly stupid.  It just means I have less
than an infinite amount of time.

  Daniel





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-19 Thread Goswin von Brederlow
Philipp Kern tr...@philkern.de writes:

 On 2010-05-19, Goswin von Brederlow goswin-...@web.de wrote:
 Reading that, I don't think that is really a pipelining issue. You do not
 need pipelining for it to work. The real problem is keep-alive. The
 connection isn't destroyed after each request, so you can put multiple
 requests into the stream and exploit different brokenness in different
 parsers along the way.

 Those are bugs in the servers that allow that output, though.

 I think you have failed to show that pipelining is broken. What seems
 broken is Keep-Alive. Do you suggest we stop using Keep-Alive to
 prevent broken parsers from being exploited? Make a full 3-way handshake
 for every request?

 I think we would want keep-alive with a pipeline depth of 1 (i.e. send the

Obviously I did not mean to disable Keep-Alive. :)

 new request after the old one has been processed).  I'd rather think that,
 if you avoid keep-alive, TCP slow start is more of a problem than the full
 3-way handshake (which is annoying too).  Concurrent requests put an unreasonable
 load onto the mirrors, so we should avoid that.

 Kind regards,
 Philipp Kern

How about using pipelining and, if it breaks, retrying without pipelining
and informing the user? If it occurs frequently, the user will
eventually notice the message and configure a depth of 1.

So 2 changes: 1) cope with the error gracefully and 2) explain what is
probably happening.

That way most people would get the speed benefit and installations would
still not break when broken software is encountered.
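
A rough sketch of that fallback, in Python purely for illustration (APT's
HTTP method is real code elsewhere; fetch() and PipelineError here are
made-up names for the example):

    class PipelineError(Exception):
        """A pipelined transfer went wrong: short body, hash mismatch
        mid-pipeline, connection closed early, and so on."""

    def fetch_all(urls, fetch):
        try:
            # optimistic first attempt with the current default depth
            return fetch(urls, pipeline_depth=10)
        except PipelineError as err:
            # change 2: explain what is probably happening
            print("W: pipelined fetch failed (%s)" % err)
            print("W: a proxy on the path may not cope with pipelined "
                  "requests; retrying with pipelining disabled")
            # change 1: cope with the error gracefully
            return fetch(urls, pipeline_depth=1)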

MfG
Goswin

PS: That doesn't mean squid shouldn't be fixed too.





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-19 Thread Goswin von Brederlow
Bjørn Mork bj...@mork.no writes:

 Goswin von Brederlow goswin-...@web.de writes:

 An HTTP/1.1 conforming server or proxy 

 This is not the real world...

 is free to process pipelined
 requests serially one by one. The only requirement is that it does not
 corrupt the second request by reading all available data into a buffer,
 parsing the first request and then throwing away the buffer and thereby
 discarding the subsequent requests in that buffer. It is perfectly fine
 for the server to parse the first request, think, respond to the first
 request and then continue to parse the second one.

 Yes, this can be done.  But you should ask yourself what proxies are
 used for.  The serializing strategy will work, but it will make the
 connection slower with a proxy than without.  That's not going to sell
 many proxy servers.

Sure. I was talking about conformance. A proxy / server that screws that
up deserves to be shot. And from the description of the problem this part
actually works in squid; people WOULD have noticed otherwise. Squid's
problem seems to be that it breaks later, when processing the
response. But that is a guess from the description of the problem. Still
no tcpdump of what actually happens has been posted.

 Note that that behaviour in the server already gives a huge speed
 increase: it cuts away the round-trip time between the last response and
 the next request. For static web content the speedup of processing
 pipelined requests truly in parallel is negligible anyway. Only dynamic
 pages, where formulating the response to a request takes time, would
 benefit from working on multiple responses on multiple cores. And those
 cores are probably busy handling requests from other connections. So I
 wouldn't expect servers to actually do parallel processing of pipelined
 requests at all.

 This is true for a web server.  It is only true for a proxy server if it
 can either forward a pipelined request or parallelize it.  That's where
 we get the complexity.

 If you keep your simple serial strategy, then a pipelined request will
 be slower than a parallel one.

But no slower than without pipelining. I was talking specifically about
servers and not proxies for a reason, by the way. :)

 And your argument 1 applies perfectly to fixing squid, by the way. It
 should accept pipelined requests, and then it can process them one by one
 and send them on non-pipelined if it likes. It should NOT corrupt the
 requests/responses.

 Sure.  Squid should be fixed.  But I'm afraid fixing squid in Debian
 won't fix all the http proxies around the world.

Let's start with the beam in our own eye before we go chasing motes in
other people's.

 So just from your argument 1 APT should default not to pipeline and
 squid should be fixed.

 Good.

Let me clarify this. Apt should default not to pipeline in stable; that
is the trivial workaround for the problem. In unstable something better
can be tried, like detecting when pipelining fails and automatically falling
back to depth 1. But defaulting to depth 1 until someone programs that is
an option too.

 Bjørn

MfG
Goswin





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-19 Thread Goswin von Brederlow
David Kalnischkies kalnischkies+deb...@gmail.com writes:

 Hi all,

 I don't want to interrupt your battles, so feel free to ignore me,
 but I want to raise some questions (for you and me) nonetheless:

 The notice in the apt.conf manpage about the - in the eyes of the writer
 of that manpage section - broken squid version 2.0.2 was last changed
 in 2004, so the issue isn't new. The manpage at least claims that this
 squid version is also broken with respect to other cache-control settings.

 I don't know a single bit about squid but a search for squid pipeline
 turns up some documentation about a pipeline_prefetch setting:
 Available in: 3.1 2.7 3.HEAD 2.HEAD 3.0 2.6

 To boost the performance of pipelined requests to closer
 match that of a non-proxied environment Squid can try to fetch
 up to two requests in parallel from a pipeline.
 http://www.squid-cache.org/Doc/config/pipeline_prefetch/

 To somebody without knowledge this looks as if any
 version in Debian should be able to handle a pipeline -
 otherwise this setting wouldn't make much sense…

 The default value for the APT option above is btw 10, and in apt
 #413324 we have a claim that squid works well with a value of 5
 or even 8 -- so maybe it is just a bug in handling too many
 pipelined requests? Or something comparable to what happened
 in #541428 regarding lighttpd and pipelining (timeout)?
 (I am just shooting in the dark.)


 Also, when we talk here about pipelines and their usage,
 keep in mind that APT's HTTP usage is special compared to
 an implementation and usage in a browser:
 we have a trust chain available, so we should be on the safe
 side security-wise; the number of Debian archives is limited
 and most of them should be on a sane webserver
 (if not, I would not have much trust in the archive…); and
 especially on apt-get update we have either a lot of cache
 hits (file has not changed) or a lot of very small files (Release,
 Index and maybe pdiff) to transfer. New package updates come
 from the same archive most of the time, and most packages are
 relatively small, too, but having an upgrade including at least
 500 packages is relatively common…

Do I hear an idea in there? Use pipelining for the small stuff (header
check, Release, Release.gpg, pdiff, dsc) but then default to depth 1 for
the big files (Packages.gz, debs, orig.tar.gz).
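
In code the split might look roughly like this (a toy sketch; the suffix
list and the depth_for() helper are hypothetical, and real archive paths
would need more care):

    # Pipeline the small metadata fetches; fetch big payloads one at a time.
    SMALL_SUFFIXES = ("Release", "Release.gpg", ".pdiff", ".dsc")

    def depth_for(path):
        # str.endswith accepts a tuple of suffixes
        return 10 if path.endswith(SMALL_SUFFIXES) else 1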

 On the other hand, APT's HTTP client isn't as nice as it could be, in
 the sense that it could fall back to non-pipelined mode, retry, or whatever
 (and I wouldn't be too surprised if this turned out to be an APT bug).
 As we all know, APT is a Debian-native tool and the base of a whole
 bunch of other stuff, so besides ranting about its shortcomings we
 could also work on patches, as the people with enough knowledge
 to do this seem to be already around in this thread.


 Thanks in advance and best regards,

 David Kalnischkies


 P.S. Sorry, Luigi Gangitano, for cc'ing, but I don't know whether you
 follow the thread, and I mentioned squid too often in this mail not to
 direct it your way.

MfG
Goswin





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-19 Thread Goswin von Brederlow
Daniel Burrows dburr...@debian.org writes:

 On Wed, May 19, 2010 at 03:28:00PM +0200, Bjørn Mork bj...@mork.no was 
 heard to say:
 Pierre Habouzit madco...@madism.org writes:
  This is bullshit. It's *VERY* easy to support pipelining: parse one
  request at a time, and until you're done with a given request, you just
  stop watching the socket/file-descriptor for reading (IOW you let the
  consecutive request live in the kernel buffers).
 
 Yeah, you make it sound easy.  I'm sure those writing proxy servers are
 just stupid.

   There are lots of easy features that my own software doesn't support,
 and I don't think I'm particularly stupid.  It just means I have less
 than an infinite amount of time.

   Daniel

That isn't a problem. The problem arises when you start to support the
feature but mess it up.

Processing pipelined requests one by one is easy but slow. Processing
them in parallel is hard but faster. But please do make sure it works
before you do. :)

MfG
Goswin





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-18 Thread Goswin von Brederlow
Marvin Renich m...@renich.org writes:

 * Robert Collins robe...@robertcollins.net [100517 17:42]:
 Due to the widespread usage of intercepting proxies, it's very hard, if
 not impossible, to determine if a proxy is in use. It's unwise, at
 best, to assume that no proxy configured == no proxy processing your
 traffic :(.
 
 -Rob

 IANADD, but if I had filed bug #56, I would have selected severity
 critical (makes unrelated software on the system break), and similarly
 for any other transparent proxy in Debian that fails to work
 transparently.

 The proxy may not be on a Debian system, but wouldn't the following
 logic in apt catch enough of the problem cases to be a useful
 workaround:

 If Acquire::http::Pipeline-Depth is not set and Acquire::http::Proxy
 is set, use 0 for Pipeline-Depth; use current behavior
 otherwise.

 Documenting this problem somewhere that an admin would look when seeing
 the offending Hash sum mismatch message would also help.  Turning off
 pipelining by default for everybody seems like the wrong solution to
 this problem.

 ...Marvin

Maybe apt should check size and try to resume the download. I'm assuming
it gets the right header but then the data ends prematurely?

Could you try to capture a tcpdump of the actual traffic between apt and
the proxy?

MfG
Goswin





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-18 Thread Marvin Renich
* Robert Collins robe...@robertcollins.net [100517 22:03]:
 Given that pipelining is broken by design, that the HTTP WG has
 increased the number of concurrent connections that are recommended,
 and removed the upper limit - no. I don't think that disabling
 pipelining hurts anyone - just use a couple more concurrent
 connections.
 
 -Rob

I was unaware that pipelining was considered broken by design, so I
was trying to say that an easy way for apt to choose between pipelining
and no pipelining (when it isn't specifically set by the admin), handling
most of the cases, would be better than disabling by default a feature
that is beneficial to many.

If pipelining is considered broken, and concurrency is preferred, I'm
perfectly happy with that.

...Marvin





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-18 Thread Marvin Renich
* Goswin von Brederlow goswin-...@web.de [100518 02:53]:
 Marvin Renich m...@renich.org writes:
  Documenting this problem somewhere that an admin would look when seeing
  the offending Hash sum mismatch message would also help.  Turning off
  pipelining by default for everybody seems like the wrong solution to
  this problem.
 
  ...Marvin
 
 Maybe apt should check size and try to resume the download. I'm assuming
 it gets the right header but then the data ends prematurely?
 
 Could you try to capture a tcpdump of the actual traffic between apt and
 the proxy?
 
 MfG
 Goswin

Fortunately, I am not behind a proxy, so I can't check this.  :-)

...Marvin





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-18 Thread Mike Hommey
On Mon, May 17, 2010 at 09:54:28PM +0200, Florian Weimer wrote:
 * Petter Reinholdtsen:
 
  I am bothered by URL: http://bugs.debian.org/56 , and the fact
  that apt(-get,itude) do not work with Squid as a proxy.  I would very
  much like to have apt work out of the box with Squid in Squeeze.  To
  fix it one can either change Squid to work with pipelining the way APT
  uses it, which the Squid maintainer and developers, according to the BTS
  report, are unlikely to implement any time soon, or change the default
  setting in apt for Acquire::http::Pipeline-Depth to zero (0).  I've
  added a file like this in /etc/apt/apt.conf.d/ to solve it locally:

    Acquire::http::Pipeline-Depth 0;
 
 Maybe it's safe to use pipelining when a proxy is not used?  This is
 how things have been implemented in browsers, IIRC.

Mozilla browsers have had pipelining disabled for years, because the
reality is that a whole lot of servers don't implement it properly, if at
all.

Mike





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-18 Thread Luigi Gangitano
On 17 May 2010, at 09:02, Goswin von Brederlow wrote:
 Given that squid already has a patch, although only for newer versions,
 this really seems to be a squid bug. As such it should be fixed in
 squid as not only apt might trigger the problem.

Goswin, can you please point me to the patch you mention?

 That said, setting the Pipeline-Depth to 0 as default, or when a proxy is
 configured, might be advisable. Adding an apt.conf.d snippet to the stable
 apt should be a trivial change - much simpler than fixing squid itself.
 
 And in testing/unstable one can fix it properly or update squid to 3.0.

I assume that squid3 is not affected by this bug; do you confirm this? If the 
patch you mentioned is related to squid3, a backport may or may not be 
feasible, but we should try. :-)

Regards,

L

--
Luigi Gangitano -- lu...@debian.org -- gangit...@lugroma3.org
GPG: 1024D/924C0C26: 12F8 9C03 89D3 DB4A 9972  C24A F19B A618 924C 0C26





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-18 Thread brian m. carlson
On Tue, May 18, 2010 at 02:09:13PM +0200, Mike Hommey wrote:
 Mozilla browsers have had pipelining disabled for years, because
 reality is that a whole lot of servers don't implement it properly if at
 all.

Actually, I've had pipelining enabled for some time, and it works just
fine for me.  I have had zero problems with it.  And this is with
Iceweasel.

-- 
brian m. carlson / brian with sandals: Houston, Texas, US
+1 832 623 2791 | http://www.crustytoothpaste.net/~bmc | My opinion only
OpenPGP: RSA v4 4096b: 88AC E9B2 9196 305B A994 7552 F1BA 225C 0223 B187




Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-18 Thread Goswin von Brederlow
Luigi Gangitano lu...@debian.org writes:

 On 17 May 2010, at 09:02, Goswin von Brederlow wrote:
 Given that squid already has a patch, although only for newer versions,
 this really seems to be a squid bug. As such it should be fixed in
 squid as not only apt might trigger the problem.

 Goswin, can you please point me to the patch you mention?

 That said, setting the Pipeline-Depth to 0 as default, or when a proxy is
 configured, might be advisable. Adding an apt.conf.d snippet to the stable
 apt should be a trivial change - much simpler than fixing squid itself.
 
 And in testing/unstable one can fix it properly or update squid to 3.0.

 I assume that squid3 is not affected by this bug; do you confirm this? If the 
 patch you mentioned is related to squid3, a backport may or may not be 
 feasible, but we should try. :-)

 Regards,

 L

It was mentioned in an earlier mail that the issue was fixed in squid 3
but the patch doesn't apply to 2.x. No idea where that patch is, check
the previous mails.

MfG
Goswin





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-18 Thread Roger Lynn
On 18/05/10 03:10, Robert Collins wrote:
 Given that pipelining is broken by design, that the HTTP WG has
 increased the number of concurrent connections that are recommended,
 and removed the upper limit - no. I don't think that disabling
 pipelining hurts anyone - just use a couple more concurrent
 connections.

But apt has been using pipelining for years. Why has this only just
become a problem? Not all proxies dislike pipelining - Polipo is an
example of one that works well with it. It also works with at least some
proprietary/commercial proxies. And if transparent proxies can't
cope with pipelining then they're broken and not very transparent. I
think if this were a significant problem it would have been noticed a
long time ago. However, disabling pipelining if a proxy is configured is
probably a good idea to ensure compatibility, and is commonly done in
browsers, but it's not necessary for direct connections.

Roger





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-18 Thread Robert Collins
Well, I don't know why something has 'suddenly' become a problem: it's
been a known issue for years. The HTTP smuggling
[http://www.watchfire.com/resources/HTTP-Request-Smuggling.pdf]
attacks made that very obvious 5 years ago now.

http://en.wikipedia.org/wiki/HTTP_pipelining has a decent overview.

It's nice and interesting that some recent software has it on, but that
is generally because the authors don't realise how broken it is,
IMNSHO :).

-Rob





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-18 Thread Brian May
On 19 May 2010 13:51, Robert Collins robe...@robertcollins.net wrote:
 Well, I don't know why something has 'suddenly' become a problem: it's
 been a known issue for years. The HTTP smuggling
 [http://www.watchfire.com/resources/HTTP-Request-Smuggling.pdf]
 attacks made that very obvious 5 years ago now.

From my Internet connection, that link seems to be a redirect to
http://www-01.ibm.com/software/rational/offerings/websecurity/, which
doesn't say anything about http security issues.

 http://en.wikipedia.org/wiki/HTTP_pipelining has a decent overview.

I cannot see anything about brokenness of HTTP pipelining here... Did
I miss something?
-- 
Brian May br...@microcomaustralia.com.au





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-18 Thread Robert Collins
Bah, link staleness.

http://www.cgisecurity.com/lib/HTTP-Request-Smuggling.pdf just worked for me.

Also, I realise that there may be a disconnect here: squid *shouldn't*
break if a client attempts to pipeline through it - if it does, that's a
bug to be fixed; squid just will not read the second request until the
first one is completed.

-Rob





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-18 Thread Petter Reinholdtsen

[Roger Lynn]
 But apt has been using pipelining for years. Why has this only just
 become a problem?

It has been a problem in Debian Edu for years.  Just recently I
figured out the cause and a workaround.

Happy hacking,
-- 
Petter Reinholdtsen





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-17 Thread Goswin von Brederlow
Petter Reinholdtsen p...@hungry.com writes:

 I am bothered by URL: http://bugs.debian.org/56 , and the fact
 that apt(-get,itude) do not work with Squid as a proxy.  I would very
 much like to have apt work out of the box with Squid in Squeeze.  To
 fix it one can either change Squid to work with pipelining the way APT
 uses it, which the Squid maintainer and developers, according to the BTS
 report, are unlikely to implement any time soon, or change the default
 setting in apt for Acquire::http::Pipeline-Depth to zero (0).  I've
 added a file like this in /etc/apt/apt.conf.d/ to solve it locally:

   Acquire::http::Pipeline-Depth 0;

 My question to all of you is simple.  Should the APT default be
 changed or Squid be changed?  Should the bug report be reassigned to
 apt or stay as a bug with Squid?

 Happy hacking,

Given that squid already has a patch, although only for newer versions,
this really seems to be a squid bug. As such it should be fixed in
squid as not only apt might trigger the problem.

That said, setting the Pipeline-Depth to 0 as default, or when a proxy is
configured, might be advisable. Adding an apt.conf.d snippet to the stable
apt should be a trivial change - much simpler than fixing squid itself.

And in testing/unstable one can fix it properly or update squid to 3.0.

My 2c,
   Goswin





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-17 Thread Florian Weimer
* Petter Reinholdtsen:

 I am bothered by URL: http://bugs.debian.org/56 , and the fact
 that apt(-get,itude) do not work with Squid as a proxy.  I would very
 much like to have apt work out of the box with Squid in Squeeze.  To
 fix it one can either change Squid to work with pipelining the way APT
 uses it, which the Squid maintainer and developers, according to the BTS
 report, are unlikely to implement any time soon, or change the default
 setting in apt for Acquire::http::Pipeline-Depth to zero (0).  I've
 added a file like this in /etc/apt/apt.conf.d/ to solve it locally:

   Acquire::http::Pipeline-Depth 0;

Maybe it's safe to use pipelining when a proxy is not used?  This is
how things have been implemented in browsers, IIRC.

On the other hand, you probably still need somewhat complex retry
logic, but I guess you need that anyway (if the first download fails,
retry without pipelining, etc.).





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-17 Thread Robert Collins
Due to the widespread usage of intercepting proxies, it's very hard, if
not impossible, to determine if a proxy is in use. It's unwise, at
best, to assume that no proxy configured == no proxy processing your
traffic :(.

-Rob





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-17 Thread Marvin Renich
* Robert Collins robe...@robertcollins.net [100517 17:42]:
 Due to the widespread usage of intercepting proxies, it's very hard, if
 not impossible, to determine if a proxy is in use. It's unwise, at
 best, to assume that no proxy configured == no proxy processing your
 traffic :(.
 
 -Rob

IANADD, but if I had filed bug #56, I would have selected severity
critical (makes unrelated software on the system break), and similarly
for any other transparent proxy in Debian that fails to work
transparently.

The proxy may not be on a Debian system, but wouldn't the following
logic in apt catch enough of the problem cases to be a useful
workaround:

If Acquire::http::Pipeline-Depth is not set and Acquire::http::Proxy
is set, use 0 for Pipeline-Depth; use current behavior
otherwise.
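
apt.conf has no conditionals, so the proxied-by-default logic above would
have to live in apt's own code; as a manual stand-in, an admin who knows a
proxy is on the path can drop in a snippet like this (the file name and
proxy URL are illustrative only; the option names are the real ones
discussed in this thread):

    // /etc/apt/apt.conf.d/99proxy-workaround
    Acquire::http::Proxy "http://proxy.example:3128/";
    // avoid pipelining through a proxy that may not cope with it
    Acquire::http::Pipeline-Depth "0";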

Documenting this problem somewhere that an admin would look when seeing
the offending Hash sum mismatch message would also help.  Turning off
pipelining by default for everybody seems like the wrong solution to
this problem.

...Marvin





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-17 Thread Robert Collins
Given that pipelining is broken by design, that the HTTP WG has
increased the number of concurrent connections that are recommended,
and removed the upper limit - no. I don't think that disabling
pipelining hurts anyone - just use a couple more concurrent
connections.

-Rob





Re: APT do not work with Squid as a proxy because of pipelining default

2010-05-17 Thread Frank Lin PIAT
On Tue, 2010-05-18 at 14:02 +1200, Robert Collins wrote:
 Given that pipelining is broken by design, that the HTTP WG has
 increased the number of concurrent connections that are recommended,
 and removed the upper limit - no. I don't think that disabling
 pipelining hurts anyone - just use a couple more concurrent
 connections.

Lots of [new] users are using Debian in non-Debian infrastructures, which
may use an unpatched squid. They would get a bad initial perception of
Debian if it didn't work with a standard setup.

My 2cents,

Franklin

