On 2/11/2012 12:17 a.m., Heinrich Hirtzel wrote:
You are missing the intercept flag on https_port. That is what tells
Squid how to interpret the URL and TCP-layer differences between
port-80 and port-443 style traffic.
I've already tried that (with 3.1.20, since 3.2.3 isn't working here at all), but the 
client web browser just gets "net::ERR_TOO_MANY_REDIRECTS" (Google Chrome)

Aha! That is a completely separate problem.

Chrome is getting a 3xx redirect message pointing it at URL 'A'; when that gets requested it gets redirected to URL 'B', which redirects somewhere else ... which ends up redirecting back to URL 'A' - maybe directly from B->A.


Consider:
 1) Browser makes encrypted request to https://example.com

 2a) firewall intercepts into Squid.
 2b) Squid decrypts the traffic

 3a) something happens to force the request via a parent proxy
 3b) Squid obeys and sends *unencrypted* request to parent proxy

 4a) parent proxy sends the *HTTP* request it received to example.com, fetching path "/"
 4b) example.com responds indicating that HTTP*S* is mandatory: use https://example.com/

 5) browser receives redirect instruction to repeat the request using URL https://example.com/

 ..... Oops. Back to Step 1!

 1) Browser makes encrypted request to https://example.com


... some time later the browser counts too many 3xx statuses in a row (repeats of step 5) and Chrome displays "net::ERR_TOO_MANY_REDIRECTS" at you.
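
Step 4b is easy to see in isolation. A hedged sketch using curl, with example.com standing in for any site that enforces HTTPS (real enforcing sites answer along these lines):

$ curl -I http://example.com/
HTTP/1.1 301 Moved Permanently
Location: https://example.com/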


The problem is created by step 3a causing step 3b to happen. Which might be due to:
 * using a URL-rewriter to bypass the Squid ORIGINAL_DST protections explicitly set up to prevent this type of loop.
 * using a version of Squid from before the loop protection was added (some 3.1 series and some 3.2 betas).
 * someone patching the loop protection and parent-block out of your Squid.



What are all the compilation options from -v? (curious)
try the latest stable at:
http://www.squid-cache.org/Versions/v3/3.2/squid-3.2.3.tar.bz2
$ /usr/local/squid/sbin/squid -v
Squid Cache: Version 3.2.3-20121031-r11695
configure options:  '--enable-ssl' '--enable-ssl-crtd'

But 3.2.3 isn't working here at all (using the same config as for 3.1.20):

<timestamp> kid1| Intercept.cc(127) NetfilterInterception:  NF 
getsockopt(SO_ORIGINAL_DST) failed on local=10.0.1.1:3128 remote=10.0.1.1:51285 FD 10 
flags=33: (92) Protocol not available
<timestamp>| BUG: Orphan Comm::Connection: local=10.0.1.1:3128 
remote=10.0.1.1:51285 FD 10 flags=33
<timestamp>| NOTE: 1 Orphans since last started.

Aha! (again)
 There are two bugs here.


Bug 1 - Your NAT is broken.

As you can see the browser running on 10.0.1.1 port 51285 is connecting to a web server at 10.0.1.1 port 3128 ??

---> Whatever domain you requested is very much NOT running a webserver at 10.0.1.1:3128 though. Older Squid would ignore that lie and use the Host header. 3.2 checks the Host header for accuracy and requires working NAT results to do so, AND requires the NAT results to make the outbound connection to the right server after processing the request. So this is not an acceptable error.
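
The usual way to avoid that is to keep the forward-proxy port and the intercept port separate, and to do the NAT on the Squid box itself. A rough sketch with assumed port numbers and interface name (placeholders, not your exact config):

# squid.conf
http_port 3128               # forward-proxy port for explicitly configured browsers
http_port 3129 intercept     # receives only NAT'd port-80 traffic

# on the Squid box, eth0 assumed to face the clients:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3129

Clients must never be pointed directly at the intercept port; a direct connection to it has no NAT record, which is exactly what makes the getsockopt(SO_ORIGINAL_DST) call fail as in your log.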

Bug 2 - the connection is getting orphaned.

Connections are always supposed to be closed properly. Orphans are like memory "leaks", but for sockets; these were caught by Squid and cleaned up semi-safely just before any actual leakage occurred.

Is the timestamp on the orphan notice the same as on the NAT failure? And does one always come after a NAT failure?



what about: sslcrtd_program ?
Never heard of that option; I'm just following the Squid Wiki and can't find it 
mentioned there:
http://wiki.squid-cache.org/Features/HTTPS
http://wiki.squid-cache.org/Features/SslBump

It is a different feature extending ssl-bump onto https_port.
http://wiki.squid-cache.org/Features/DynamicSslCert
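
For reference, a minimal sketch of the directives that feature adds (paths assume the /usr/local/squid prefix from your build above; treat them as placeholders):

# squid.conf
sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /usr/local/squid/var/lib/ssl_db -M 4MB
sslcrtd_children 5

# initialise the certificate database once, before starting Squid:
$ /usr/local/squid/libexec/ssl_crtd -c -s /usr/local/squid/var/lib/ssl_db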


if you need to intercept all https connections (which seems to be your case) I
would suggest you try the 3.3 beta.
According to the wiki ssl termination/interception should work since 3.1. Isn't 
that correct?

SSL termination has worked since squid-2.5.

SSL interception has never worked, and "never will" hopefully. But you can use it since 2.5. ( :-) how's them for apples? )


For HTTPS interception ssl-crtd is better; the server-first feature and
certificate-mimic are even better.
Squid-3.3, which has these, is needed for anything close to useful HTTPS
port 443 interception.
So SSL interception is not working in any version prior to 3.3? Just asking 
because I've seen several posts on the internet where people managed to get SSL 
interception running with 3.1 and 3.2. I'm totally confused... :-(

The answer is "kind of".

Any Squid built with SSL support can intercept HTTPS using plain old https_port with certificate details (see the sketch after this list), on condition:

* that your clients ignore and accept the errors about the certificate domain not matching the one they visited. For example your cert says example.com, so the client sees the name "example.com" signing traffic for google.com and yahoo.com and spam.com and their bank, etc. On *every single* object they load via https://.

* that the website in question is not checking that HTTPS vs HTTP requests match the port type they arrive on. Big names like Google, YouTube, Facebook and the banks are validating the received protocol to break exactly this kind of practice. The https:// redirect loops some people have been receiving since ~3 years back are the user-visible result of those simple security checks being done by the web server.

* that you don't care that Squid may send users' traffic completely unencrypted to the Internet servers they are contacting. (See above; the loops would not happen if the outbound traffic were encrypted to the server.)
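
For concreteness, the "plain old https_port" setup amounts to something like this sketch (cert paths and port number are assumptions; the flag is spelled "transparent" before 3.2 and "intercept" from 3.2 on):

# squid.conf
https_port 3130 intercept cert=/usr/local/squid/etc/proxy.pem key=/usr/local/squid/etc/proxy.key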



3.1 - can intercept HTTPS using ssl-bump, with the same conditions as above. *Except*:
 * it is easier to configure
 * the ssl-bump flag makes Squid handle the upstream security a lot better, re-encrypting outbound traffic and blocking unencrypted relay through peers


3.2 - same as 3.1. *Plus* goes as far as to ensure the actual server the client was contacting (ORIGINAL_DST) is the one Squid relays the request to, even for non-SSL traffic, preventing load balancers or multi-IP sites from causing loops and certificate issues.
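
In 3.2 spelling the ssl-bump interception setup looks roughly like this (assumed paths and port number):

# squid.conf
https_port 3129 intercept ssl-bump cert=/usr/local/squid/etc/proxyCA.pem key=/usr/local/squid/etc/proxyCA.key
ssl_bump allow all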


3.3 - same as 3.2. *Plus*:
 * can ensure a unique certificate is generated for each requested domain,
 * generated certs contain the same details as the real server's, including important flaws (see the sketch below).
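
A sketch of what those additions look like in 3.3 squid.conf, on top of the sslcrtd_program lines shown earlier (paths are assumptions):

# squid.conf (3.3)
https_port 3129 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/usr/local/squid/etc/proxyCA.pem key=/usr/local/squid/etc/proxyCA.key
ssl_bump server-first all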


So they all "work" ... with greatly varying degrees of pain to you, your users, and those of us here whom you come calling on for help when it goes bad. We just recommend the one with the least pain all around.

Amos
