Re: [squid-users] forward_max_tries 1 has no effect

2017-11-27 Thread Ivan Larionov
I think I found part of the config which triggers the retry.

==

# Check for newproxy request header
acl newproxy_acl req_header x-use-newproxy -i true

# proxy
cache_peer 127.0.0.1 parent 18070 0 no-query no-digest no-netdb-exchange name=proxy
cache_peer_access proxy deny newproxy_acl

# newproxy
cache_peer 127.0.0.1 parent 18079 0 no-query no-digest no-netdb-exchange name=newproxy
cache_peer_access newproxy allow newproxy_acl
cache_peer_access newproxy deny all

never_direct allow all

==

I see retries only when the Squid config has 2 parents. If I comment out
everything related to "newproxy" I can't reproduce this behavior anymore.

BUT

My test request could only be forwarded to "proxy" since it doesn't have
the "x-use-newproxy" header.

Which results in the following:

- squid has 2 parents
- the request can only be forwarded to the 1st parent due to the ACL
- the parent doesn't respond and closes the TCP connection
- squid retries with the same parent, ignoring "forward_max_tries 1"

I also want to clarify some facts which make me think that it could be a
bug.

1. There are no issues with the TCP connection. Squid successfully connects
to the parent and sends an HTTP request.
2. The parent ACKs the HTTP request and then correctly closes the connection
with FIN,ACK after 40 seconds. There are no TCP timeouts/reconnects
involved. The only issue here is that the parent doesn't send any HTTP
response.
3. forward_max_tries is set to 1 to make sure Squid won't retry. The parent
handles the retry itself, so we don't want Squid to make any additional
retries.
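
For reference, these are the retry-related directives as I understand them,
as a minimal sketch in Squid 3.5 syntax (retry_on_error is shown only on the
assumption that it is relevant here; off is already its default):

==

# allow only one forwarding attempt per request
forward_max_tries 1

# do not re-forward after certain error replies (off is the default)
retry_on_error off

==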

Also see
https://wiki.squid-cache.org/SquidFaq/InnerWorkings#When_does_Squid_re-forward_a_client_request.3F

> Squid does not try to re-forward a request if at least one of the
> following conditions is true:
> …
> The number of forwarding attempts exceeded forward_max_tries. For
> example, if you set forward_max_tries to 1 (one), then no requests will
> be re-forwarded.
> …
> Squid has no alternative destinations to try. Please note that
> alternative destinations may include multiple next hop IP addresses and
> multiple peers.
> …

Another part of the config which may or may not be related (this is to
increase the number of local ports available):

# 33% of traffic per local IP
acl third random 1/3
acl half random 1/2
tcp_outgoing_address 127.0.0.2 third
tcp_outgoing_address 127.0.0.3 half
tcp_outgoing_address 127.0.0.4
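
A quick worked check of that split, assuming the random ACLs are evaluated
top-down and each tcp_outgoing_address line only sees the requests not
claimed by the lines above it:

==

P(127.0.0.2) = 1/3                          = 33%
P(127.0.0.3) = (1 - 1/3) * 1/2       = 1/3  = 33%
P(127.0.0.4) = (1 - 1/3) * (1 - 1/2) = 1/3  = 33%

==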

Logs:

ALL,2 (includes 44,2):

2017/11/27 15:53:40.542| 5,2| TcpAcceptor.cc(220) doAccept: New connection on FD 15
2017/11/27 15:53:40.542| 5,2| TcpAcceptor.cc(295) acceptNext: connection on local=0.0.0.0:3128 remote=[::] FD 15 flags=9
2017/11/27 15:53:40.543| 11,2| client_side.cc(2372) parseHttpRequest: HTTP Client local=127.0.0.1:3128 remote=127.0.0.1:53798 FD 45 flags=1
2017/11/27 15:53:40.543| 11,2| client_side.cc(2373) parseHttpRequest: HTTP Client REQUEST:
-
GET http://HOST:12345/ HTTP/1.1
Host: HOST:12345
User-Agent: curl/7.51.0
Accept: */*
Proxy-Connection: Keep-Alive


--
2017/11/27 15:53:40.543| 85,2| client_side_request.cc(745) clientAccessCheckDone: The request GET http://HOST:12345/ is ALLOWED; last ACL checked: localhost
2017/11/27 15:53:40.543| 85,2| client_side_request.cc(721) clientAccessCheck2: No adapted_http_access configuration. default: ALLOW
2017/11/27 15:53:40.543| 85,2| client_side_request.cc(745) clientAccessCheckDone: The request GET http://HOST:12345/ is ALLOWED; last ACL checked: localhost
2017/11/27 15:53:40.543| 17,2| FwdState.cc(133) FwdState: Forwarding client request local=127.0.0.1:3128 remote=127.0.0.1:53798 FD 45 flags=1, url=http://HOST:12345/
2017/11/27 15:53:40.543| 44,2| peer_select.cc(258) peerSelectDnsPaths: Find IP destination for: http://HOST:12345/' via 127.0.0.1
2017/11/27 15:53:40.543| 44,2| peer_select.cc(280) peerSelectDnsPaths: Found sources for 'http://HOST:12345/'
2017/11/27 15:53:40.543| 44,2| peer_select.cc(281) peerSelectDnsPaths:  always_direct = DENIED
2017/11/27 15:53:40.543| 44,2| peer_select.cc(282) peerSelectDnsPaths: never_direct = ALLOWED
2017/11/27 15:53:40.543| 44,2| peer_select.cc(292) peerSelectDnsPaths: cache_peer = local=127.0.0.3 remote=127.0.0.1:18070 flags=1
2017/11/27 15:53:40.543| 44,2| peer_select.cc(295) peerSelectDnsPaths:   timedout = 0
2017/11/27 15:53:40.543| 11,2| http.cc(2229) sendRequest: HTTP Server local=127.0.0.3:57091 remote=127.0.0.1:18070 FD 40 flags=1
2017/11/27 15:53:40.543| 11,2| http.cc(2230) sendRequest: HTTP Server REQUEST:
-
GET http://HOST:12345/ HTTP/1.1
User-Agent: curl/7.51.0
Accept: */*
Host: HOST:12345
Cache-Control: max-age=259200
Connection: keep-alive


--

[SKIPPED 40 seconds until parent closes TCP connection with FIN,ACK]

2017/11/27 15:54:20.627| 11,2| http.cc(1299) continueAfterParsingHeader: WARNING: HTTP: Invalid Response: No object data received for http://HOST:12345/ AKA HOST/
2017/11/27 15:54:20.627| 17,2| FwdState.cc(655) handleUnregisteredServerEnd: self=0x3e31838*2 err=0x409b338 http://HOST:12345/
2017/11/27 15:54:20.627| 11,2| http.cc(2229) sendRequ

[squid-users] Transparent Squid

2017-11-27 Thread LINGYUN ZHAO
Dear Squid team,


I need Squid to act as a real 'transparent' proxy on Fedora, without
changing the TCP/IP 5-tuple. Is that possible?


The setup is as simple as: Client -- Fedora Server

The Squid version is 3.5.20. The key Squid configuration is below:

   http_port 0.0.0.0:3128 transparent

   acl localnet src 10.0.0.0/24

   http_access allow localnet

And I configured a NAT on Fedora.

   iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to 10.0.0.1:3128


When I run curl from the Client to the server, I find that the server
receives the traffic with Fedora's IP address and a different source port,
instead of the Client's IP address and original source port.
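
I have seen mentions that TPROXY can preserve the client IP (as I
understand it, no proxy can keep the original source port, since it
terminates the TCP connection). A sketch of that alternative as I
understand it, assuming kernel TPROXY support and a Squid built with
--enable-linux-netfilter:

   http_port 0.0.0.0:3129 tproxy

   # mangle table plus policy routing instead of nat/DNAT
   ip rule add fwmark 1 lookup 100
   ip route add local 0.0.0.0/0 dev lo table 100
   iptables -t mangle -A PREROUTING -i eth1 -p tcp --dport 80 \
       -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129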


Thanks a lot
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Working peek/splice no longer functioning on some sites

2017-11-27 Thread James Lay
On Sun, 2017-11-26 at 09:50 +0200, Alex K wrote:
> Perhaps an alternative is to peek only on step1:
> 
> acl step1 at_step SslBump1
> 
> ssl_bump peek step1
> acl allowed_https_sites ssl::server_name_regex
> "/opt/etc/squid/http_url.txt"
> ssl_bump splice allowed_https_sites
> ssl_bump terminate all
Hrmm...wouldn't that negate the ability to read the cert on step2?
In layman's terms I'm thinking:
"peek at step1"
"splice the ACL-allowed matched SNIs"
"peek at step2"
"splice the ACL-allowed matched certs"
"terminate the rest"
Would that work Amos?
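
In config terms I'm picturing something like this (just a sketch reusing
the ACL names from this thread; the per-step fall-through behavior is my
assumption):

acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl allowed_https_sites ssl::server_name_regex "/opt/etc/squid/http_url.txt"

# step1: peek at the ClientHello to learn the SNI
ssl_bump peek step1
# step2 and step3: splice as soon as the SNI or server cert name matches
ssl_bump splice allowed_https_sites
# step2: otherwise keep peeking to learn the server certificate
ssl_bump peek step2
# whatever still hasn't matched after peeking gets closed
ssl_bump terminate all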

> On Nov 25, 2017 14:46, "James Lay"  wrote:
> > On Sun, 2017-11-26 at 01:33 +1300, Amos Jeffries wrote:
> > > On 26/11/17 00:52, James Lay wrote:
> > > 
> > > > 
> > > > On Sat, 2017-11-25 at 23:48 +1300, Amos Jeffries wrote:
> > > > 
> > > > > 
> > > > > On 25/11/17 08:30, James Lay wrote:
> > > > > 
> > > > > > 
> > > > > > Topic says it...this setup has been working well for a long
> > > > > > time, but now there are some sites that are failing the TLS
> > > > > > handshake. Here's my setup:
> > > > > > 
> > > > > > acl localnet src 192.168.1.0/24
> > > > > > acl SSL_ports port 443
> > > > > > acl Safe_ports port 80
> > > > > > acl Safe_ports port 443
> > > > > > acl CONNECT method CONNECT
> > > > > > acl allowed_http_sites url_regex "/opt/etc/squid/http_url.txt"
> > > > > > http_access deny !Safe_ports
> > > > > > http_access deny CONNECT !SSL_Ports
> > > > > > http_access allow SSL_ports
> > > > > > http_access allow allowed_http_sites
> > > > > > http_access deny all
> > > > > > ssl_bump peek all
> > > > > > acl allowed_https_sites ssl::server_name_regex "/opt/etc/squid/http_url.txt"
> > > > > > ssl_bump splice allowed_https_sites
> > > > > > ssl_bump terminate all

> > > > > 
> > > > > Because you have "peek all" being performed the transaction MUST pass
> > > > > your regex patterns with both TLS SNI from the client *and* the server
> > > > > certificate SubjectName values. Either one not matching will perform
> > > > > that "terminate all" on the TLS handshake.
> > > > > 
> > > > > 

> > > > 
> > > > 
> > > > Thanks Amos...do you have a suggestion for changing this to match
> > > > one or the other instead of both?
> > > > 

> > > 
> > > 
> > > Doing the splice check before the peek should do that. The first of
> > > the server_names data sources to match will then splice, and
> > > non-matches fall through to either peek, or terminate if no more
> > > peeking is possible.
> > > 
> > > Amos
> > > 

> > Perfect..I've modded my lines with:
> > 
> > acl broken_https_sites ssl::server_name_regex "/opt/etc/squid/broken_url.txt"
> > ssl_bump splice broken_https_sites
> > ssl_bump peek all
> > acl allowed_https_sites ssl::server_name_regex "/opt/etc/squid/http_url.txt"
> > ssl_bump splice allowed_https_sites
> > ssl_bump terminate all

> > Hopefully that fixes these up.  Another site besides the one this
> > thread is about is fbcdn.net.  Again, these DID work, but something
> > within the last month has changed...guessing Facebook and Elder
> > Scrolls Online have added additional TLS security.  Thanks as always
> > Amos.
> > 
> > James

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] filtering HTTPS sites with transparent child Squid

2017-11-27 Thread Amos Jeffries

On 27/11/17 21:20, Stegner, Martin wrote:

Hi everyone,

I've set up Squid as a transparent child proxy. Every request is 
redirected to another Squid with the content-filtering add-on 
e2guardian. The problem I encounter is that the transparent child Squid 
only forwards IP addresses to e2guardian when HTTPS is used, so 
e2guardian can't filter anything, because it can only filter by URL.




A good demonstration of why calling a URL-rewrite helper a "content 
filter" is completely wrong.


Real content filters receive the actual content and can filter it. ICAP 
and eCAP exist for that and get passed the decrypted HTTPS messages (if 
any).
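
As a minimal sketch of that wiring (assuming a filter listening in ICAP 
mode on 127.0.0.1:1344; the service name and port here are illustrative):

icap_enable on
icap_service filter_req reqmod_precache icap://127.0.0.1:1344/reqmod bypass=off
adaptation_access filter_req allow all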





Here are some parts of the config:

http_port 3130

http_port 3128 intercept

https_port 3129 intercept ssl-bump cert=/etc/squid/cert/squid.pem

ssl_bump splice all  (if I use any other option than splice, 
nothing works for some reason)


Splice tells Squid to not decrypt. Thus no content access on those 
transactions.





cache_peer 172.16.0.252 parent 8080 0 default no-query no-digest

Is there any possibility that the transparent child Squid forwards the 
URL to the main Squid proxy?


It already is passing what it has. "The" URI of the message being 
processed happens to be an authority-form URI.



... and also:

* Squid requires a secure server connection to deliver decrypted content 
to. So the cache_peer needs to have the 'ssl' option and be accepting 
TLS proxy connections to receive anything other than the spliced traffic.
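
For example (a sketch only; the 8443 port and CA file path are 
placeholders, and Squid 3.5 spells the option 'ssl'):

cache_peer 172.16.0.252 parent 8443 0 ssl sslcafile=/etc/squid/peer-ca.pem default no-query no-digest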


* The CONNECT message has to complete and the TLS inside it be decrypted 
before any URL with the "https://" scheme is known. When bumping to do 
the decrypt, the above criteria apply.


* HTTP/1.1 connections can contain many pipelined requests. So there are 
potentially many https:// URLs involved inside the crypto - it is not 
possible to know in advance of decryption what those might be.



Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] filtering HTTPS sites with transparent child Squid

2017-11-27 Thread Stegner, Martin
Hi everyone,

I've set up Squid as a transparent child proxy. Every request is redirected 
to another Squid with the content-filtering add-on e2guardian. The problem I 
encounter is that the transparent child Squid only forwards IP addresses to 
e2guardian when HTTPS is used, so e2guardian can't filter anything, because 
it can only filter by URL.

Here are some parts of the config:

http_port 3130
http_port 3128 intercept
https_port 3129 intercept ssl-bump cert=/etc/squid/cert/squid.pem

ssl_bump splice all  (if I use any other option than splice, nothing 
works for some reason)

cache_peer 172.16.0.252 parent 8080 0 default no-query no-digest

Is there any possibility that the transparent child Squid forwards the URL 
to the main Squid proxy?

Thanks everyone
Martin

___

Stadt Coburg
Office for Information and Communications Technology
Head of School Systems Administration
Uferstraße 7, 96450 Coburg
Tel. 09561-89 1166
Fax 09561-89 61166
E-Mail: martin.steg...@coburg.de
http://www.coburg.de
http://schulen.coburg.de

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users