[squid-users] TCP_TUNNEL_ABORTED/200 with spliced windows updates

2018-05-14 Thread Ahmad, Sarfaraz
Hi Folks,

I am using WCCP and redirecting traffic to Squid for both HTTP/HTTPS 
interception.
In this setup, I have spliced most of the Windows Update services using SNI 
in Squid's ACLs. Yet even with the TCP tunnel, I am seeing failures with the 
messages below in the access log.
Why could the response time be so high, and is that causing the client to close 
the connection? When I take the proxy out of the picture (no redirection 
through WCCP), the updates run just fine.

1526277713.535 119962 10.240.167.24 TCP_TUNNEL_ABORTED/200 3898 CONNECT 
sls.update.microsoft.com:443 - 
ORIGINAL_DST/13.78.168.230 -
1526277833.538 119735 10.240.167.24 TCP_TUNNEL_ABORTED/200 3898 CONNECT 
sls.update.microsoft.com:443 - 
ORIGINAL_DST/52.229.171.202 -
1526277953.501 119808 10.240.167.24 TCP_TUNNEL_ABORTED/200 3898 CONNECT 
sls.update.microsoft.com:443 - 
ORIGINAL_DST/52.229.171.202 -
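For reference, a minimal splice-by-SNI configuration of the kind described might look like this (the ACL name and domain list here are illustrative, not my actual config):

```
# Illustrative sketch: splice Windows Update traffic by SNI, bump the rest
acl wu_sni ssl::server_name .update.microsoft.com .windowsupdate.com
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice wu_sni
ssl_bump bump all
```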

Any inputs are welcome.

Regards,
Sarfaraz

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] TCP_TUNNEL_ABORTED/200 with spliced windows updates

2018-05-15 Thread Ahmad, Sarfaraz
Thanks Amos.
Turns out it had nothing to do with the proxy but with differing MTUs on the 
networks. I now have a little better understanding of this amazing piece of 
software.
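For anyone hitting the same symptom: the TCP MSS for a given MTU is simply the MTU minus the 40 bytes of IPv4 and TCP headers, and a path-MTU problem can be probed by pinging with the don't-fragment bit set. A small sketch (host name illustrative):

```shell
# TCP MSS = MTU - 20 (IPv4 header) - 20 (TCP header)
mtu=1500
mss=$((mtu - 40))
echo "MTU $mtu -> MSS $mss"

# On a live network, probe the path MTU with the DF bit set, e.g.:
#   ping -M do -s $((mtu - 28)) sls.update.microsoft.com
# (payload 1472 + 8 ICMP header + 20 IP header = 1500 bytes on the wire)
```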

Sarfaraz


[squid-users] TCP FIN,ACK after ServerHelloDone with pcmag.com

2018-05-15 Thread Ahmad, Sarfaraz
Hi Folks,

I am using Squid as an HTTPS interception proxy. When I try to access 
https://www.pcmag.com (which is supposed to be bumped in my environment), I get 
"unable to forward request at this time", even though the website is perfectly 
accessible outside of the proxy.

A packet capture suggests that after ClientHello -> ServerHello -> 
ServerCertificate, ServerKeyExchange, ServerHelloDone, the remote server just 
sends a FIN,ACK packet, killing off the TCP connection. Nothing else looks out 
of the ordinary. (Without Squid, Firefox successfully opens the site and the 
negotiation is TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS 1.2.)

The only weird thing that stands out about that website is that its list of 
SubjectAlternativeNames is huge. Could this be a bug in Squid?

My TLS options in squid.conf:

tls_outgoing_options cafile=/etc/pki/tls/certs/ca-bundle.crt \
options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE \

cipher=HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS:!EXPORT:!DES:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA

https_port:

https_port 23129 intercept ssl-bump \
generate-host-certificates=on \
dynamic_cert_mem_cache_size=4MB \
cert=/etc/squid/InternetCA/InternetCA.pem \
key=/etc/squid/InternetCA/InternetCA.key \
tls-cafile=/etc/squid/InternetCA/InternetCA.chain.pem \
capath=/etc/pki/tls/certs/certs.d \
options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE \
tls-dh=prime256v1:/etc/squid/dhparam.pem

Please advise.

Regards,
Sarfaraz


Re: [squid-users] TCP FIN,ACK after ServerHelloDone with pcmag.com

2018-05-15 Thread Ahmad, Sarfaraz
I see a message similar to Marcus' in cache.log.

2018/05/16 00:20:10 kid1| ERROR: negotiating TLS on FD 77: error:14090086:SSL 
routines:ssl3_get_server_certificate:certificate verify failed (1/-1/0)

And I am running squid-4.0.24.

Sarfaraz

-Original Message-
From: squid-users  On Behalf Of 
Marcus Kool
Sent: Wednesday, May 16, 2018 1:41 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] TCP FIN,ACK after ServerHelloDone with pcmag.com

The proxies that I used for the test have Squid 4.0.22 and Squid 4.0.23.

Marcus


On 15/05/18 15:40, Amos Jeffries wrote:
> On 16/05/18 01:32, Marcus Kool wrote:
>> pcmag.com also does not load here, although my config parameters are 
>> slightly different.
>> The certificate is indeed huge...
>> Do you have
>>     ERROR: negotiating TLS on FD NNN: error:14090086:SSL 
>> routines:ssl3_get_server_certificate:certificate verify failed 
>> (1/-1/0) or other errors in cache.log ?
>>
>> Marcus
>>
> 
> Are these Squid-4.0.24 ? There is a regression[1] in the cafile= 
> parameter handling in the latest release.
>   <https://bugs.squid-cache.org/show_bug.cgi?id=4831>
> 
> Amos


Re: [squid-users] TCP FIN,ACK after ServerHelloDone with pcmag.com

2018-05-17 Thread Ahmad, Sarfaraz
Guys,

Any thoughts ?

Regards,
Sarfaraz



[squid-users] Cert download from AIA information succeeds yet Squid reports ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY

2018-05-21 Thread Ahmad, Sarfaraz
Hi,

I have set up Squid as an SSL MITM proxy.
I am also using the certificate download feature, with these lines in my 
squid.conf:

acl cert_fetch transaction_initiator certificate-fetching
http_access allow cert_fetch

Websites whose certificates provide AIA information using only the CA Issuers 
method work just fine.

But try this one: https://community.verizonwireless.com/welcome (this gets 
bumped in my setup).
Here the AIA information is provided using both the OCSP and CA Issuers methods.
From Squid's access log, I can tell that the certificate gets downloaded:

1526964147.929160 - TCP_MISS/200 1868 GET 
http://cacert.omniroot.com/vpssg142.crt - HIER_DIRECT/64.18.25.46 
application/x-x509-ca-cert

But Squid still reports:

(71) Protocol error (TLS code: X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY)
SSL Certificate error: certificate issuer (CA) not known: 
/C=NL/L=Amsterdam/O=Verizon Enterprise Solutions/OU=Cybertrust/CN=Verizon 
Public SureServer CA G14-SHA2

That is the only intermediate certificate needed in the chain.  Here: 
https://www.ssllabs.com/ssltest/analyze.html?d=community.verizonwireless.com&latest

When I download the intermediate certificate locally and try connecting to the 
remote server using openssl's -CAfile option, OpenSSL reports OK (0).

openssl s_client -connect 204.93.84.201:443 -showcerts -CAfile vpssg142.crt 
-servername community.verizon.com
>> Verify return code: 0 (ok)
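For reproducibility, the same -CAfile check can be exercised end to end against a throwaway CA generated locally (all file names and subject names below are illustrative):

```shell
cd "$(mktemp -d)"

# Create a throwaway CA (key + self-signed certificate)...
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Test CA" \
  -keyout ca.key -out ca.crt -days 1

# ...and a leaf certificate signed by that CA.
openssl req -newkey rsa:2048 -nodes -subj "/CN=leaf.example" \
  -keyout leaf.key -out leaf.csr
openssl x509 -req -in leaf.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out leaf.crt -days 1

# Verifying the leaf against the CA bundle should print "leaf.crt: OK".
openssl verify -CAfile ca.crt leaf.crt
```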

Regards,
Sarfaraz



[squid-users] GET requests remain in pending state with Squid and Kerberos auth

2018-05-23 Thread Ahmad, Sarfaraz
Hi,

I am using Squid as an explicit proxy (set in the browsers) and have 
configured it to authenticate all users with Kerberos.
Here are the relevant bits from squid.conf

auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth -r -s 
HTTP/proxytest1.mydomain@mydomain.com -k /etc/squid/HTTP.keytab
auth_param negotiate children 10
auth_param negotiate keep_alive on

I know I should be expecting 407s for new TCP connections, and pages do load a 
bit slower compared to Basic auth.
But that isn't the problem. The problem is that at times some of the web page 
resources (GET requests mostly) just hang there endlessly (Chrome just says 
"pending").
When I do a refresh, the browser loads that very same resource (say, a .js/.css 
file) just fine.

This is just a test setup, and I looked at the negotiate helper stats:

  ID  FD     PID  # Requests  # Replies  # Timed-out  Flags   Time  Offset  Request
   1  13   99482         551        551            0         0.014       0  (none)
   2  17   99485          74         74            0         0.016       0  (none)
   3  21  106324           4          4            0         0.025       0  (none)

I don't think they are the problem.
Any thoughts on what could be going on here? I don't have a way to reproduce 
this reliably so far, and it happens intermittently.

Regards,
Sarfaraz


Re: [squid-users] TCP FIN,ACK after ServerHelloDone with pcmag.com

2018-05-28 Thread Ahmad, Sarfaraz
I was wrong. It is not the remote server but Squid itself that is sending a 
FIN,ACK after ServerHelloDone.
At 8 seconds, ServerKeyExchange, ServerHelloDone is received by Squid. The 
cipher suite looks like ECDHE+RSA+SHA512 (Wireshark shows rsa_pkcs_sha512).
After about 60 more seconds (there is no activity on the wire during this 
period), Squid sends a FIN,ACK to the remote server, effectively closing the 
connection.
What debug_options should I be using for more relevant logging in cache.log? 
26,9 11,9 and 5,9 are not helping much.

I am adding a few log lines anyway.

2018/05/28 07:20:13.603 kid1| 5,4| AsyncCall.cc(26) AsyncCall: The AsyncCall 
clientLifetimeTimeout constructed, this=0x1c5e5f0 [call136782]
2018/05/28 07:20:13.603 kid1| 5,3| comm.cc(559) commSetConnTimeout: 
local=:3128 remote=:64774 FD 13 flags=1 timeout 86400
2018/05/28 07:20:13.603 kid1| 11,5| HttpRequest.cc(460) detailError: current 
error details: 12/-2
2018/05/28 07:20:13.603 kid1| 11,2| Stream.cc(266) sendStartOfMessage: HTTP 
Client local=:3128 remote=:64774 FD 13 flags=1
2018/05/28 07:20:13.603 kid1| 11,2| Stream.cc(267) sendStartOfMessage: HTTP 
Client REPLY:
-
HTTP/1.1 503 Service Unavailable

After splicing, the webpage opens just fine. That website (www.pcmag.com) has 
over 750 DNS names in its SAN field. The RFC does not set an upper bound on 
the number of DNS names you can have in there.
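To gauge how large a certificate's SAN list is, the DNS entries can be counted with openssl. The sketch below generates a local certificate with three SANs just to demonstrate the counting pipeline (the -addext flag assumes OpenSSL 1.1.1+; against a live site you would feed in the certificate from `openssl s_client -showcerts` instead):

```shell
cd "$(mktemp -d)"

# Self-signed certificate with three SAN entries (requires OpenSSL 1.1.1+ for -addext).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=test" -days 1 \
  -addext "subjectAltName=DNS:a.example,DNS:b.example,DNS:c.example" \
  -keyout key.pem -out cert.pem

# Count the DNS names listed in the certificate's SAN extension.
openssl x509 -in cert.pem -noout -text | grep -o 'DNS:' | wc -l
```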

Regards,
Sarfaraz 



[squid-users] Use additional details in SAN field to build ACLs

2018-06-18 Thread Ahmad, Sarfaraz
Hi,

Can I leverage other information available in a server certificate's SAN field 
to build my ACLs?
Here's a sample from the SAN field:
DNS Name=abc.example.com
IP Address=10.0.97.72

I haven't tried it, but would using ssl::server_name_regex to match 
IP=10.0.97.* work?
Also, I couldn't find a way to capture ssl::server_name (which Squid builds as 
described in the "acl" directive documentation) in the logs. The logformat 
directive has only some bits of SSL information.
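On the logging question: assuming the Squid-4 logformat codes, the client's SNI and the bumping decision can be captured with something along these lines (format name and field choice are illustrative; I have not verified every code against my build):

```
# Log timestamp, client IP, client SNI, ssl_bump decision, method, URL
logformat tlsinfo %ts.%03tu %>a %ssl::>sni %ssl::bump_mode %rm %ru
access_log /var/log/squid/tls.log tlsinfo
```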

Regards,
Sarfaraz


[squid-users] Ignore SSL error and splice by ssl::server_name at the same time

2018-06-20 Thread Ahmad, Sarfaraz
Hi,

I need to provide my clients access to an API service exposed on the internet. 
That API uses a certificate signed by a private CA.
I don't want to trust that private CA in my proxies (lest it gets abused and I 
end up trusting certificates in the proxy that I shouldn't. My clients would 
be unaware, since I am bumping all TLS connections unless explicitly 
configured otherwise.)
To avoid that, I tried ignoring the SSL validation error with the 
sslproxy_cert_error directive and then splicing the connection. But it's not 
working out.

The SubjectCN in that service's certificate is "kube-apiserver".

Ignore settings:

acl broken_kubernetes ssl::server_name kube-apiserver
sslproxy_cert_error allow broken_kubernetes
sslproxy_cert_error deny all


Splicing settings:

acl no_ssl_bump_kubernetes ssl::server_name kube-apiserver
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
ssl_bump peek step1
ssl_bump splice no_ssl_bump_kubernetes
ssl_bump bump all

The splicing settings are in the lower half of my config.
But I am still getting MITM'ed (bumped), and on the clients a "Not Trusted by 
MyCA" certificate is shown. Any ideas?

Regards,
Sarfaraz


Re: [squid-users] Ignore SSL error and splice by ssl::server_name at the same time

2018-06-20 Thread Ahmad, Sarfaraz
Forgot to add: remote IP addresses are not expected to remain constant, so I 
cannot build ACLs that way. ssl::server_name is the only other hope.



Re: [squid-users] Ignore SSL error and splice by ssl::server_name at the same time

2018-06-20 Thread Ahmad, Sarfaraz
I found the answer to my problem. The SNI and Subject CN were different in my 
case, and I was not peeking at step2 (meaning not looking at the server 
certificate); that is why my ACLs were ineffective.

Regards,
Sarfaraz



Re: [squid-users] Ignore SSL error and splice by ssl::server_name at the same time

2018-06-20 Thread Ahmad, Sarfaraz
Yes. As always, I appreciate the quick support this community provides. :)
Thank you, guys!

Regards,
Sarfaraz

-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Wednesday, June 20, 2018 6:53 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Ignore SSL error and splice by ssl::server_name at 
the same time

On 21/06/18 00:25, Ahmad, Sarfaraz wrote:
> I found the answer to my problem. The SNI and Subject CN were 
> different in my case and I was not peeking at step2 (meaning not 
> looking at the server certificate) that is why my ACLs were ineffective.
> 

Ah, excellent. Does that mean your problem is now resolved?

Amos


Re: [squid-users] Ignore SSL error and splice by ssl::server_name at the same time

2018-06-21 Thread Ahmad, Sarfaraz
I was wrong. There is no way to read the remote certificate and then decide 
whether to bump/splice the connection. 



[squid-users] Splice using SubjectCN/SAN from remote server certificate

2018-06-25 Thread Ahmad, Sarfaraz
I realize that, unlike other proprietary MITM appliances, Squid doesn't fiddle 
with the original ClientHello.
I think it follows that we cannot look at the SubjectCN/SAN in the remote 
server certificate and then decide whether we want to splice or bump (peeking 
at step 2 really restricts our options).
Is my understanding correct? Or is there a way to accomplish this?

Best Regards,
Sarfaraz


[squid-users] Trust a particular CA only for a limited domain

2018-06-26 Thread Ahmad, Sarfaraz
I need to provide my clients access to a service on the internet that is 
using a private CA.
I do not want to trust that CA outside the scope of that destination domain. 
(The thought is to not just blindly trust a random CA; rather, if we have to, 
we limit it to the particular domain.)
Can something like this be achieved without toying with Squid's code?

BR,
Sarfaraz



[squid-users] Make websockets work without splicing TLS connections

2018-07-03 Thread Ahmad, Sarfaraz
Guys,

Can you think of a way to make websockets work without splicing TLS 
connections?
I don't think on_unsupported_protocol would work here. Also, would 
on_unsupported_protocol work where the remote server abuses 443 for something 
other than TLS?
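For reference, the directive in question takes an action and an ACL; a minimal form, as I understand the Squid-4 documentation, would be:

```
# Blindly tunnel any traffic that does not parse as the expected protocol
on_unsupported_protocol tunnel all
```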

Regards,
Sarfaraz


Re: [squid-users] Make websockets work without splicing TLS connections

2018-07-03 Thread Ahmad, Sarfaraz
>> Squid does not understand WebSocket protocol (yet).
Is supporting WebSockets on the roadmap?



-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Tuesday, July 3, 2018 6:15 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Make websockets work without splicing TLS connections

On 04/07/18 00:19, Ahmad, Sarfaraz wrote:
> Guys,
> 
>  
> 
> Can you think of a way to make websockets work without splicing TLS 
> connections ?
> 

Squid does not understand WebSocket protocol (yet). So splicing is the only 
option once the traffic is already going into the proxy.

Squid does support enough WebSockets to trigger the HTTP failover mechanisms 
in WebSockets. But many clients and/or servers apparently do not actually 
support WebSockets properly, and break when that proxy compatibility mechanism 
is used.

WebSocket has its own port for native traffic. So letting that through your 
firewall should theoretically be enough.



> I don't think on_unsupported_protocol would work here. Also would

It may, but I agree that is not expected. WebSockets uses HTTP-like syntax in 
its first message to be compatible with HTTPS servers.


> on_unsupported_protocol work where the remote server abuses 443 for 
> something other than TLS ?

It should. Weird non-standard crap abusing port 443 is what that directive was 
designed to help work around.

Amos


[squid-users] Allow weaker ciphers for selected sites using an ACL?

2018-07-09 Thread Ahmad, Sarfaraz
Hi,

I have disabled weak ciphers through tls_outgoing_options . Is there a way to 
allow weak ciphers for selected websites, say, using an ACL and without 
splicing the connections?

Regards,
Sarfaraz



[squid-users] Cache ran out of descriptors due to ICAP service/TCP SYNs ?

2018-07-17 Thread Ahmad, Sarfaraz
Can somebody please explain what could have happened here?

First, Squid (4.0.25) encountered a URL > 8K bytes. I think this caused it to 
crash.

Jul 13 11:04:13  squid[9102]: parse URL too large (9697 bytes)
Jul 13 11:04:13  squid[29254]: Squid Parent: squid-1 process 9102 
exited due to signal 11 with status 0

squid-1 was respawned by the parent squid process.

Then I see ,
WARNING: ICAP Max-Connections limit exceeded for service 
icap://127.0.0.1:1344/reqmod. Open connections now: 16, including 0 idle 
persistent connections.
The newly spawned squid-1 crashes yet again, as seen below:
Jul 13 11:16:14  squid[29254]: Squid Parent: squid-1 process 10951 
exited due to signal 11 with status 0
The logs don't explain why squid-1 crashed here. The ICAP message above is 
just a warning.

squid-1 is respawned a second time and I see,

Jul 13 11:22:18  squid[13123]: ERROR: negotiating TLS on FD 1722: 
error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify 
failed (1/-1/0)
Jul 13 11:22:18  squid[13123]: Error negotiating SSL connection on FD 
1400: (104) Connection reset by peer
Jul 13 11:23:14  squid[13123]: Error negotiating SSL connection on FD 
1046: (104) Connection reset by peer
Jul 13 11:23:14  squid[13123]: Error negotiating SSL connection on FD 
582: (104) Connection reset by peer
Jul 13 11:23:15  squid[13123]: Error negotiating SSL connection on FD 
61: (104) Connection reset by peer
Jul 13 11:23:16  squid[13123]: Error negotiating SSL connection on FD 
1150: (104) Connection reset by peer
Jul 13 11:23:18  squid[13123]: Error negotiating SSL connection on FD 
1674: (104) Connection reset by peer
Jul 13 11:23:18  squid[13123]: Error negotiating SSL connection on FD 
1519: (104) Connection reset by peer
Jul 13 11:23:18  squid[13123]: Error negotiating SSL connection on FD 
1292: (104) Connection reset by peer
Jul 13 11:23:18  squid[13123]: Error negotiating SSL connection on FD 
1631: (104) Connection reset by peer
Jul 13 11:35:17  squid[13123]: Error negotiating SSL connection on FD 
1331: (104) Connection reset by peer
Jul 13 11:35:24  squid[13123]: WARNING! Your cache is running out of 
filedescriptors
Jul 13 11:35:56  squid[13123]: Error negotiating SSL connection on FD 
1867: (104) Connection reset by peer
Jul 13 11:35:58  squid[13123]: Error negotiating SSL connection on FD 
1715: (104) Connection reset by peer
Jul 13 11:35:59  squid[13123]: suspending ICAP service for too many 
failures
Jul 13 11:35:59  squid[13123]: optional ICAP service is suspended: 
icap://127.0.0.1:1344/reqmod [down,susp,fail11]
Jul 13 11:36:00  squid[13123]: comm_openex socket failure: (24) Too 
many open files
Jul 13 11:36:00  squid[13123]: comm_openex socket failure: (24) Too 
many open files
Jul 13 11:36:00  squid[13123]: comm_openex socket failure: (24) Too 
many open files
Jul 13 11:36:00  squid[13123]: comm_openex socket failure: (24) Too 
many open files
Jul 13 11:36:00  squid[13123]: comm_openex socket failure: (24) Too 
many open files


There is only one icap service defined as below :

icap_enable on
icap_service test_icap reqmod_precache icap://127.0.0.1:1344/reqmod bypass=on 
routing=off on-overload=wait

The open-file ulimit is set to 16k. How many TCP connections would Squid have 
opened that it exhausted 16k file descriptors? Some sort of file descriptor 
leak?
I am unable to connect the dots where an unresponsive ICAP service led to the 
proxy running out of file descriptors. Too many TCP SYN attempts?

When in working condition, this is what it looks like from cachemgr:

File descriptor usage for squid:
Maximum number of file descriptors:   16384
Largest file desc currently in use: 58
Number of file desc currently in use:   27
Files queued for open:   0
Available number of file descriptors: 16357
Reserved number of file descriptors:   100
Store Disk files open:   0
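The same numbers can be cross-checked from the OS side by counting entries in /proc/<pid>/fd (Linux-only). The sketch below counts the descriptors of the current shell; in practice you would substitute Squid's PID:

```shell
# Count open file descriptors of a process via /proc (Linux only).
# $$ is this shell's own PID; for Squid use something like $(pgrep -o squid).
pid=$$
count=$(ls "/proc/$pid/fd" | wc -l)
echo "PID $pid: $count open file descriptors"
```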

I will be installing Squid 4.1 shortly, but I need an explanation for what 
happened here. Please provide some pointers, or let me know if any other 
information is needed to figure this out.

Regards,
Sarfaraz


Re: [squid-users] Cache ran out of descriptors due to ICAP service/TCP SYNs ?

2018-07-17 Thread Ahmad, Sarfaraz
Thanks for the reply. I haven't completely understood the revert and have a few 
more related questions.

I see these messages:
Jul 17 19:21:14 proxy2.hyd.deshaw.com squid[5747]: suspending ICAP service for 
too many failures
Jul 17 19:21:14 proxy2.hyd.deshaw.com squid[5747]: optional ICAP service is 
suspended: icap://127.0.0.1:1344/reqmod [down,susp,fail11]
1) If the ICAP service is unresponsive, Squid would not exhaust its file 
descriptors trying to reach the service again and again, right (too many TCP 
SYNs trying to connect to the ICAP service)?



The Max-Connections returned by the ICAP service is 16. And given my ICAP 
settings,
icap_enable on
icap_service test_icap reqmod_precache icap://127.0.0.1:1344/reqmod bypass=on 
routing=off on-overload=wait
on-overload is set to "wait". The documentation says "* wait: wait (in a 
FIFO queue) for an ICAP connection slot". This means that a new TCP connection 
would not be attempted if max connections is reached, right?
2) Am I right in saying that if the ICAP service is underperforming or has 
failed, this won't lead to a sudden increase in open file descriptors with 
on-overload set to "wait"?


Also I have no way to explain the "connection reset by peer" messages.
Jul 13 11:23:18  squid[13123]: Error negotiating SSL connection on FD 
1292: (104) Connection reset by peer
Jul 13 11:23:18  squid[13123]: Error negotiating SSL connection on FD 
1631: (104) Connection reset by peer
Jul 13 11:35:17  squid[13123]: Error negotiating SSL connection on FD 
1331: (104) Connection reset by peer

I have a few proxies (running in separate virtual machines). All of them went 
unresponsive at around the same time, leading to an outage of the internet.
I am using WCCPv2 to redirect from firewall to these proxies.  I checked the 
logs there and WCCP communication was not intermittent.
The logs on the proxies are bombarded with " Error negotiating SSL connection 
on FD 1331: (104) Connection reset by peer " messages. 
Since the ICAP service is not SSL-protected, I think these messages mostly imply 
receiving TCP RSTs from remote servers (or could it be clients somehow?). 
Once I removed the WCCP redirection rules from the firewall, the internet was back up.
This hints that something in this proxy pipeline was amiss, not the 
internet link itself; I don't see any outages on that. 
I am pretty sure ACLs weren't changed and there was no forwarding loop.
What could possibly explain the connection reset by peer messages ? Even if the 
internet was down, that won't lead to TCP RSTs. 
I cannot tie these TCP RSTs and the incoming requests getting held up and 
ultimately leading to FD exhaustion.
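To tie (or rule out) the RSTs and the FD exhaustion together, it may help to track descriptor usage per Squid process over time and compare it against the limit Squid reports. A minimal Linux-only sketch, assuming /proc is available (helper name is mine):

```python
import os

def open_fd_count(pid):
    """Count open file descriptors of a process via /proc (Linux-only sketch)."""
    return len(os.listdir("/proc/%d/fd" % pid))

# e.g. sample this for each running squid PID every few seconds and
# correlate spikes with the "Connection reset by peer" bursts; the
# configured limit is also visible via `squidclient mgr:info`.
```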

You earlier said 
>> In normal operation it is not serious, but you are already into abnormal 
>> operation by the crashing. So not releasing sockets/FD fast enough makes the 
>> overall problem worse.
If squid-1 is crashing and getting respawned, it will have its own 16K FD limit, 
right? I wonder how the new squid-1 serves the older requests. Can you please 
elaborate on "So not releasing sockets/FD fast enough makes the overall 
problem worse."?

Please share your thoughts.

Regards,
Sarfaraz


-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Tuesday, July 17, 2018 6:22 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Cache ran out of descriptors due to ICAP service/TCP 
SYNs ?

On 17/07/18 19:17, Ahmad, Sarfaraz wrote:
> Can somebody please explain what could have happened here?
> 
>  
> 
> First squid(4.0.25) encountered a URL > 8K bytes. I think this caused 
> it to crash.
> 

Unless you patched the MAX_URL definition to be larger than default, that 
should not happen. So is a bug IMO.

If you did patch MAX_URL, then you have encountered one of the many hidden 
issues why we keep it low and 
<https://bugs.squid-cache.org/show_bug.cgi?id=4422> open. Any assistance 
finding out where that crash occurs is VERY welcome.


>  
> 
> Jul 13 11:04:13  squid[9102]: parse URL too large (9697 
> bytes)
> 
> Jul 13 11:04:13  squid[29254]: Squid Parent: squid-1 process
> 9102 exited due to signal 11 with status 0
> 
>  
> 
> squid-1 was respawned by the parent squid process.
> 
>  
> 
> Then I see ,
> 
> WARNING: ICAP Max-Connections limit exceeded for service 
> icap://127.0.0.1:1344/reqmod. Open connections now: 16, including 0 
> idle persistent connections.
> 
> The newly spawned squid-1  crashes yet again. As seen below,
> 
> Jul 13 11:16:14  squid[29254]: Squid Parent: squid-1 process
> 10951 exited due to signal 11 with status 0
> 
> Logs don’t explain why squid-1 crashed here. ICAP message above is 
> just a warning.

In normal operation it is not serious, but you are already into abnormal 
operation by the crashing.

Re: [squid-users] Cache ran out of descriptors due to ICAP service/TCP SYNs ?

2018-07-19 Thread Ahmad, Sarfaraz
Thanks for the explanation.
From your first email : 
>> "In normal operation it is not serious, but you are already into abnormal 
>> operation by the crashing. So not releasing sockets/FD fast enough makes the 
>> overall problem worse."
I see that all the relevant FDs are opened by squid-1 process.  If squid-1 
crashes, won't the OS clean up its file descriptors. Why would the parent Squid 
process be bothered with these FDs? 
I don't see how frequent crashing slows down releasing sockets/FDs.  Can you 
please explain how this works ? Also I am not using SMP workers.

Regards,
Sarfaraz

-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Wednesday, July 18, 2018 9:23 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Cache ran out of descriptors due to ICAP service/TCP 
SYNs ?

On 18/07/18 18:30, Ahmad, Sarfaraz wrote:
> Thanks for the reply. I haven't completely understood the revert and have a 
> few more related questions.
> 
> I see these messages,
> Jul 17 19:21:14 proxy2.hyd.deshaw.com squid[5747]: suspending ICAP 
> service for too many failures Jul 17 19:21:14 proxy2.hyd.deshaw.com 
> squid[5747]: optional ICAP service is suspended: icap://127.0.0.1:1344/reqmod 
> [down,susp,fail11]
> 1)   If the ICAP service is unresponsive, Squid would not exhaust its file 
> descriptors trying to reach the service again and again right (too many TCP 
> SYNs for trying to connect to the ICAP service )? 
> 

Correct. It would not exhaust resources on *that* action. Other actions 
possibly resulting from that state existing are another matter entirely.


> 
> 
> Max Connections returned by the ICAP service is 16. And given my ICAP 
> settings, icap_enable on icap_service test_icap reqmod_precache 
> icap://127.0.0.1:1344/reqmod bypass=on routing=off on-overload=wait
> On-overload is set to "wait". The documentation says " * wait:   wait (in a 
> FIFO queue) for an ICAP connection slot" . This means that a new TCP 
> connection would not be attempted if max connections is reached right ? 
> 2)   Am I right in saying that if the ICAP service is underperforming or has 
> failed, this won't lead a sudden increase in the open file descriptors with 
> on-overload set to "wait" ?
> 

No. The side effects of the ICAP service not being used determine the possible 
outcomes there.


> 
> Also I have no way to explain the "connection reset by peer" messages.

Neither, given the details provided.


> Jul 13 11:23:18  squid[13123]: Error negotiating SSL 
> connection on FD 1292: (104) Connection reset by peer Jul 13 11:23:18 
>  squid[13123]: Error negotiating SSL connection on FD 1631: 
> (104) Connection reset by peer Jul 13 11:35:17  
> squid[13123]: Error negotiating SSL connection on FD 1331: (104) 
> Connection reset by peer
> 
> I have a few proxies (running in separate virtual machines). All of them went 
> unresponsive at around the same time, leading to an outage of the internet.
> I am using WCCPv2 to redirect from firewall to these proxies.  I checked the 
> logs there and WCCP communication was not intermittent.
> The logs on the proxies are bombarded with " Error negotiating SSL connection 
> on FD 1331: (104) Connection reset by peer " messages.

A strong sign that forwarding loops are occurring, or that something cut a huge 
number of TCP connections at once.

Note that syslog recording is limited by the network traffic, so in situations of 
heavy network flooding its timestamps can be very inaccurate or out of order.


> Since the ICAP service in not SSL-protected I think these messages mostly 
> imply receiving TCP RSTs from remote servers. (or could it be clients somehow 
> ?).

Yes, another reason I am thinking along the lines of forwarding loops.

> Once I removed WCCP direction rules from the firewall, internet was
back up.
> This hints that something in this proxy pipeline was amiss and not with the 
> internet link itself. I don't see any outages on that.

Nod. Keep in mind though that "proxy pipeline" includes the WCCP rules in the 
router, NAT rules on the proxy machine, proxy config, connection to/from the 
ICAP server, and NAT rules on the proxy machine outgoing, and WCCP rules on the 
router a second time.

So a lot of parts, most outside of Squid - any one of which can screw up the 
entire pathway.


> I am pretty sure ACLs weren't changed and there was no forwarding loop.
> What could possibly explain the connection reset by peer messages ? Even if 
> the internet was down, that won't lead to TCP RSTs. 
> I cannot tie these TCP RSTs and the incoming requests getting held up and 
> ultimately leading to FD exhaustion.

Too many possibilities to list here, and we do not have sufficient information

[squid-users] Squid returns NONE_ABORTED/000 and high response time but the internet access itself looks okay

2018-08-07 Thread Ahmad, Sarfaraz
Hi,

I am using WCCPv2 to redirect traffic to Squid.
Intermittently I see these messages in access.log and the internet for clients 
goes away.

1533612202.312  79102  NONE_ABORTED/000 0 CONNECT 198.22.156.64:443 - 
HIER_NONE/- -
1533612202.312  82632  NONE_ABORTED/000 0 CONNECT 173.194.142.186:443 - 
HIER_NONE/- -
1533612202.312  16030  NONE_ABORTED/000 0 CONNECT 172.217.15.67:443 - 
HIER_NONE/- -
1533612202.312  78477  NONE_ABORTED/000 0 CONNECT 173.194.142.186:443 - 
HIER_NONE/- -

But I can access the internet from the host running Squid just fine, yet Squid 
reports those messages with high response times (the second column).
I gather from 
http://lists.squid-cache.org/pipermail/squid-users/2016-February/009295.html 
that HIER_NONE implies no remote server was contacted. (or could be contacted ?)
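Entries like these can be pulled out of access.log mechanically. A small sketch assuming the default native log format with all fields present (the client IP in the sample is a placeholder; the redacted entries above would need that field restored before parsing):

```python
def parse_access_line(line):
    """Parse the leading fields of Squid's native access.log format
    (assumes the default logformat, as in the entries above)."""
    fields = line.split()
    return {
        "time": float(fields[0]),        # seconds since epoch
        "elapsed_ms": int(fields[1]),    # response time in milliseconds
        "client": fields[2],
        "code_status": fields[3],        # e.g. NONE_ABORTED/000
        "bytes": int(fields[4]),
        "method": fields[5],
        "url": fields[6],
        "hierarchy": fields[8],          # e.g. HIER_NONE/-
    }

def slow_aborts(lines, threshold_ms=20000):
    """Yield entries that were aborted after an unusually long wait."""
    for line in lines:
        entry = parse_access_line(line)
        if entry["elapsed_ms"] > threshold_ms and "ABORTED" in entry["code_status"]:
            yield entry
```

Filtering on the second column this way makes the intermittent episodes easy to count and timestamp.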

Note: I replaced internal IP addresses with  tag. Please don't get confused.

We use an ICAP service. Could that play a role here ?
Any thoughts ?

Regards,
Sarfaraz
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid returns NONE_ABORTED/000 and high response time but the internet access itself looks okay

2018-08-07 Thread Ahmad, Sarfaraz
I cannot reproduce this; it is intermittent. In Chrome's dev tools, it 
appeared to take over 20 seconds to set up the TCP connection.
I am SSL bumping all TLS connections unless they match certain ACLs. So it is 
safe to assume that the vast majority of the traffic was bumped.

I don't see any TLS handshake failure messages in cache.log. I think the 
access.log messages I posted earlier are fake CONNECT requests created using 
TCP-level info (the response time logged there is directly proportional to 
what I see in Chrome's dev tools). I am guessing that Squid would send its TCP 
SYN-ACK only after it receives a SYN-ACK from the remote/origin server.
I don't think ICAP (reqmod) would come into the picture yet either (assuming 
that even the TCP connections have not been set up yet), so that is safe to rule 
out. Am I right here?

Also restarting squid service fixed this.  I had a python script running in the 
background that was able to GET a webpage using requests module(timeout set to 
30) but Squid apparently couldn't even set up a TCP connection.

- Sarfaraz



-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Tuesday, August 7, 2018 6:04 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid returns NONE_ABORTED/000 and high response 
time but the internet access itself looks okay

On 07/08/18 21:55, Ahmad, Sarfaraz wrote:
> Hi,
> 
>  
> 
> I am WCCPv2 for redirecting traffic to Squid.
> 

Squid version?

> Intermittently I see these messages in access.log and the internet for 
> clients goes away.
> 
>  
> 
> 1533612202.312  79102  NONE_ABORTED/000 0 CONNECT 
> 198.22.156.64:443
> - HIER_NONE/- -
> 
> 1533612202.312  82632  NONE_ABORTED/000 0 CONNECT
> 173.194.142.186:443 - HIER_NONE/- -
> 
> 1533612202.312  16030  NONE_ABORTED/000 0 CONNECT 
> 172.217.15.67:443
> - HIER_NONE/- -
> 
> 1533612202.312  78477  NONE_ABORTED/000 0 CONNECT
> 173.194.142.186:443 - HIER_NONE/- -
> 
>  
> 
> But I can access internet on the host running squid itself just fine 
> yet Squid reports those messages with high response times (the second column).
> 
...>  
> 
> We use an ICAP service. Could that play a role here ?

A lot of things *might* play a role there.

> 
> Any thoughts ?

Trace the traffic.

What did the client actually send to Squid?
  It's probably not a port-80 style CONNECT request.

What does Squid send back to the client?

Does Squid complete the TLS handshake?

What are your SSL-Bump settings?


Amos


Re: [squid-users] Squid returns NONE_ABORTED/000 and high response time but the internet access itself looks okay

2018-08-07 Thread Ahmad, Sarfaraz
>> Your guess is wrong. The TCP level setup is only between Squid and the 
>> client. It has to have completed before the TLS stuff can begin.
So when does Squid start setting up the TCP connection with the origin server? 
After setting up a TCP connection with the client and identifying it to be TLS? 

What would this log message likely mean, then? I was reading it as: 78477 ms 
was the time it took Squid to connect to 173.194.142.186 on port 443, with 
Squid and the client (not the origin server) having already established a TCP 
connection beforehand (while Squid was trying to connect to the remote server on 
port 443).
1533612202.312  78477  NONE_ABORTED/000 0 CONNECT 173.194.142.186:443 - 
HIER_NONE/- -

That would imply two things.
1) It took a lot of time for the client to set up a TCP connection with Squid, 
going by Chrome's dev tools. 
2) Squid took a while to establish a connection with the origin server. 

Moreover, my ICAP settings look like this,
icap_service localicap reqmod_precache icap://127.0.0.1:1345/reqmod bypass=on 
routing=off on-overload=wait

ICAP would come into the picture only after I see a GET request in the 
access.log, right? 

Regards,
Sarfaraz

-Original Message-
From: Amos Jeffries  
Sent: Tuesday, August 7, 2018 9:04 PM
To: Ahmad, Sarfaraz ; 
squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid returns NONE_ABORTED/000 and high response 
time but the internet access itself looks okay

On 08/08/18 02:14, Ahmad, Sarfaraz wrote:
> I cannot reproduce this. This is intermittent.  In Chrome's dev tools, 
> it appeared to take over 20 secs to setup the TCP connection.
> I am SSL bumping all TLS connections unless they match certain ACLs.
> So it is safe to assume that the vast majority of the traffic was 
> bumped.
> 
> I don't see any TLS handshake failure messages in cache.log. I think 
> the access.log messages I posted earlier are fake CONNECT requests 
> created using TCP-level info (the response time logged there is 
> directly proportionate to what I see in Chrome's dev tools). Guessing 
> that Squid would send TCP SYN-ACK only after it receives SYN-ACK from 
> remote/origin server.

Your guess is wrong. The TCP level setup is only between Squid and the client. 
It has to have completed before the TLS stuff can begin.

The first fake-CONNECT is done after TCP connection setup to see whether the 
client is allowed to perform TLS inside it - and how Squid handles that TLS.


> I don’t think ICAP(reqmod) would come into the picture yet either 
> (assuming that even the TCP connections have not been set up yet) so 
> that is safe to rule out. Am I right here ?

You are right about that in relation to TCP.

But TCP is already over and done with by the time the fake-CONNECT gets 
generated. So wrong about ICAP's lack of involvement - it may (or not) be.

NP: The only thing fake about the early CONNECT's is that the client did not 
actually generate it. They are handled in Squid same as a regular CONNECT 
message would be.

Amos


Re: [squid-users] Squid returns NONE_ABORTED/000 and high response time but the internet access itself looks okay

2018-08-08 Thread Ahmad, Sarfaraz
>> As late as it can be done. How late that is depends on your SSL-Bump setup, 
>> any ICAP or other changes being made to the fake-CONNECT request, maybe even 
>> your cache contents.
All I am doing is peeking at Step1 (Client SNI) and then bumping the vast 
majority of connections, splicing very few. Caching is completely disabled.

>> The client "" opened a TCP connection to 173.194.142.186:443 - which was 
>> intercepted and delivered to this Squid. The client disconnected after 78 
>> seconds.
This doesn't explain why I saw 20+ seconds for TCP in Chrome's dev tools. From 
the dev tools, it looks like setting up TLS did not take long but setting up 
TCP surely did.

 >> We know that the server IP:port should be 173.194.142.186:443 because those 
 >> are the details on the CONNECT message URI. Squid has not gone far enough 
 >> into the processing of that message to identify that detail.
What would usually be the next step here? Could DNS be involved ?


-Original Message-
From: Amos Jeffries  
Sent: Tuesday, August 7, 2018 11:14 PM
To: Ahmad, Sarfaraz ; 
squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid returns NONE_ABORTED/000 and high response 
time but the internet access itself looks okay

On 08/08/18 05:15, Ahmad, Sarfaraz wrote:
>>> Your guess is wrong. The TCP level setup is only between Squid and the 
>>> client. It has to have completed before the TLS stuff can begin.
> So when does Squid start setting up the TCP connection with the origin server 
> ? After setting up a TCP connection with client and identifying it to be TLS ?

As late as it can be done. How late that is depends on your SSL-Bump setup, any 
ICAP or other changes being made to the fake-CONNECT request, maybe even your 
cache contents.

Anyhow, there is no sign of a server connection existing. So whatever is 
happening is one of the early parts of TLS handshake.


> 
> What would this log message likely mean then ? I was reading that as 78477ms 
> was the time it took for Squid to connect to 173.194.142.186 on port 443 and 
> Squid and client(not the origin server) had already established a TCP 
> connection beforehand (while it(squid) tries connecting to the remote server 
> on port 443).
> 1533612202.312  78477  NONE_ABORTED/000 0 CONNECT 
> 173.194.142.186:443 - HIER_NONE/- -
> 

The client "" opened a TCP connection to 173.194.142.186:443 - which was 
intercepted and delivered to this Squid. The client disconnected after 78 
seconds.
 That is all that can be known about what did happen from this log entry.

The list of things which did not (or could not have) happen is longer:

No server connection was made.
No bytes delivered to the client.
 and a lot of other things cannot have happened.


> That would imply two things.
> 1) It took a lot of time for clients to set up a TCP connection with 
> Squid given Chrome's dev tools

The client->Squid TCP setup seems to be completed. Either Squid is waiting for 
TLS handshake from the client to arrive, or something is hung in the processing 
of the clientHello stages.

NP: the 0-bytes number in the log is how much Squid delivered to the client. 
There may be some bytes the client delivered to Squid which are not accounted.



> 2) Second, Squid took a while to establish a connection with origin server.

I don't think Squid got that far. There is no server IP following the 
"HIER_NONE/". The "-" part specifically says there is no known server IP.

We know that the server IP:port should be 173.194.142.186:443 because those are 
the details on the CONNECT message URI. Squid has not gone far enough into the 
processing of that message to identify that detail.


> 
> Moreover, my ICAP settings look like this, icap_service localicap 
> reqmod_precache icap://127.0.0.1:1345/reqmod bypass=on routing=off 
> on-overload=wait
> 
> ICAP would come into the picture only after I see a GET request in the 
> access.log, right?

ICAP should be passed any HTTP request. Since Squid is handling the CONNECT 
messages, the ICAP service should be invoked for them as well. I don't recall if 
it actually is or not.

Also, note that at no point have I confirmed that it is ICAP being the problem 
(nor ruled it out). You are the one focusing on that possibility. There are 
other possibilities, such as SSL-Bump issues, http_access delays and the like. 
Even client_delay_pool's may be the problem.


Amos


[squid-users] Squid fails to bump where there are too many DNS names in SAN field

2018-09-03 Thread Ahmad, Sarfaraz
Hi,

I am using Squid in an interception role with WCCP.
I am peeking at Step1 to read the SNI and determining whether to splice or bump.

That interception/MITM appears to fail where the remote certificates from origin 
servers have way too many dnsNames in the SAN field.
I have noticed this behavior with at least these 2 websites. In both cases, 
my setup would be bumping the connections. (Obviously, otherwise we wouldn't be 
having this problem, since splicing works fine.)

https://www.pcmag.com/
https://www.extremetech.com/


The RFC doesn't set an upper bound on the number of dnsnames you can set in the 
SAN field.
If I splice these domains/URLs, browsers don't complain either. So this seems 
local to Squid.
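As an aside, the SAN sizes of the failing and working sites can be compared mechanically from the openssl text dump. A small helper; the regex assumes OpenSSL's usual comma-separated "DNS:name" layout in the subjectAltName section:

```python
import re

def count_san_dns_names(x509_text):
    """Count DNS: entries in `openssl x509 -noout -text` output
    (standard OpenSSL text layout assumed)."""
    return len(re.findall(r"\bDNS:[^,\s]+", x509_text))

# Typical way to obtain the text on a shell (binaries assumed present):
#   openssl s_client -connect www.pcmag.com:443 -servername www.pcmag.com \
#       </dev/null 2>/dev/null | openssl x509 -noout -text
sample = ("X509v3 Subject Alternative Name:\n"
          "    DNS:www.example.com, DNS:example.com, DNS:cdn.example.com")
```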

Points to note:

1)  Even though openssl s_client can connect/negotiate just fine, Squid 
doesn't.

2)  This is the behavior that I gather from a packet capture.

a.   My client (say a workstation XYZ) tried to connect to 
103.243.13.183:443 (That is https://www.extremetech.com)

b.   WCCP ships packet to the proxy over GRE tunnel and a TCP connection 
with the proxy acting as the origin server is established.

c.   XYZ sends ClientHello to the proxy.

d.   Squid starts conversing with the origin server and sends a ClientHello.

e.   The origin server replies with ServerHello, ServerKeyExchange, and 
Certificate packets; Squid just waits endlessly.

f.   The client, XYZ, ends up sending a FIN packet after its ClientHello, 
since Squid never replies with a ServerHello.

Will I have to file a bug?

Regards,
Sarfaraz




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid fails to bump where there are too many DNS names in SAN field

2018-09-04 Thread Ahmad, Sarfaraz
llocating 0x110b508
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0x17c3f18
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0x17c3f18
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0x17c3f18
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0x17c3f18
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(351) cbdataInternalLock: 
0x110b508=1
2018/09/04 12:45:58.125 kid1| 83,5| PeerConnector.cc(559) callBack: TLS setup 
ended for local=10.240.180.31:43674 remote=103.243.13.183:443 FD 12 flags=1
2018/09/04 12:45:58.125 kid1| 5,5| comm.cc(1030) comm_remove_close_handler: 
comm_remove_close_handler: FD 12, AsyncCall=0x1635fc0*2
2018/09/04 12:45:58.125 kid1| 9,5| AsyncCall.cc(56) cancel: will not call 
Security::PeerConnector::commCloseHandler [call2844544] because 
comm_remove_close_handler
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0x1f6b778
2018/09/04 12:45:58.125 kid1| 17,4| AsyncCall.cc(93) ScheduleCall: 
PeerConnector.cc(572) will call FwdState::ConnectedToPeer(0x1f6b778, 
local=10.240.180.31:43674 remote=103.243.13.183:443 FD 12 flags=1, 
0x110b508/0x110b508) [call2844542]
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0xf67698
2018/09/04 12:45:58.125 kid1| 93,5| AsyncJob.cc(139) callEnd: 
Security::PeerConnector::negotiate() ends job [ FD 12 job194686]
2018/09/04 12:45:58.125 kid1| 83,5| PeerConnector.cc(48) ~PeerConnector: 
Security::PeerConnector destructed, this=0xf67698
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(383) cbdataInternalUnlock: 
0xf67698=2
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(383) cbdataInternalUnlock: 
0xf67698=1
2018/09/04 12:45:58.125 kid1| 93,5| AsyncJob.cc(40) ~AsyncJob: AsyncJob 
destructed, this=0xf67750 type=Ssl::PeekingPeerConnector [job194686]

Again, as this is with an explicit CONNECT request, I do get ERR_CANNOT_FORWARD, 
and that error page uses a certificate signed for www.extremetech.com by my 
internal CA without anything in the SAN field, so I am guessing ssl_crtd isn't 
crashing here, unlike in the previous bug report.
Anything from these log lines?

Regards,
Sarfaraz


-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Tuesday, September 4, 2018 10:10 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid fails to bump where there are too many DNS 
names in SAN field

On 4/09/18 10:39 AM, Alex Rousskov wrote:
> On 09/03/2018 01:34 AM, Ahmad, Sarfaraz wrote:
> 
>> interception/MITM appears to fail where remote certificates from 
>> origin servers have way too many dnsnames in the SAN field.
>>
>> I have noticed this behavior with at least these 2 websites. In both 
>> the cases, my setup would be bumping the connections.
>>
>> https://www.pcmag.com/
>> https://www.extremetech.com/
> 
>> I will have to file a bug ?
> 

Does it look like a reoccurance of this bug?
 <https://bugs.squid-cache.org/show_bug.cgi?id=3665>

We did not have a concrete confirmation that the exact issue was permanently 
gone, it may have just been shifted to larger more obscure SAN field values.


Amos


Re: [squid-users] Squid fails to bump where there are too many DNS names in SAN field

2018-09-04 Thread Ahmad, Sarfaraz
Forgot to mention, this is with Squid-4.0.24.

-Original Message-
From: Ahmad, Sarfaraz 
Sent: Tuesday, September 4, 2018 1:04 PM
To: 'Amos Jeffries' ; squid-users@lists.squid-cache.org
Cc: 'rouss...@measurement-factory.com' 
Subject: RE: [squid-users] Squid fails to bump where there are too many DNS 
names in SAN field

With debug_options ALL,9 and retrieving just this page, I found the following 
relevant log lines (this is with an explicit CONNECT request):

2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(30) SBuf: SBuf6005084 created
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: 
TLSPlaintext.type=22 occupying 1 bytes @91 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: 
TLSPlaintext.version.major=3 occupying 1 bytes @92 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: 
TLSPlaintext.version.minor=3 occupying 1 bytes @93 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: 
TLSPlaintext.fragment.length=16384 occupying 2 bytes @94 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(38) SBuf: SBuf6005085 created from 
id SBuf6005054
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(74) got: 
TLSPlaintext.fragment.octets= <16384 OCTET Bytes fit here> 
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(70) ~SBuf: SBuf6005085 destructed
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(57) got: TLSPlaintext 
occupying 16389 bytes @91 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,7| SBuf.cc(160) rawSpace: reserving 16384 for 
SBuf6005052
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(886) cow: SBuf6005052 new size:16470
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(857) reAlloc: SBuf6005052 new size: 
16470
2018/09/04 12:45:46.112 kid1| 24,9| MemBlob.cc(56) MemBlob: constructed, 
this=0x1dd2860 id=blob1555829 reserveSize=16470
2018/09/04 12:45:46.112 kid1| 24,8| MemBlob.cc(101) memAlloc: blob1555829 
memAlloc: requested=16470, received=16470
2018/09/04 12:45:46.112 kid1| 24,7| SBuf.cc(865) reAlloc: SBuf6005052 new store 
capacity: 16470
2018/09/04 12:45:46.112 kid1| 24,7| SBuf.cc(85) assign: assigning SBuf6005056 
from SBuf6005052
2018/09/04 12:45:46.112 kid1| 24,9| MemBlob.cc(82) ~MemBlob: destructed, 
this=0x1dd27a0 id=blob1555826 capacity=65535 size=8208
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(30) SBuf: SBuf6005086 created
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: 
Handshake.msg_type=11 occupying 1 bytes @86 in 0xfa4d70;
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: 
Handshake.msg_body.length=16900 occupying 3 bytes @87 in 0xfa4d70;
2018/09/04 12:45:46.112 kid1| 24,5| BinaryTokenizer.cc(47) want: 520 more bytes 
for Handshake.msg_body.octets occupying 16900 bytes @90 in 0xfa4d70;
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(70) ~SBuf: SBuf6005086 destructed
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(70) ~SBuf: SBuf6005084 destructed
2018/09/04 12:45:46.112 kid1| 83,5| Handshake.cc(532) parseHello: need more data
2018/09/04 12:45:46.112 kid1| 83,7| bio.cc(168) stateChanged: FD 15 now: 0x1002 
23RSHA (SSLv2/v3 read server hello A)
2018/09/04 12:45:46.112 kid1| 83,5| PeerConnector.cc(451) noteWantRead: 
local=10.240.180.31:43716 remote=103.243.13.183:443 FD 15 flags=1
2018/09/04 12:45:46.112 kid1| 5,3| comm.cc(559) commSetConnTimeout: 
local=10.240.180.31:43716 remote=103.243.13.183:443 FD 15 flags=1 timeout 60
2018/09/04 12:45:46.112 kid1| 5,5| ModEpoll.cc(117) SetSelect: FD 15, type=1, 
handler=1, client_data=0x2818f58, timeout=0
2018/09/04 12:45:46.112 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0x2818f58
2018/09/04 12:45:46.112 kid1| 83,7| AsyncJob.cc(154) callEnd: 
Ssl::PeekingPeerConnector status out: [ FD 15 job194701]
2018/09/04 12:45:46.112 kid1| 83,7| AsyncCallQueue.cc(57) fireNext: leaving 
Security::PeerConnector::negotiate()
Later on after about 10 secs

2018/09/04 12:45:58.124 kid1| 83,7| AsyncJob.cc(123) callStart: 
Ssl::PeekingPeerConnector status in: [ FD 12 job194686]
2018/09/04 12:45:58.124 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0xf67698
2018/09/04 12:45:58.124 kid1| 83,5| PeerConnector.cc(187) negotiate: 
SSL_connect session=0x122c430
2018/09/04 12:45:58.124 kid1| 24,7| SBuf.cc(160) rawSpace: reserving 65535 for 
SBuf6002798
2018/09/04 12:45:58.124 kid1| 24,8| SBuf.cc(886) cow: SBuf6002798 new size:82887
2018/09/04 12:45:58.124 kid1| 24,8| SBuf.cc(857) reAlloc: SBuf6002798 new size: 
82887
2018/09/04 12:45:58.124 kid1| 24,9| MemBlob.cc(56) MemBlob: constructed, 
this=0x1dd27a0 id=blob1555830 reserveSize=82887
2018/09/04 12:45:58.124 kid1| 24,8| MemBlob.cc(101) memAlloc: blob1555830 
memAlloc: requested=82887, received=82887
2018/09/04 12:45:58.124 kid1| 24,7| SBuf.cc(865) reAlloc: SBuf6002798 new store 
capacity: 82887
2018/09/04 12:45:58.124 kid1| 24,8| SBuf.cc(139) rawAppendStart: SBuf6002798 
start appending up to 65535 bytes
2018/09/04 12:45:58.124 kid1| 83,5| bio.cc(140) read: FD 12 read 0

Re: [squid-users] Squid fails to bump where there are too many DNS names in SAN field

2018-09-05 Thread Ahmad, Sarfaraz
Tested with Squid 4.2 and ended up with the same results. 
How do we proceed from here?


-Original Message-
From: Alex Rousskov  
Sent: Tuesday, September 4, 2018 9:14 PM
To: Ahmad, Sarfaraz ; 
squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid fails to bump where there are too many DNS 
names in SAN field

On 09/04/2018 02:00 AM, Ahmad, Sarfaraz wrote:

> 2018/09/04 12:45:46.112 kid1| 24,5| BinaryTokenizer.cc(47) want: 520 more 
> bytes for Handshake.msg_body.octets occupying 16900 bytes @90 in 0xfa4d70;
> 2018/09/04 12:45:46.112 kid1| 83,5| PeerConnector.cc(451) noteWantRead: 
> local=10.240.180.31:43716 remote=103.243.13.183:443 FD 15 flags=1


Translation: Squid did not read enough data from the server to finish
parsing TLS server handshake. Squid needs to read at least 520 more
bytes from FD 15.


> Later on after about 10 secs

> 2018/09/04 12:45:58.124 kid1| 83,5| bio.cc(140) read: FD 12 read 0 <= 65535

And end-of-file on the wrong/different connection.
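The "want N more bytes" state arises because a TLS handshake message can be larger than the 16 KiB record that carries it, so the parser must accumulate bytes across reads. A toy Python illustration (not Squid's BinaryTokenizer; field names are mine) showing how the 520-byte deficit in the log falls out of the lengths shown there:

```python
import struct

def parse_tls_record_header(buf):
    """Parse the 5-byte TLS record header: content type, version, fragment length."""
    if len(buf) < 5:
        return None  # need more bytes -- the parser's "want" state
    ctype, vmaj, vmin, length = struct.unpack("!BBBH", buf[:5])
    return {"type": ctype, "version": (vmaj, vmin), "length": length}

def handshake_bytes_missing(fragment):
    """How many more bytes of one handshake message are still needed,
    given the handshake bytes reassembled from records so far."""
    if len(fragment) < 4:
        return 4 - len(fragment)
    body_len = int.from_bytes(fragment[1:4], "big")  # 24-bit body length
    return max(0, 4 + body_len - len(fragment))

# Mirroring the log above: a Certificate message (msg_type 11) with a
# 16900-byte body, of which only one 16384-byte record fragment arrived.
frag = bytes([11]) + (16900).to_bytes(3, "big") + b"\x00" * 16380
```

With those numbers, 4 (header) + 16900 (body) - 16384 (received) leaves exactly 520 bytes still owed, matching the "want: 520 more bytes" line.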


My recommendations remain the same, but please follow Amos advice and
upgrade to the latest v4 first.

Please note that I do _not_ recommend analyzing ALL,9 logs. On average,
such analysis by non-developers wastes more time than it saves.

Alex.


[squid-users] Why does Squid4 do socket(AF_NETLINK, SOCK_RAW, NETLINK_NETFILTER) = -1 EACCES (Permission denied) ?

2018-11-30 Thread Ahmad, Sarfaraz
I think almost every time Squid opens a TCP connection, it also tries to open a 
raw socket of type AF_NETLINK; the syscalls are pasted below.
All I can make of this is that Squid is trying to engage with the iptables 
subsystem somehow.
I have SELinux enforcing and would like to know what Squid is trying to do 
before figuring out how to allow it.

socket(AF_INET, SOCK_STREAM, IPPROTO_IP) = 90
socket(AF_NETLINK, SOCK_RAW, NETLINK_NETFILTER) = -1 EACCES (Permission denied)

I am using WCCP and TLS interception with the Squid 4.0.24 release. Everything 
works as expected, except that auditd is getting spammed with denial messages:
type=AVC msg=audit(1543478005.027:49455970): avc:  denied  { getattr } for  
pid=13766 comm="squid" scontext=system_u:system_r:squid_t:s0 
tcontext=system_u:system_r:squid_t:s0 tclass=netlink_socket

Any thoughts ?
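If the netlink access turns out to be benign in your environment, one common way to silence the AVCs is a small local policy module. A sketch only; the module name here is made up, and the authoritative rules should come from your own audit log via audit2allow:

```
# local_squid_netlink.te -- illustrative sketch; generate the real rules
# from your own denials with:
#   ausearch -m avc -c squid | audit2allow -M local_squid_netlink
#   semodule -i local_squid_netlink.pp
module local_squid_netlink 1.0;

require {
    type squid_t;
    class netlink_socket { create getattr };
}

# Let squid_t create and stat its own generic netlink sockets
allow squid_t self:netlink_socket { create getattr };
```

Building the module from ausearch/audit2allow output, rather than hand-writing it, ensures it matches only the denials actually recorded on your box.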



[squid-users] High response times with Squid

2019-02-07 Thread Ahmad, Sarfaraz
Hi,

I am using Squid 4.5 with WCCP, intercepting SSL by peeking at step1 and then 
deciding to either splice or bump based on the SNI.
I am noticing weird behavior for some of my TCP connections: Squid is taking 
over 20s to decide what to do with the ClientHello sent by the browser. Only 
after 20s does it send out a ClientHello to the origin server and, at the same 
time, reply to the client with a ServerHello.
This behavior is hard to reproduce and only some clients are affected.
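For context, a peek-then-decide setup of the kind described here usually looks something like the following minimal squid.conf sketch (illustrative only; the ACL names, certificate path, and port numbers are assumptions, not the actual configuration):

```
# Intercept ports fed by WCCP redirection
http_port  3129 intercept
https_port 3130 intercept ssl-bump \
    cert=/etc/squid/certs/myCA.pem \
    generate-host-certificates=on

acl step1 at_step SslBump1
acl splice_sni ssl::server_name "/etc/squid/acls/splice_domains"

ssl_bump peek step1          # read the SNI from the ClientHello
ssl_bump splice splice_sni   # tunnel listed domains untouched
ssl_bump bump all            # MITM everything else
```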

I will try to summarize what I see in cache.log with debug_options set to ALL,6.


1)  Squid's INTERCEPTION thread/program receives a TCP SYN from workstation.
2019/02/06 17:23:19.070 kid1| 89,5| Intercept.cc(405) Lookup: address BEGIN: 
me/client= :23129, destination/me= :58232


2)  Squid becomes the origin server and sets up the TCP connection.
2019/02/06 17:23:19.070 kid1| 5,5| AsyncCall.cc(93) ScheduleCall: 
TcpAcceptor.cc(339) will call 
httpsAccept(local=:443 
remote=:58232 FD 40 flags=33, MXID_1101703) [call34733258]
2019/02/06 17:23:19.070 kid1| 5,5| AsyncCall.cc(38) make: make call httpsAccept 
[call34733258]
   2019/02/06 17:23:19.070 kid1| 33,4| client_side.cc(2776) httpsAccept: 
local=:443 remote=:58232 FD 
40 flags=33 accepted, starting SSL negotiation.


3)  Squid checks the SSL ACLs for the destination IP.
2019/02/06 17:23:19.071 kid1| 28,5| Acl.cc(124) matches: checking (ssl_bump 
rules)
2019/02/06 17:23:19.071 kid1| 28,5| Checklist.cc(397) bannedAction: Action 
'ALLOWED/6' is not banned
2019/02/06 17:23:19.071 kid1| 28,5| Acl.cc(124) matches: checking (ssl_bump 
rule)
2019/02/06 17:23:19.071 kid1| 28,5| Acl.cc(124) matches: checking 
no_ssl_bump_src_ip
2019/02/06 17:23:19.071 kid1| 28,3| Ip.cc(538) match: aclIpMatchIp: 
':58232' NOT found
2019/02/06 17:23:19.071 kid1| 28,3| Acl.cc(151) matches: checked: 
no_ssl_bump_src_ip = 0


4)  Squid decides to allow connections to the remote IP i.e 
 and decides to peek at the SNI (will accept 
ClientHello), hence fakes a CONNECT request
2019/02/06 17:23:19.071 kid1| 28,3| Checklist.cc(163) checkCallback: 
ACLChecklist::checkCallback: 0x1fb19fd8 answer=ALLOWED
2019/02/06 17:23:19.071 kid1| 33,2| client_side.cc(2744) 
httpsSslBumpAccessCheckDone: sslBump action peekneeded for 
local=:443 remote=:58232 FD 
40 flags=33
2019/02/06 17:23:19.071 kid1| 33,2| client_side.cc(3395) fakeAConnectRequest: 
fake a CONNECT request to force connState to tunnel for ssl-bump


5)  The fake CONNECT request again runs through the ACLs. 20ms are spent on a 
DNS PTR lookup; a total of 30ms is spent evaluating ACLs.
2019/02/06 17:23:19.103 kid1| 85,2| client_side_request.cc(758) 
clientAccessCheckDone: The request CONNECT  is 
ALLOWED; last ACL checked: localnet
2019/02/06 17:23:19.103 kid1| 93,4| AccessCheck.cc(145) checkCandidates: NO 
candidates left
2019/02/06 17:23:19.103 kid1| 93,3| AccessCheck.cc(196) callBack: NULL
2019/02/06 17:23:19.103 kid1| 93,5| AsyncCall.cc(26) AsyncCall: The AsyncCall 
Adaptation::Initiator::noteAdaptationAclCheckDone constructed, this=0x1552b0b0 
[call34733267]


6)  The fake CONNECT is processed; Squid reads the TCP connection, gets the 
ClientHello, and reads the SNI.
2019/02/06 17:23:19.104 kid1| 33,5| AsyncCall.cc(38) make: make call 
Server::doClientRead [call34733270]
2019/02/06 17:23:19.104 kid1| 33,5| AsyncJob.cc(123) callStart: Http1::Server 
status in: [ job2971436]
2019/02/06 17:23:19.104 kid1| 33,5| Server.cc(104) doClientRead: 
local=:443 remote=:58232 FD 
40 flags=33
2019/02/06 17:23:19.104 kid1| 5,3| Read.cc(92) ReadNow: 
local=:443 remote=:58232 FD 
40 flags=33, size 4096, retval 203, errno 0
2019/02/06 17:23:19.104 kid1| 33,5| AsyncCall.cc(26) AsyncCall: The AsyncCall 
ConnStateData::requestTimeout constructed, this=0x3d42d40 [call34733271]
2019/02/06 17:23:19.104 kid1| 5,3| comm.cc(559) commSetConnTimeout: 
local=:443 remote=:58232 FD 
40 flags=33 timeout 300
2019/02/06 17:23:19.104 kid1| 83,5| Handshake.cc(404) parseExtensions: first 
unsupported extension: 19018
2019/02/06 17:23:19.104 kid1| 83,3| Handshake.cc(497) parseSniExtension: 
host_name=


7)  This is followed by another round of ACL processing now that we have 
the SNI.

2019/02/06 17:23:19.106 kid1| 33,3| Pipeline.cc(35) front: Pipeline 0xc2d7b60 
front 0x1ae2e730*2
2019/02/06 17:23:19.106 kid1| 33,3| Pipeline.cc(35) front: Pipeline 0xc2d7b60 
front 0x1ae2e730*3
2019/02/06 17:23:19.107 kid1| 83,5| Session.cc(103) NewSessionObject: SSL_new 
session=0x78136a0
2019/02/06 17:23:19.107 kid1| 83,5| bio.cc(616) squid_bio_ctrl: 0xcd0fe80 
104(6000, 0x7ffc32a6e6b4)
2019/02/06 17:23:19.107 kid1| 83,5| Session.cc(162) CreateSession: link FD 40 
to TLS session=0x78136a0
2019/02/06 17:23:19.107 kid1| 33,5| client_side.cc(2535) httpsCreate: will 
negotiate TLS on local=:443 
remote=:58232 FD 40 flags=33
2019/02/06 17:23:19.107 kid1| 5,5| ModEpoll.cc(117) SetSelect: FD 40, type=1, 
handler=0, client_data=0, timeout=0
2019/02/06 17:23:19.107 kid1| 83,5| client_side.c

Re: [squid-users] Problem rtmp traffic through Squid

2019-02-13 Thread Ahmad, Sarfaraz
Did you add them to the "Safe_ports" ACL? (assuming you have one)

Look here for some more inputs:
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-conf-blocking-live-video-stream-td4680866.html
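For reference, the stock squid.conf gates destination ports through the Safe_ports/SSL_ports ACLs, so an addition would look roughly like the sketch below. Note that plain RTMP ignores browser proxy settings entirely, so this only helps clients that tunnel it over CONNECT (RTMPT/RTMPS):

```
# Sketch only: permit RTMP's port through the default port ACLs.
acl Safe_ports port 1935        # rtmp
acl SSL_ports  port 1935        # needed if clients CONNECT-tunnel to 1935

# The stock rules then apply unchanged:
#   http_access deny !Safe_ports
#   http_access deny CONNECT !SSL_ports
```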



From: squid-users  On Behalf Of 
? ?? 
Sent: Wednesday, February 13, 2019 5:56 PM
To: squid-users@lists.squid-cache.org
Subject: [squid-users] Problem rtmp traffic through Squid

Hello! In our organization we use a Squid proxy server, and we have found a 
problem with viewing webinars that run on Adobe Flash. Our network engineers 
found that RTMP traffic on port 1935 bypasses the proxy server specified in 
the browser settings, so the site's media content does not work. The same 
problem is covered on the Adobe website: 
https://forums.adobe.com/thread/905051
Can you help with information on configuring Squid to work with Adobe Flash?


Re: [squid-users] High response times with Squid

2019-02-14 Thread Ahmad, Sarfaraz
Hi again,
I made some progress on this.
To reiterate: in the context of this problem, I am peeking at the SNI and then 
bumping all connections to the origin server (the origin server is seamless.com).

Here are the new findings:
1) The 20-second lag is noticed even when I splice the connection.
2) I am 99% sure it has to do with the following slow ACL.

acl deny_explicit_dstdomain dstdomain "/etc/squid/acls/deny_explicit_dstdomain"

I see PTR lookups failing when Squid tries to evaluate my ACLs. When I disable 
that ACL, the 20-second lag is gone, so I am pretty confident that these PTR 
lookups are causing the delay here.
I don't see a configuration directive that controls how many times Squid 
retries the lookup, though I do see one that sets the timeout (dns_timeout, 
which defaults to 30 seconds).
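One possible workaround, sketched below on the assumption that the same list can be matched against the SNI instead: on intercepted connections a dstdomain ACL may only have a destination IP to work with, which forces a reverse (PTR) lookup, whereas ssl::server_name matches the ClientHello SNI with no DNS at all. The ACL name and the reuse of the same file here are assumptions:

```
# Sketch: match the TLS SNI instead of relying on reverse DNS.
acl deny_explicit_sni ssl::server_name "/etc/squid/acls/deny_explicit_dstdomain"
ssl_bump terminate deny_explicit_sni

# And/or cap how long any single DNS lookup may stall a transaction
# (default is 30 seconds):
dns_timeout 5 seconds
```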

Could you guys give me some pointers on what could be happening here ?

Regards,
Ahmad


-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Saturday, February 9, 2019 10:20 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] High response times with Squid

On 8/02/19 7:30 pm, Ahmad, Sarfaraz wrote:
> Hi,
> 
>  
> 
> I am using Squid 4.5 with WCCP. Intercepting SSL by peeking at step1 
> and then deciding to either splice or bump upon the SNI.
> 
> I am noticing a weird behavior for some of my TCP connections.  Squid 
> is taking over 20s to decide what do with the ClientHello sent by the 
> browser. It is only after 20s that it decides to send out a 
> ClientHello to the origin server and at the same time reply to the 
> client with a ServerHello.
> 
> This behavior is hard to reproduce and only some clients are affected.
> 
>  
> 
> I will try to summarize what I see in cache.log with ALL, 6 debug options.
> 
>  
> 
> 1)  Squid's INTERCEPTION thread/program receives a TCP SYN from 
> workstation.
> 
> 2019/02/06 17:23:19.070 kid1| 89,5| Intercept.cc(405) Lookup: address
> BEGIN: me/client= *:*23129, destination/me=
> *:*58232
> 

No. This is looking up the original TCP dst-IP:port in the kernel NAT tables.


>  
> 
> 2)  Squid becomes the origin server and sets up the TCP connection.
> 

No. The local= log values are a simple statement of the TCP packet values 
received from the NAT system at (1). Squid is an MITM in this setup, so the 
client *thinks* it is talking to the origin.

Being an MITM Squid is designed to operate as transparently as possible, but at 
no time has the abilities of the origin server.


> 2019/02/06 17:23:19.070 kid1| 5,5| AsyncCall.cc(93) ScheduleCall:
> TcpAcceptor.cc(339) will call
> httpsAccept(local*=*:443
> remote=*:*58232 FD 40 flags=33, MXID_1101703) 
> [call34733258]
> 

...
> 
> 8)  No ServerHello has been sent back to the client yet, Squid 
> starts a TCP connection with the origin server
> 
> 2019/02/06 17:23:19.110 kid1| 5,4| AsyncJob.cc(123) callStart:
> Comm::ConnOpener status in: [ job2971439]
> 
> 2019/02/06 17:23:19.110 kid1| 5,5| ConnOpener.cc(350) doConnect:
> local=0.0.0.0 remote*=:*443 flags=1:
> Comm::OK - connected
> 
> 2019/02/06 17:23:19.110 kid1| 5,4| ConnOpener.cc(155) cleanFd:
> local=0.0.0.0 remote=<*ORIGIN_SERVER_ON_THE_INTERNET*>:443 flags=1 
> closing temp FD 50
> 
>  
> 
> 9)  Squid starts a TLS session with the remote/origin server, 
> sends the ClientHello. A total of 0.4 seconds in Squid sending 
> clienthello to origin server. This is probably when Squid decides to 
> send back the ServerHello to the browser.

Don't guess. Check.

Either you have step2 / client-first bumping - in which case the Squid 
serverHello would have been sent to the client at (7).

Or, you have step3 / server-first bumping - in which case Squid cannot send a 
serverHello to the client until it has received the origin's serverHello. Which 
still has not yet been received despite your trace ending here.

...
> 
> 2019/02/06 17:23:19.111 kid1| 83,5| PeerConnector.cc(123) initialize:
> local=**:44498 remote=**:443 
> FD
> 50 flags=1, session=0x14899390
> 
>  
> 
> So somewhere between Step 8 and Step 9, Squid is taking over 20s.
> 

There is only 1 millisecond between those steps.

The client connection was received at 17:23:19.070, your (9) finished at
17:23:19.111 -> so there is your 0.41 seconds. If there is any 20s gap for this 
transaction it is later in the log part you have not shown.


> 
> What could possibly be keeping it busy ?
> 

Other transactions? Nothing?

What is going on at (9) is *preparing* to send a TLS clientHello. At the point 
your log stops it still has not actually been written to the network.

There is actually still a good half of the SSl-Bump process to happen:
 - assemble the Squid clientHello bytes,
 - send that to origin
 - receive origin s

Re: [squid-users] High response times with Squid

2019-02-14 Thread Ahmad, Sarfaraz
Thanks for all the pointers :) I figured it out: seamless.com's PTR lookups are 
slow and end in SERVFAIL, and that was what was causing the delay here. I 
purged that ACL and it's all good.
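For anyone hitting this later, the failure mode is easy to reproduce outside Squid. A tiny illustrative sketch (the IP below is a documentation address assumed to have no PTR record; a SERVFAIL-ing zone would make the same call block for the resolver's full retry budget):

```python
import socket
import time

def timed_ptr_lookup(ip: str):
    """Time one reverse (PTR) lookup -- the same lookup a dstdomain ACL
    can trigger for an IP-only request. Returns (hostname_or_None, secs)."""
    start = time.monotonic()
    try:
        host = socket.gethostbyaddr(ip)[0]
    except OSError:                   # NXDOMAIN, SERVFAIL, timeout, ...
        host = None
    return host, time.monotonic() - start

# TEST-NET-3 address: documentation range, no public PTR record
host, elapsed = timed_ptr_lookup("203.0.113.5")
print(f"host={host!r} elapsed={elapsed:.2f}s")
```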


-Original Message-
From: Amos Jeffries  
Sent: Friday, February 15, 2019 9:24 AM
To: Ahmad, Sarfaraz ; 
squid-users@lists.squid-cache.org
Subject: Re: [squid-users] High response times with Squid

On 14/02/19 11:38 pm, Ahmad, Sarfaraz wrote:
> Hi again,
> I made some progress on this.
> To reiterate, I am peeking at the SNI and then bump all connections to 
> the origin server in context of this problem. ( the origin server is 
> seamless.com )
> 
> Here are the new findings ,
> 1) The 20sec lag is noticed even when I splice the connection.
> 2) It 99% has to do with the following slow ACL acl.
> 
> acl deny_explicit_dstdomain dstdomain 
> "/etc/squid/acls/deny_explicit_dstdomain"
> 
> I see PTR lookups failing when Squid tries to validate my ACLs. When I 
> disable that ACL, the 20second lag is gone. So I am pretty confident that 
> subsequent PTR lookups are causing the delay here.
> I don't see a configuration directive with which I can configure how many 
> times Squid retries the lookup.
> I see one that sets the timeout though (dns_timeout  defaults 30 seconds).
> 
> Could you guys give me some pointers on what could be happening here ?

Only repeat back to you what you have described to us ... DNS PTR lookups are 
slow.

Your squid.conf is needed to know where those lookups are happening and see if 
any can be avoided.

Amos