Re: [squid-users] a decent way to speed up Facebook?

2018-09-04 Thread Amos Jeffries
On 5/09/18 4:44 AM, turgut kalfaoğlu wrote:
> Hello there. I have a transparent squid at my home to speed up the
> browsing by caching stuff.  And it works well for HTTP.
> 
> For HTTPS, I was only able to get it to "peek" and I'd like to be able
> to bump the connections.
> 
> I installed the server certificate on the client, but still, the browser
> (firefox) keeps complaining:
> 
> Your connection is not secure
> The owner of www.facebook.com has configured their website improperly.
> To protect your information from being stolen, Firefox has not connected
> to this website.
> This site uses HTTP Strict Transport Security (HSTS) to specify that
> Firefox may only connect to it securely. As a result, it is not possible
> to add an exception for this certificate.

Squid removes HSTS from any network traffic it handles (except spliced
traffic). So clearing the browser's stored HSTS state, and ensuring that
the other non-HTTP protocols browsers like to use these days (eg QUIC,
SPDY, WebSockets, HTTP/2) are not in play, should resolve this issue.

If you do not (or cannot) clear the browser state, the HSTS entry should
only last until the TTL last mentioned in traffic expires - but that can
be a very long timeout.
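One common way to make sure QUIC in particular is not bypassing the proxy
is to block UDP port 443 at the gateway firewall, so browsers fall back to
TCP where interception can happen. A sketch for an iptables-based gateway
(the FORWARD chain is an assumption about where your LAN traffic is routed):

```
# Reject QUIC (UDP/443) so browsers retry over TCP, which Squid intercepts
iptables -A FORWARD -p udp --dport 443 -j REJECT
```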


> 
> Here is what I have:
> #
> # serverIsBank is a list of domains that are essentially banks. They
> # seem more picky.
> #
> ssl_bump splice serverIsBank
> ssl_bump peek all
> # ssl_bump bump all    # this does not work, it gives the error above..

Try:

 # splice as soon as detected
 ssl_bump splice serverIsBank

 # step 1 - peek to get TLS SNI
 acl step1 at_step SslBump1
 ssl_bump peek step1

 # step 2 - stare to get server cert details for bump
 ssl_bump stare all

 # step 3 - terminate if splice failed, bump everything else
 ssl_bump terminate serverIsBank
 ssl_bump bump all


> 
> https_port 3129 intercept ssl-bump \
>     generate-host-certificates=on dynamic_cert_mem_cache_size=4MB \
>     cert=/etc/squid/ssl_cert/tk2ca.pem
> key=/etc/squid/ssl_cert/tk2ca.pem \

When cert= and key= are in the same file you do not need to specify key=.
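For example, the https_port line quoted above could be reduced to
something like the following (same paths and options as in the original
config, just without the redundant key=):

```
https_port 3129 intercept ssl-bump \
    generate-host-certificates=on dynamic_cert_mem_cache_size=4MB \
    cert=/etc/squid/ssl_cert/tk2ca.pem \
    sslflags=NO_SESSION_REUSE
```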


>    sslflags=NO_SESSION_REUSE
> tls_outgoing_options cafile=/etc/pki/tls/certs/ca-bundle.crt

That ca-bundle.crt is the global trusted CA bundle, right?

If yes, you do not need to manually configure it. The system default CA
/ global Trusted CA are used by default on MITM outgoing connections.


> sslproxy_cert_adapt setCommonName ssl::certDomainMismatch
> sslproxy_cert_error allow all

Remove the above line. It prevents you from being told about important problems.

Instead investigate errors that come up, and either fix or ignore on an
individual basis. Some errors are simple and easily avoided, others
depend on your policy about whether the client should be allowed to do
the operation.
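For example, to tolerate only one deliberately accepted error class and
keep all others fatal (the ACL name here is illustrative):

```
# permit only self-signed certificate errors; everything else stays fatal
acl selfSignedErr ssl_error X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT
sslproxy_cert_error allow selfSignedErr
sslproxy_cert_error deny all
```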


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SSL reverse proxy cert error

2018-09-04 Thread Amos Jeffries
On 5/09/18 4:05 PM, Hariharan Sethuraman wrote:
> Hi All,
> 
> I have my https_port 443 in reverse-proxy mode. When a client sends a GET
> request, the rewriter correctly rewrites the URL, but that rewritten GET
> request fails with the error below.
> 2018/09/05 03:03:38| Error negotiating SSL on FD 15: error:14007086:SSL
> routines:CONNECT_CR_CERT:certificate verify failed (1/-1/0)
> 
> I don't know where to add the trusted certificates, because I don't know
> how to specify them in the /etc/ssl/certs directory.
> 
> I have two setups to support: 
> 1) I may have cache_peer parent proxy (next proxy to internet)

For reverse-proxy the peer should be (or be towards) the origin. Not
towards the public Internet.

Use the cache_peer tls-ca= option to tell Squid which specific CA that
peer/origin is supposed to be using.
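A sketch of such a peer line (hostname, port and CA path are placeholders,
not taken from the original post):

```
# reverse-proxy peer towards the origin, verified against its specific CA
cache_peer origin.example.com parent 443 0 no-query originserver \
    tls tls-ca=/etc/squid/origin-ca.pem name=origin
cache_peer_access origin allow all
```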


> 2) I dont need to give any parent proxy (because this host is connected
> to internet without next proxy)

For connections directly to the Internet (which a reverse-proxy cannot
make unless forced to), the global "Trusted CA" set is used by default;
there is nothing to be done in that regard.

You can choose to disable them with:

  tls_outgoing_options default-ca=off


If you need to make Squid trust a specific CA which is not one of the
global trusted set (eg private for your use, or self-signed) then use:

  tls_outgoing_options cafile=/path/to/ca.pem


You can also combine the above settings so only a few global CA which
you actually trust get loaded. The cafile= option can be repeated in
Squid-4 to load multiple CA details.
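For example, to trust only a private CA and none of the global set (the
path is a placeholder):

```
tls_outgoing_options default-ca=off cafile=/etc/squid/private-ca.pem
```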

Amos


[squid-users] SSL reverse proxy cert error

2018-09-04 Thread Hariharan Sethuraman
Hi All,

I have my https_port 443 in reverse-proxy mode. When a client sends a GET
request, the rewriter correctly rewrites the URL, but that rewritten GET
request fails with the error below.
2018/09/05 03:03:38| Error negotiating SSL on FD 15: error:14007086:SSL
routines:CONNECT_CR_CERT:certificate verify failed (1/-1/0)

I don't know where to add the trusted certificates, because I don't know how
to specify them in the /etc/ssl/certs directory.

I have two setups to support:
1) I may have cache_peer parent proxy (next proxy to internet)
2) I dont need to give any parent proxy (because this host is connected to
internet without next proxy)

Thanks,
Hari


[squid-users] a decent way to speed up Facebook?

2018-09-04 Thread turgut kalfaoğlu
Hello there. I have a transparent squid at my home to speed up the 
browsing by caching stuff.  And it works well for HTTP.


For HTTPS, I was only able to get it to "peek" and I'd like to be able to
bump the connections.


I installed the server certificate on the client, but still, the browser 
(firefox) keeps complaining:


Your connection is not secure
The owner of www.facebook.com has configured their website improperly. 
To protect your information from being stolen, Firefox has not connected 
to this website.
This site uses HTTP Strict Transport Security (HSTS) to specify that 
Firefox may only connect to it securely. As a result, it is not possible 
to add an exception for this certificate.


Here is what I have:
#
# serverIsBank is a list of domains that are essentially banks. They
# seem more picky.
#
ssl_bump splice serverIsBank
ssl_bump peek all
# ssl_bump bump all    # this does not work, it gives the error above..

https_port 3129 intercept ssl-bump \
    generate-host-certificates=on dynamic_cert_mem_cache_size=4MB \
    cert=/etc/squid/ssl_cert/tk2ca.pem \
    key=/etc/squid/ssl_cert/tk2ca.pem \
    sslflags=NO_SESSION_REUSE
tls_outgoing_options cafile=/etc/pki/tls/certs/ca-bundle.crt
sslproxy_cert_adapt setCommonName ssl::certDomainMismatch
sslproxy_cert_error allow all
sslcrtd_program /usr/lib64/squid/security_file_certgen -s /var/lib/ssl_db -M $

sslcrtd_children 50 startup=5 idle=5


Thanks, -turgut




Re: [squid-users] Squid Kerberos helper leaking memory - OpenBSD 6.3

2018-09-04 Thread Alex Rousskov
On 09/04/2018 09:22 AM, Silamael wrote:

> At the moment a helper will call exit(0) after 1 requests. 

> good to know that there aren't any general objections.


Here is one: Squid is currently not designed to gracefully handle a
helper-initiated exit/death. Helpers that decide to exit may kill
in-progress transactions, and/or may slow down or even kill Squid,
depending, in part, on your Squid version and/or configuration.

AFAICT, there are a few better options for going forward, including:

1. Fixing the helper memory leak (just stating the obvious for
completeness' sake).

2. Wrapping leaking/exiting helper process into a
non-leaking/non-exiting helper that is going to kill/restart the wrapped
helper after N requests (transparently to Squid).

3. Hacking Squid to kill/restart a helper process after N requests.

4. Enhancing Squid and helper protocol to handle helper-initiated exits.
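Option 2 above can be sketched roughly as follows. This is a minimal
illustration, not a production helper wrapper: it assumes a simple
line-based helper protocol (one reply line per request line) and uses
`cat` as a stand-in for the real helper command.

```python
import subprocess

# Assumptions for illustration: the real helper speaks a one-line-request /
# one-line-reply protocol; "cat" stands in for it here.
HELPER_CMD = ["cat"]
MAX_REQUESTS = 3  # restart the child after this many requests


def run_wrapper(lines, helper_cmd=HELPER_CMD, max_requests=MAX_REQUESTS):
    """Forward each request line to a child helper, restarting the child
    every max_requests lines so any leaked memory is reclaimed. Returns
    the list of reply lines."""
    replies = []
    child = None
    served = 0
    for line in lines:
        if child is None or served >= max_requests:
            if child is not None:          # retire the leaky child
                child.stdin.close()
                child.wait()
            child = subprocess.Popen(helper_cmd,
                                     stdin=subprocess.PIPE,
                                     stdout=subprocess.PIPE,
                                     text=True, bufsize=1)
            served = 0
        child.stdin.write(line + "\n")     # pass the request through
        child.stdin.flush()
        replies.append(child.stdout.readline().rstrip("\n"))
        served += 1
    if child is not None:
        child.stdin.close()
        child.wait()
    return replies
```

Squid would be configured to run the wrapper script in place of the helper
itself, so the periodic restarts stay invisible to Squid.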


HTH,

Alex.


Re: [squid-users] Squid fails to bump where there are too many DNS names in SAN field

2018-09-04 Thread Alex Rousskov
On 09/04/2018 02:00 AM, Ahmad, Sarfaraz wrote:

> 2018/09/04 12:45:46.112 kid1| 24,5| BinaryTokenizer.cc(47) want: 520 more 
> bytes for Handshake.msg_body.octets occupying 16900 bytes @90 in 0xfa4d70;
> 2018/09/04 12:45:46.112 kid1| 83,5| PeerConnector.cc(451) noteWantRead: 
> local=10.240.180.31:43716 remote=103.243.13.183:443 FD 15 flags=1


Translation: Squid did not read enough data from the server to finish
parsing TLS server handshake. Squid needs to read at least 520 more
bytes from FD 15.


> Later on after about 10 secs

> 2018/09/04 12:45:58.124 kid1| 83,5| bio.cc(140) read: FD 12 read 0 <= 65535

And an end-of-file on a wrong/different connection.


My recommendations remain the same, but please follow Amos' advice and
upgrade to the latest v4 first.

Please note that I do _not_ recommend analyzing ALL,9 logs. On average,
such analysis by non-developers wastes more time than it saves.

Alex.


Re: [squid-users] Squid Kerberos helper leaking memory - OpenBSD 6.3

2018-09-04 Thread Silamael

On 09/04/2018 03:51 PM, Amos Jeffries wrote:
> On 5/09/18 1:24 AM, Silamael wrote:
>> Hello,
>>
>> I'm currently investigating a memory leak with the Kerberos negotiate
>> authentication helper in Squid 3.5.27 under OpenBSD 6.3. It's our own
>> port with added Kerberos support, since OpenBSD's port does not support
>> Kerberos at all.
>>
>> Heimdal 7.5.0 is used as the Kerberos library. So far I have had no
>> luck in finding the memory leak itself.
>
> Have you tried valgrind and either GCC or clang static analysis features
> on your helper and/or library?

valgrind doesn't seem to work properly on OpenBSD. I get a bunch of
nonsense output and then a segmentation fault...

What are the GCC/clang static analysis features? I'm no C/C++ pro ;)

>> Would it be safe for Squid, to patch the helper code so that it does a
>> clean exit after every X processed requests?
>>
>> Or will this bring new problems on Squid's side?
>
> Should be okay so long as the helpers do reply to at least some queries,
> and do not exit all at once.
>
> Squid-3.5 will log errors about helpers exiting unexpectedly, but should
> only die if the helpers did so on their startup or many are dying within
> a shifting 30sec window of time.

At the moment a helper will call exit(0) after 1 requests. I don't know
how Squid distributes the requests over all helper processes, or whether
we would have too many helpers exiting within 30 seconds...

But it is good to know that there aren't any general objections.

> Squid-4 can use the auth_param on-persistent-overload=ERR option to
> prevent even the death cases above.

Good to know.

-- Matthias


Re: [squid-users] Squid fails to bump where there are too many DNS names in SAN field

2018-09-04 Thread Marcus Kool



On 04/09/18 11:20, Amos Jeffries wrote:
> On 4/09/18 7:33 PM, Ahmad, Sarfaraz wrote:
>> With debug_options ALL,9 and retrieving just this page, I found the
>> following relevant loglines (this is with an explicit CONNECT request),
>
> ... skip TLS/1.2 clientHello arriving
>
>> Later on after about 10 secs
>>
>> 2018/09/04 12:45:58.124 kid1| 83,7| AsyncJob.cc(123) callStart:
>> Ssl::PeekingPeerConnector status in: [ FD 12 job194686]
>> 2018/09/04 12:45:58.124 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid:
>> 0xf67698
>> 2018/09/04 12:45:58.124 kid1| 83,5| PeerConnector.cc(187) negotiate:
>> SSL_connect session=0x122c430...
>> 2018/09/04 12:45:58.124 kid1| 24,8| MemBlob.cc(101) memAlloc: blob1555830
>> memAlloc: requested=82887, received=82887
>> 2018/09/04 12:45:58.124 kid1| 24,7| SBuf.cc(865) reAlloc: SBuf6002798 new
>> store capacity: 82887
>> 2018/09/04 12:45:58.124 kid1| 24,8| SBuf.cc(139) rawAppendStart: SBuf6002798
>> start appending up to 65535 bytes
>> 2018/09/04 12:45:58.124 kid1| 83,5| bio.cc(140) read: FD 12 read 0 <= 65535
>> 2018/09/04 12:45:58.124 kid1| 83,5| NegotiationHistory.cc(83)
>> retrieveNegotiatedInfo: SSL connection info on FD 12 SSL version NONE/0.0
>> negotiated cipher
>> 2018/09/04 12:45:58.124 kid1| ERROR: negotiating TLS on FD 12:
>> error::lib(0):func(0):reason(0) (5/0/0)
>
> ... the server delivered 82KB of something which was not TLS/SSL syntax
> according to OpenSSL.

I ran 'ufdbpeek', an OpenSSL-based utility that I wrote which peeks at the
TLS certificate of a website; it displays a large, correct certificate and
shows that (in my case) the cipher ECDHE-RSA-AES256-GCM-SHA384 is used.

OpenSSL 1.0.2k and 1.1.0g have no issues with the certificate or the
handshake.

Also SSL Labs shows that all is well and that all popular modern browsers
and OpenSSL 0.9.8 and 1.0.1 can connect to the site:
https://www.ssllabs.com/ssltest/analyze.html?d=www.extremetech.com

Marcus

[...]


Re: [squid-users] Squid fails to bump where there are too many DNS names in SAN field

2018-09-04 Thread Amos Jeffries
On 4/09/18 7:33 PM, Ahmad, Sarfaraz wrote:
> With debug_options ALL,9 and retrieving just this page, I found the following 
> relevant loglines (this is with an explicit CONNECT request) ,
> 

... skip TLS/1.2 clientHello arriving


> Later on after about 10 secs
> 
> 2018/09/04 12:45:58.124 kid1| 83,7| AsyncJob.cc(123) callStart: 
> Ssl::PeekingPeerConnector status in: [ FD 12 job194686]
> 2018/09/04 12:45:58.124 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
> 0xf67698
> 2018/09/04 12:45:58.124 kid1| 83,5| PeerConnector.cc(187) negotiate: 
> SSL_connect session=0x122c430...
> 2018/09/04 12:45:58.124 kid1| 24,8| MemBlob.cc(101) memAlloc: blob1555830 
> memAlloc: requested=82887, received=82887
> 2018/09/04 12:45:58.124 kid1| 24,7| SBuf.cc(865) reAlloc: SBuf6002798 new 
> store capacity: 82887
> 2018/09/04 12:45:58.124 kid1| 24,8| SBuf.cc(139) rawAppendStart: SBuf6002798 
> start appending up to 65535 bytes
> 2018/09/04 12:45:58.124 kid1| 83,5| bio.cc(140) read: FD 12 read 0 <= 65535
> 2018/09/04 12:45:58.124 kid1| 83,5| NegotiationHistory.cc(83) 
> retrieveNegotiatedInfo: SSL connection info on FD 12 SSL version NONE/0.0 
> negotiated cipher
> 2018/09/04 12:45:58.124 kid1| ERROR: negotiating TLS on FD 12: 
> error::lib(0):func(0):reason(0) (5/0/0)

... the server delivered 82KB of something which was not TLS/SSL syntax
according to OpenSSL.

...
> 2018/09/04 12:45:58.125 kid1| 83,5| PeerConnector.cc(559) callBack: TLS setup 
> ended for local=10.240.180.31:43674 remote=103.243.13.183:443 FD 12 flags=1


> 
> Again as this is with an explicit CONNECT request, I do get 
> ERR_CANNOT_FORWARD and that error page uses a certificate signed for 
> www.extremetech.com by my internal CA without any thing in SAN field guessing 
> ssl_crtd isn't crashing here unlike the previous bugreport.
> Anything from these loglines ?

Lacking any server TLS info (eg due to an inability to complete the TLS
handshake with the server), the behaviour and output from Squid to the
client is expected to be as described above.

Amos


Re: [squid-users] Squid fails to bump where there are too many DNS names in SAN field

2018-09-04 Thread Amos Jeffries
On 4/09/18 8:00 PM, Ahmad, Sarfaraz wrote:
> Forgot to mention, this is with Squid-4.0.24.
> 

Please upgrade to Squid-4.2 ASAP. All 4.0.* releases are beta code and
no longer supported.

Recent as that release was, there have already been several rather major
fixes to the SSL-Bump code since that version. I don't think the upgrade
will solve this particular problem (though it may, if we are lucky), and
those other issues need to be avoided.

Amos


Re: [squid-users] Squid Kerberos helper leaking memory - OpenBSD 6.3

2018-09-04 Thread Amos Jeffries
On 5/09/18 1:24 AM, Silamael wrote:
> Hello,
> 
> I'm currently investigating a memory leak with the Kerberos negotiate
> authentication helper in Squid 3.5.27 under OpenBSD 6.3. It's our own
> port with added Kerberos support, since OpenBSD's port does not support
> Kerberos at all.
> 
> Heimdal 7.5.0 is used as the Kerberos library. So far I have had no luck
> in finding the memory leak itself.

Have you tried valgrind and either GCC or clang static analysis features
on your helper and/or library?

> 
> Would it be safe for Squid, to patch the helper code so that it does a
> clean exit after every X processed requests?
> 
> Or will this bring new problems on Squid's side?
> 

Should be okay so long as the helpers do reply to at least some queries,
and do not exit all at once.

Squid-3.5 will log errors about helpers exiting unexpectedly, but should
only die if the helpers did so on their startup or many are dying within
a shifting 30sec window of time.

Squid-4 can use the auth_param on-persistent-overload=ERR option to
prevent even the death cases above.
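In Squid-4 that option is attached to the auth_param children line; a
sketch (the helper path, service principal and child counts here are
illustrative, not taken from this thread):

```
auth_param negotiate program /usr/lib/squid/negotiate_kerberos_auth -s HTTP/proxy.example.com
auth_param negotiate children 20 startup=0 idle=1 on-persistent-overload=ERR
```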


Amos


Re: [squid-users] Squid intermittently not sending host header to peer

2018-09-04 Thread Amos Jeffries
On 4/09/18 2:13 AM, Michael Thomas wrote:
> HI Amos,
> 
> Thank you for responding.
> 
> To clarify, when I referred to HTTPS requests, I was referring to
> CONNECT requests - I should have been more clear, my apologies. No
> authentication is being performed by either server, so I'm not sure what
> you're seeing in the logs that relates to that.

The log format looks like Squid native format. On all the 200-status
transactions there is "connect" instead of "-" where that format prints
the username.


> 
> CONNECT requests are logged correctly on both squid servers and appear
> to operate correctly for every request.
> 
> Interestingly, I was mistaken before. It's not the host header that's
> missing - that's still present. It's the full URI within the GET request.
> 

Nod. Squid2 is receiving an origin-form request, such as a client would
send *inside* a CONNECT tunnel, or Squid would send on DIRECT traffic.

The former was what I suspected at first, but the message does say Via:
with Squid1 details. So somehow Squid1 must think this connection is a
DIRECT (origin) connection.


> As requested, here is all the information:
> 
> *Squid1 version and build information:*
> Squid Cache: Version 3.5.12
> Service Name: squid
> Ubuntu linux

Please upgrade this machine if you can. All this may turn out to be a
side effect of one of the many bugs fixed already.


> 
> Here is a verbatim copy of both squid.conf files, with sensitive
> information replaced:
> 
> *Squid1:*
> http_port 3128 name=port_3128
> http_access allow all
> nonhierarchical_direct off
> 
> acl port_3128_acl myportname port_3128
> always_direct deny port_3128_acl
> never_direct allow port_3128_acl

If this is your actual config there is no need for these ACLs. This Squid
already accepts *everything* handed to it which bears even a vague
resemblance to HTTP syntax. All they are doing is creating a false
illusion of control.

It should be sufficient to use:
  never_direct allow all
  cache_peer_access proxy3128 allow all


Really you should keep the security checks we put into the default
config. They are there to prevent things like Squid being instructed to
send spam email or, worse, DoS'ing your internal network.
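A minimal Squid1 sketch along those lines, keeping the stock security
checks (abbreviated here; the localnet and Safe_ports definitions come
from the default squid.conf and should be kept in full):

```
# stock security checks (abbreviated - keep the full default set)
acl localnet src 10.0.0.0/8
acl Safe_ports port 80 443
http_access deny !Safe_ports
http_access allow localnet
http_access deny all

# relay all traffic through the parent proxy
cache_peer 2.2.2.2 parent 3128 0 no-query proxy-only name=proxy3128
never_direct allow all
cache_peer_access proxy3128 allow all
```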


> 
> # 3128
> cache_peer 2.2.2.2 parent 3128 0 no-query proxy-only default  name=proxy3128
> cache_peer_access proxy3128 allow port_3128_acl
> cache_peer_access proxy3128 deny all
> debug_options 11,2
> 
> 
> *Squid2:*
> http_access allow all
> http_port 3128
> debug_options 11,2
> 
> 
> And here is a copy of the cache.log for a failed request:
> 
> *Squid1:*
...
> --
> 2018/09/03 13:36:45.088 kid1| 11,2| http.cc(2234) sendRequest: HTTP
> Server local=1.1.1.1:55718  remote=2.2.2.2:3128
>  FD 14 flags=1
> 2018/09/03 13:36:45.089 kid1| 11,2| http.cc(2235) sendRequest: HTTP
> Server REQUEST:
> -
> GET /messages/391/ HTTP/1.1
> Upgrade-Insecure-Requests: 1
> User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36
> (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36
> Accept:
> text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
> Accept-Encoding: gzip, deflate
> Accept-Language: en-US,en;q=0.9,en-GB;q=0.8
> Cookie: __cfduid=redacted; csrftoken=redacted; sessionid=redacted;
> _ga=redacted
> AlexaToolbar-ALX_NS_PH: AlexaToolbar/alx-4.0.3
> Host: redacted.com 
> Via: 1.1 Squid1 (squid/3.5.12)
> X-Forwarded-For: 3.3.3.3
> Cache-Control: max-age=0
> Connection: keep-alive
> 


Okay. Next thing to do is identify what type of connection Squid1 thinks
this FD is being used for. Please add debug levels "44,3 51,3" to the
Squid1 config and repeat the test.
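That is, the Squid1 debug_options line would become something like:

```
debug_options 11,2 44,3 51,3
```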



Amos


[squid-users] Squid Kerberos helper leaking memory - OpenBSD 6.3

2018-09-04 Thread Silamael

Hello,

I'm currently investigating a memory leak with the Kerberos negotiate
authentication helper in Squid 3.5.27 under OpenBSD 6.3. It's our own
port with added Kerberos support, since OpenBSD's port does not support
Kerberos at all.


Heimdal 7.5.0 is used as the Kerberos library. So far I have had no luck
in finding the memory leak itself.


Would it be safe for Squid, to patch the helper code so that it does a 
clean exit after every X processed requests?


Or will this bring new problems on Squid's side?


Thanks for any help!


-- Matthias



[squid-users] [icap] Web Safety 6.4 web filter plugin for Squid proxy is available

2018-09-04 Thread Rafael Akchurin
Greetings everyone,

The next version of the Web Safety web filter for Squid proxy (version
6.4.0.2517, built on July 5, 2018) is now available for download.
This version contains the following fixes and improvements:


  *   The YouTube Guard filtering daemon now runs as a separate process. This 
allows traffic to be filtered by both the Google Safe Browsing and YouTube 
restriction modules at the same time.
  *   The UI for YouTube filtering rules has been completely rewritten; it is 
now possible to selectively filter YouTube videos by policy (enable for 
students, disable for staff).
  *   Fixed error in policy filtering exclusions by remote domain IP address.
  *   Added initial support for Ubuntu 18 LTS and Squid 4 (full support will be 
added in Web Safety 6.5).
  *   Added advanced field to manually manage additions to NIC management file 
/etc/network/interfaces on Ubuntu 16 and Debian 9.
  *   Builds for FreeBSD (pfSense) are no longer produced; please use version 
6.3 if you require running Web Safety on FreeBSD (pfSense). We are now trying 
to build a separate product for the pfSense platform.

Pre-configured virtual appliance is available from 
https://www.diladele.com/virtual_appliance.html (can be run in VMWare 
ESXi/vSphere or Microsoft Hyper-V).
The same virtual appliance can be easily deployed in Microsoft Azure with the 
following link 
https://azuremarketplace.microsoft.com/en-us/marketplace/apps/diladele.websafety?tab=Overview

GitHub repo with automation scripts we used to build this virtual appliance 
from stock Ubuntu 16 LTS image is at 
https://github.com/diladele/websafety-virtual-appliance
Your questions/issues/bugs are welcome at 
supp...@diladele.com

Direct link to virtual appliance:


  *   
http://packages.diladele.com/websafety/6.4.0.2517/va/ubuntu16/websafety.zip

Version 6.5 will include initial implementation of Application Control (like 
allow Spotify, block Facebook Messenger) module as well as support for Ubuntu 
18 LTS and latest Squid 4. See the version history at 
https://docs.diladele.com/version_history/index.html

Thanks to all of you for making this possible!

Best regards,
Rafael Akchurin
Diladele B.V.



Re: [squid-users] Squid fails to bump where there are too many DNS names in SAN field

2018-09-04 Thread Ahmad, Sarfaraz
Forgot to mention, this is with Squid-4.0.24.

-Original Message-
From: Ahmad, Sarfaraz 
Sent: Tuesday, September 4, 2018 1:04 PM
To: 'Amos Jeffries' ; squid-users@lists.squid-cache.org
Cc: 'rouss...@measurement-factory.com' 
Subject: RE: [squid-users] Squid fails to bump where there are too many DNS 
names in SAN field

With debug_options ALL,9 and retrieving just this page, I found the following 
relevant loglines (this is with an explicit CONNECT request) ,

2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(30) SBuf: SBuf6005084 created
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: 
TLSPlaintext.type=22 occupying 1 bytes @91 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: 
TLSPlaintext.version.major=3 occupying 1 bytes @92 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: 
TLSPlaintext.version.minor=3 occupying 1 bytes @93 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: 
TLSPlaintext.fragment.length=16384 occupying 2 bytes @94 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(38) SBuf: SBuf6005085 created from 
id SBuf6005054
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(74) got: 
TLSPlaintext.fragment.octets= <16384 OCTET Bytes fit here> 
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(70) ~SBuf: SBuf6005085 destructed
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(57) got: TLSPlaintext 
occupying 16389 bytes @91 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,7| SBuf.cc(160) rawSpace: reserving 16384 for 
SBuf6005052
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(886) cow: SBuf6005052 new size:16470
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(857) reAlloc: SBuf6005052 new size: 
16470
2018/09/04 12:45:46.112 kid1| 24,9| MemBlob.cc(56) MemBlob: constructed, 
this=0x1dd2860 id=blob1555829 reserveSize=16470
2018/09/04 12:45:46.112 kid1| 24,8| MemBlob.cc(101) memAlloc: blob1555829 
memAlloc: requested=16470, received=16470
2018/09/04 12:45:46.112 kid1| 24,7| SBuf.cc(865) reAlloc: SBuf6005052 new store 
capacity: 16470
2018/09/04 12:45:46.112 kid1| 24,7| SBuf.cc(85) assign: assigning SBuf6005056 
from SBuf6005052
2018/09/04 12:45:46.112 kid1| 24,9| MemBlob.cc(82) ~MemBlob: destructed, 
this=0x1dd27a0 id=blob1555826 capacity=65535 size=8208
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(30) SBuf: SBuf6005086 created
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: 
Handshake.msg_type=11 occupying 1 bytes @86 in 0xfa4d70;
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: 
Handshake.msg_body.length=16900 occupying 3 bytes @87 in 0xfa4d70;
2018/09/04 12:45:46.112 kid1| 24,5| BinaryTokenizer.cc(47) want: 520 more bytes 
for Handshake.msg_body.octets occupying 16900 bytes @90 in 0xfa4d70;
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(70) ~SBuf: SBuf6005086 destructed
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(70) ~SBuf: SBuf6005084 destructed
2018/09/04 12:45:46.112 kid1| 83,5| Handshake.cc(532) parseHello: need more data
2018/09/04 12:45:46.112 kid1| 83,7| bio.cc(168) stateChanged: FD 15 now: 0x1002 
23RSHA (SSLv2/v3 read server hello A)
2018/09/04 12:45:46.112 kid1| 83,5| PeerConnector.cc(451) noteWantRead: 
local=10.240.180.31:43716 remote=103.243.13.183:443 FD 15 flags=1
2018/09/04 12:45:46.112 kid1| 5,3| comm.cc(559) commSetConnTimeout: 
local=10.240.180.31:43716 remote=103.243.13.183:443 FD 15 flags=1 timeout 60
2018/09/04 12:45:46.112 kid1| 5,5| ModEpoll.cc(117) SetSelect: FD 15, type=1, 
handler=1, client_data=0x2818f58, timeout=0
2018/09/04 12:45:46.112 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0x2818f58
2018/09/04 12:45:46.112 kid1| 83,7| AsyncJob.cc(154) callEnd: 
Ssl::PeekingPeerConnector status out: [ FD 15 job194701]
2018/09/04 12:45:46.112 kid1| 83,7| AsyncCallQueue.cc(57) fireNext: leaving 
Security::PeerConnector::negotiate()
Later on after about 10 secs

2018/09/04 12:45:58.124 kid1| 83,7| AsyncJob.cc(123) callStart: 
Ssl::PeekingPeerConnector status in: [ FD 12 job194686]
2018/09/04 12:45:58.124 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0xf67698
2018/09/04 12:45:58.124 kid1| 83,5| PeerConnector.cc(187) negotiate: 
SSL_connect session=0x122c430
2018/09/04 12:45:58.124 kid1| 24,7| SBuf.cc(160) rawSpace: reserving 65535 for 
SBuf6002798
2018/09/04 12:45:58.124 kid1| 24,8| SBuf.cc(886) cow: SBuf6002798 new size:82887
2018/09/04 12:45:58.124 kid1| 24,8| SBuf.cc(857) reAlloc: SBuf6002798 new size: 
82887
2018/09/04 12:45:58.124 kid1| 24,9| MemBlob.cc(56) MemBlob: constructed, 
this=0x1dd27a0 id=blob1555830 reserveSize=82887
2018/09/04 12:45:58.124 kid1| 24,8| MemBlob.cc(101) memAlloc: blob1555830 
memAlloc: requested=82887, received=82887
2018/09/04 12:45:58.124 kid1| 24,7| SBuf.cc(865) reAlloc: SBuf6002798 new store 
capacity: 82887
2018/09/04 12:45:58.124 kid1| 24,8| SBuf.cc(139) rawAppendStart: SBuf6002798 
start appending up to 65535 bytes
2018/09/04 12:45:58.124 kid1| 83,5| bio.cc(140) read: FD 12 read 0 <= 65535
2018/09/04 

Re: [squid-users] Squid fails to bump where there are too many DNS names in SAN field

2018-09-04 Thread Ahmad, Sarfaraz
With debug_options ALL,9 and retrieving just this page, I found the following 
relevant loglines (this is with an explicit CONNECT request) ,

2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(30) SBuf: SBuf6005084 created
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: 
TLSPlaintext.type=22 occupying 1 bytes @91 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: 
TLSPlaintext.version.major=3 occupying 1 bytes @92 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: 
TLSPlaintext.version.minor=3 occupying 1 bytes @93 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: 
TLSPlaintext.fragment.length=16384 occupying 2 bytes @94 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(38) SBuf: SBuf6005085 created from 
id SBuf6005054
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(74) got: 
TLSPlaintext.fragment.octets= <16384 OCTET Bytes fit here> 
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(70) ~SBuf: SBuf6005085 destructed
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(57) got: TLSPlaintext 
occupying 16389 bytes @91 in 0xfa4d38;
2018/09/04 12:45:46.112 kid1| 24,7| SBuf.cc(160) rawSpace: reserving 16384 for 
SBuf6005052
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(886) cow: SBuf6005052 new size:16470
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(857) reAlloc: SBuf6005052 new size: 
16470
2018/09/04 12:45:46.112 kid1| 24,9| MemBlob.cc(56) MemBlob: constructed, 
this=0x1dd2860 id=blob1555829 reserveSize=16470
2018/09/04 12:45:46.112 kid1| 24,8| MemBlob.cc(101) memAlloc: blob1555829 
memAlloc: requested=16470, received=16470
2018/09/04 12:45:46.112 kid1| 24,7| SBuf.cc(865) reAlloc: SBuf6005052 new store 
capacity: 16470
2018/09/04 12:45:46.112 kid1| 24,7| SBuf.cc(85) assign: assigning SBuf6005056 
from SBuf6005052
2018/09/04 12:45:46.112 kid1| 24,9| MemBlob.cc(82) ~MemBlob: destructed, 
this=0x1dd27a0 id=blob1555826 capacity=65535 size=8208
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(30) SBuf: SBuf6005086 created
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: 
Handshake.msg_type=11 occupying 1 bytes @86 in 0xfa4d70;
2018/09/04 12:45:46.112 kid1| 24,7| BinaryTokenizer.cc(65) got: 
Handshake.msg_body.length=16900 occupying 3 bytes @87 in 0xfa4d70;
2018/09/04 12:45:46.112 kid1| 24,5| BinaryTokenizer.cc(47) want: 520 more bytes 
for Handshake.msg_body.octets occupying 16900 bytes @90 in 0xfa4d70;
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(70) ~SBuf: SBuf6005086 destructed
2018/09/04 12:45:46.112 kid1| 24,8| SBuf.cc(70) ~SBuf: SBuf6005084 destructed
2018/09/04 12:45:46.112 kid1| 83,5| Handshake.cc(532) parseHello: need more data
2018/09/04 12:45:46.112 kid1| 83,7| bio.cc(168) stateChanged: FD 15 now: 0x1002 
23RSHA (SSLv2/v3 read server hello A)
2018/09/04 12:45:46.112 kid1| 83,5| PeerConnector.cc(451) noteWantRead: 
local=10.240.180.31:43716 remote=103.243.13.183:443 FD 15 flags=1
2018/09/04 12:45:46.112 kid1| 5,3| comm.cc(559) commSetConnTimeout: 
local=10.240.180.31:43716 remote=103.243.13.183:443 FD 15 flags=1 timeout 60
2018/09/04 12:45:46.112 kid1| 5,5| ModEpoll.cc(117) SetSelect: FD 15, type=1, 
handler=1, client_data=0x2818f58, timeout=0
2018/09/04 12:45:46.112 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0x2818f58
2018/09/04 12:45:46.112 kid1| 83,7| AsyncJob.cc(154) callEnd: 
Ssl::PeekingPeerConnector status out: [ FD 15 job194701]
2018/09/04 12:45:46.112 kid1| 83,7| AsyncCallQueue.cc(57) fireNext: leaving 
Security::PeerConnector::negotiate()
Later on after about 10 secs

2018/09/04 12:45:58.124 kid1| 83,7| AsyncJob.cc(123) callStart: 
Ssl::PeekingPeerConnector status in: [ FD 12 job194686]
2018/09/04 12:45:58.124 kid1| 45,9| cbdata.cc(419) cbdataReferenceValid: 
0xf67698
2018/09/04 12:45:58.124 kid1| 83,5| PeerConnector.cc(187) negotiate: 
SSL_connect session=0x122c430
2018/09/04 12:45:58.124 kid1| 24,7| SBuf.cc(160) rawSpace: reserving 65535 for 
SBuf6002798
2018/09/04 12:45:58.124 kid1| 24,8| SBuf.cc(886) cow: SBuf6002798 new size:82887
2018/09/04 12:45:58.124 kid1| 24,8| SBuf.cc(857) reAlloc: SBuf6002798 new size: 
82887
2018/09/04 12:45:58.124 kid1| 24,9| MemBlob.cc(56) MemBlob: constructed, 
this=0x1dd27a0 id=blob1555830 reserveSize=82887
2018/09/04 12:45:58.124 kid1| 24,8| MemBlob.cc(101) memAlloc: blob1555830 
memAlloc: requested=82887, received=82887
2018/09/04 12:45:58.124 kid1| 24,7| SBuf.cc(865) reAlloc: SBuf6002798 new store 
capacity: 82887
2018/09/04 12:45:58.124 kid1| 24,8| SBuf.cc(139) rawAppendStart: SBuf6002798 
start appending up to 65535 bytes
2018/09/04 12:45:58.124 kid1| 83,5| bio.cc(140) read: FD 12 read 0 <= 65535
2018/09/04 12:45:58.124 kid1| 83,5| NegotiationHistory.cc(83) 
retrieveNegotiatedInfo: SSL connection info on FD 12 SSL version NONE/0.0 
negotiated cipher
2018/09/04 12:45:58.124 kid1| ERROR: negotiating TLS on FD 12: 
error::lib(0):func(0):reason(0) (5/0/0)
2018/09/04 12:45:58.125 kid1| 45,9| cbdata.cc(256) cbdataInternalAlloc: 
Allocating