Re: [squid-users] Squid TPROXY issues with Google sites

2017-05-26 Thread Alex Rousskov
On 05/26/2017 05:22 PM, Vieri wrote:

> If I have this:
> 
> ssl_bump peek all
> ssl_bump splice AllowTroublesome
> ssl_bump bump all

... then you have a configuration that does not make sense because one
cannot bump after peeking at step2. Your configuration is equivalent to

  * if the current step is 1 or 2, then peek
  * if AllowTroublesome during step 3, then splice
  * otherwise, do the impossible

which, bugs notwithstanding, is equivalent to

  ssl_bump peek all
  ssl_bump splice all

If the above does not block anything then your http_access rules allow
all CONNECTs (and you never get beyond CONNECTs because you do not bump).


> If I replace the above snippet with this:
> 
> ssl_bump stare all
> ssl_bump bump all

This configuration makes sense (but it may not do what you want).

If you want to be able to make a "splice or bump" decision, then you
have to make it during step2:

  ssl_bump peek step1
  ssl_bump splice AllowTroublesome
  ssl_bump bump all


> If I had an http_access rule that allowed the transaction to take
> place then I would expect it to happen regardless of the ssl_bump
> directive.

Your expectations are wrong. SslBump directives expose http_access rules
to more (or fewer) transactions. For example, the "splice all"
configuration does not expose http_access rules to any GET requests.


> Alex, you mention the SSLPeekAndSplice web page. I'll try to sum it
> up in just a few lines

The SslBump feature is too complex to sum up in just a few lines
unless those lines are something like "do not use it without fully
understanding it". Once you learn the basics of the SSL handshake, which
Squid steps look at which parts of the handshake, and what the essential
difference between peeking and staring is, SslBump becomes less of a
black art. Without that knowledge, it is a dark mystery.


> - peek implies splice which means you can't do content analysis (as
> in scan for threats via c-icap modules)

Wrong. Peeking at step1 does not preclude future bumping. Peeking at
step2 precludes future bumping. If you peek at step2, then you have to
splice or terminate at step3.
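
A minimal sketch of these constraints in squid.conf form (AllowTroublesome is the ACL name used elsewhere in this thread; treat this as an illustration, not a recommended policy):

  # peek through step2: from here on, only splice or terminate are possible
  ssl_bump peek step1
  ssl_bump peek step2
  # at step3 the choice is splice-or-terminate; bump is no longer available
  ssl_bump splice AllowTroublesome
  ssl_bump terminate all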


> - stare implies bump which means you can do content analysis

Wrong for similar/symmetrical reasons.


> - you don't need to stare, you can just bump

Wrong in many cases -- usually you _do_ need to stare (or peek) at least
at step1, but YMMV.


> - you need to stare before bump if you want the clients to accept a
> certificate with domain names instead of IP addresses

Misleading. You need to stare or peek to get more information, including
the server domain name. That information comes from either the client or
the server, depending on the step. That information is used to generate
a fake certificate. The more info Squid has, the better it can
fake/mimic the true certificate, but learning more information restricts
the set of ssl_bump actions.
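
One pattern that follows from this, sketched here for the all-bumped case: stare at both steps so Squid sees the true server certificate and can mimic it closely in the generated fake. Note that staring at step2 forfeits the ability to splice, the mirror image of peeking:

  ssl_bump stare step1
  ssl_bump stare step2
  ssl_bump bump all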


> - you can bump first by ACLs and then splice the rest

If you mean that ssl_bump rules may start with "bump" rules and end with
"splice" rules, then this is true, but the reverse is also true, and the
rules may contain a mixture of many actions.
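
For example, a rule set that bumps a few selected sites and splices everything else might look like this (BumpThese is a hypothetical ACL name, used only for illustration):

  acl BumpThese ssl::server_name .example.com
  ssl_bump peek step1
  ssl_bump bump BumpThese
  ssl_bump splice all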


> - you can bump after peek but only if you do that at SslBump1

Too vague to be generally useful. You can bump after peeking at step1.
You cannot bump after peeking at step2.


> the "Bump All Sites Except Banks" example where the next
> phrase contradicts the title by saying that the requests to non-banks
> won't be bumped.

Correct! The example, the title, and the warning were written by
different people. One of them is right. If you know how SslBump works,
you know which part is correct. When you do, please feel free to
propose changes to fix the wiki page. Hundreds of folks receive SslBump
help on this mailing list, but only one of them has taken the trouble to
improve the page afterwards (thank you again, Marcus Kool!). Parts of
that page still need a lot of work.


HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP_DENIED/407 accessing webserver on same machine as squid

2017-05-26 Thread j m
Yes, I sort of pieced together what I found online, which is probably 
dangerous.  I really need to become familiar with how exactly this works for 
security's sake if nothing else.

From: Amos Jeffries
To: j m ; "squid-users@lists.squid-cache.org"
Sent: Friday, May 26, 2017 2:53 PM
Subject: Re: [squid-users] TCP_DENIED/407 accessing webserver on same machine as squid
Ah, your problem seems to be a misunderstanding of how authentication works.

What Squid receives on messages can have three forms:

  1) no credentials at all
  2) correct credentials
  3) invalid credentials

Your definition of the auth_users ACL using "REQUIRED" takes care of the 
(1) situation. Squid will respond with 407 to get credentials from any 
client that does not send any. This is what you are seeing on that 
second log line of your previous post, and the popup in your tests.

Now the "http_access allow auth_users" line only takes care of situation 
(2), permitting valid users.

Which leaves situation (3) undefined. ... All other traffic continues on 
to the next http_access line, which is "allow all", ouch.


This is why best practice is to use a "deny" line like so:
  http_access deny !auth_users

... which makes it clear what is happening for every non-authenticated 
thing, both situation (1) and (2) traffic.

Rules permitting things through without authenticating go above that 
http_access line, and things applying to authenticated users go below it.

Amos





Re: [squid-users] TCP_DENIED/407 accessing webserver on same machine as squid

2017-05-26 Thread Amos Jeffries

On 27/05/17 07:52, Amos Jeffries wrote:

This is why best practice is to use a "deny" line like so:
  http_access deny !auth_users

... which makes it clear what is happening for every non-authenticated 
thing, both situation (1) and (2) traffic.


Sorry "both situation (1) and (3) traffic".

Amos



Re: [squid-users] TCP_DENIED/407 accessing webserver on same machine as squid

2017-05-26 Thread Amos Jeffries

Ah, your problem seems to be a misunderstanding of how authentication works.

What Squid receives on messages can have three forms:

 1) no credentials at all
 2) correct credentials
 3) invalid credentials

Your definition of the auth_users ACL using "REQUIRED" takes care of the 
(1) situation. Squid will respond with 407 to get credentials from any 
client that does not send any. This is what you are seeing on that 
second log line of your previous post, and the popup in your tests.


Now the "http_access allow auth_users" line only takes care of situation 
(2), permitting valid users.


Which leaves situation (3) undefined. ... All other traffic continues on 
to the next http_access line, which is "allow all", ouch.



This is why best practice is to use a "deny" line like so:
  http_access deny !auth_users

... which makes it clear what is happening for every non-authenticated 
thing, both situation (1) and (2) traffic.


Rules permitting things through without authenticating go above that 
http_access line, and things applying to authenticated users go below it.
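
Putting those pieces together, the resulting http_access skeleton might look like the following (the "localnet" exemption line is hypothetical; the key point is that the deny line sits between the no-auth exemptions and the authenticated-user rules):

  http_access allow localnet           # hypothetical: traffic permitted without auth
  http_access deny !auth_users         # stops situations (1) and (3)
  http_access allow auth_users         # situation (2): valid credentials
  http_access deny all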


Amos



Re: [squid-users] TCP_DENIED/407 accessing webserver on same machine as squid

2017-05-26 Thread j m
Here's my squid.conf.  For what it's worth, shellinabox can be made to use only 
HTTP if that's the issue.

auth_param digest program /usr/lib/squid/digest_file_auth -c /etc/squid/passwd
auth_param digest realm myrealm
auth_param digest children 2

acl auth_users proxy_auth REQUIRED
acl SSL_ports port 443
acl SSL_ports port SHELLINABOX_PORT
acl Safe_ports port SHELLINABOX_PORT
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
#acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow auth_users
http_access allow all

https_port SQUID_PORT cert=/etc/squid/squid.pem
cache deny all
netdb_filename none

From: Amos Jeffries
To: squid-users@lists.squid-cache.org
Sent: Friday, May 26, 2017 12:29 PM
Subject: Re: [squid-users] TCP_DENIED/407 accessing webserver on same machine as squid


On 27/05/17 04:17, j m wrote:
> I have a webserver and squid 3.5 running on the same Linux machine.
> The webserver is actually part of shellinabox, so it's only for me to
> access. Shellinabox simply presents a terminal and login in a web
> browser. I want it to be accessible only through squid for more
> security.
>
> shellinabox works fine if I access it directly, but through squid I
> see this in access.log:
>
> 1495813953.860 79 204.155.22.30 TCP_TUNNEL/200 1440 CONNECT IP:PORT USER HIER_DIRECT/IP
> 1495813962.001 0 204.155.22.30 TCP_DENIED/407 4397 CONNECT IP:PORT USER HIER_NONE/- text/html
>
> I've replaced the real IP, PORT, and USER with those words, however
> the real PORT is a nonstandard port number. There are some other
> posts I found mentioning a 407 error and it was said it occurs when
> the webpage is asking for authentication. However I don't understand
> this, since shellinabox only displays a login prompt which I wouldn't
> think would be a problem. Another post said a 407 is when squid auth
> is failing, but I can get to external websites through squid.
>
> Does it matter that what I'm trying to access is HTTPS instead of HTTP?
Yes it does. Beyond the obvious encryption there are messaging 
differences that directly affect what the proxy can do.


The first log entry indicates that something has already been done to 
let the port "work", so your config is already non-standard and probably 
doing something weird. The presence of a USER value other than "-" 
indicates that the proxy-auth is working at least for that transaction.

Yes, the 407 is login to *Squid*. It has nothing to do with the shellinabox 
software; the HIER_NONE/- on the second line says shellinabox is not 
even being contacted yet for that transaction.


It is not possible to say why anything is happening here without knowing 
your config structure and intended policy. You will need to provide your 
squid.conf details to get much help.

If you need to obfuscate IPs, please map them as if you were using the 
10/8 or 192.168/16 ranges so we can still identify any subtle things, 
like TCP connections going wrong, without revealing your public addresses.

Amos





Re: [squid-users] Repeated assertions

2017-05-26 Thread Alex Rousskov
On 05/26/2017 10:55 AM, Amos Jeffries wrote:
> On 27/05/17 03:27, Junior Cunha wrote:
>> "assertion failed: Read.cc:73: "fd_table[conn->fd].halfClosedReader !=
>> NULL" can be seen in the cache.log file.


> I recommend for you to try the 4.0


FWIW, I second Amos's recommendation -- at least consider an upgrade! It
is a hassle to upgrade, especially if you have many Squid instances, and
the upgrade itself may introduce new/different problems, but it may
still be a more efficient path forward. SslBump code in v4 is not
problem-free, but it is better than the v3 code, on many levels.

Alex.



Re: [squid-users] TCP_DENIED/407 accessing webserver on same machine as squid

2017-05-26 Thread Amos Jeffries



On 27/05/17 04:17, j m wrote:
> I have a webserver and squid 3.5 running on the same Linux machine.
> The webserver is actually part of shellinabox, so it's only for me to
> access. Shellinabox simply presents a terminal and login in a web
> browser. I want it to be accessible only through squid for more
> security.
>
> shellinabox works fine if I access it directly, but through squid I
> see this in access.log:
>
> 1495813953.860 79 204.155.22.30 TCP_TUNNEL/200 1440 CONNECT IP:PORT USER HIER_DIRECT/IP
> 1495813962.001 0 204.155.22.30 TCP_DENIED/407 4397 CONNECT IP:PORT USER HIER_NONE/- text/html
>
> I've replaced the real IP, PORT, and USER with those words, however
> the real PORT is a nonstandard port number. There are some other
> posts I found mentioning a 407 error and it was said it occurs when
> the webpage is asking for authentication. However I don't understand
> this, since shellinabox only displays a login prompt which I wouldn't
> think would be a problem. Another post said a 407 is when squid auth
> is failing, but I can get to external websites through squid.
>
> Does it matter that what I'm trying to access is HTTPS instead of HTTP?
Yes it does. Beyond the obvious encryption there are messaging 
differences that directly affect what the proxy can do.



The first log entry indicates that something has already been done to 
let the port "work", so your config is already non-standard and probably 
doing something weird. The presence of a USER value other than "-" 
indicates that the proxy-auth is working at least for that transaction.


Yes, the 407 is login to *Squid*. It has nothing to do with the shellinabox 
software; the HIER_NONE/- on the second line says shellinabox is not 
even being contacted yet for that transaction.



It is not possible to say why anything is happening here without knowing 
your config structure and intended policy. You will need to provide your 
squid.conf details to get much help.


If you need to obfuscate IPs, please map them as if you were using the 
10/8 or 192.168/16 ranges so we can still identify any subtle things, 
like TCP connections going wrong, without revealing your public addresses.


Amos



Re: [squid-users] Squid TPROXY issues with Google sites

2017-05-26 Thread Amos Jeffries

On 27/05/17 03:44, Vieri wrote:

Hi,

I'd like to block access to Google Mail but allow it to Google Drive. I also 
need to intercept Google Drive traffic (https) and scan its content via c-icap 
modules for threats (with clamav and other tools which would block potentially 
harmful files).

I've failed so far.

I added mail.google.com to a custom file named "denied.domains" and loaded as 
denied_domains ACL in Squid. I know that in TLS traffic there are only IP addresses, so I created 
the "server_name" ACL as seen below.


Erm, not quite. Having to deal with the raw IP comes from the use of 
TPROXY (or NAT), not TLS. It is used when Squid is deciding whether to 
permit the traffic on the TCP connection to be processed.


Once the TLS ClientHello is received (by "peek"), the TLS SNI (if any) becomes available.



[...]
acl denied_domains dstdomain "/usr/local/share/proxy-settings/denied.domains"
http_access deny denied_domains !allowed_groups !allowed_ips
http_access deny CONNECT denied_domains !allowed_groups !allowed_ips
[...]
reply_header_access Alternate-Protocol deny all
acl AllowTroublesome ssl::server_name .google.com .gmail.com
acl DenyTroublesome ssl::server_name mail.google.com
http_access deny DenyTroublesome
ssl_bump peek all
ssl_bump splice AllowTroublesome
ssl_bump bump all

First of all, I was expecting that if a client tried to open 
https://mail.google.com, the connection would be blocked by Squid 
(DenyTroublesome ACL). It isn't. Why?


Any of the http_access lines you omitted from the config snippet might 
be letting it through. Order is important, and knowing the whole 
http_access sequence (and more) is just as important for correctly 
answering a question such as this. So take the below with a grain of 
salt; I am assuming nothing else in your config has subtle effects on 
the processing outcome.


There are several things that can lead to it;

* Google servers do have working rDNS, so the raw IP becomes a server 
hostname for the dstdomain ACL to match.
 - the rDNS is within *.1e100.net so it will not match your list as shown, 
but it is enough to possibly evade your IP rules.


* If none of the provided access control lines match, Squid inverts the 
action of the last one and applies that.

  - yours are all "deny", so the implicit action there is "allow all"

* "ssl_bump peek all" fetches the TLS SNI server name for the 
ssl::server_name ACL to match.
 - so by the time Squid gets to processing AllowTroublesome it 
already knows the client is trying to reach a *.google.com domain.



Second, I am unable to scan content since Squid is splicing all Google traffic. However, 
if I "bump AllowTroublesome", I can enter my username in 
https://accounts.google.com, but trying to access to the next step (user password) fails 
with an unreported error.

Any suggestions?


The rest of your related squid.conf is needed for that, including 
details of the files included into the ACLs. In particular it is not 
clear what this "unreported error" is or why it happens.


Amos



Re: [squid-users] Repeated assertions

2017-05-26 Thread Alex Rousskov
On 05/26/2017 09:27 AM, Junior Cunha wrote:

> We are facing a strange problem with a squid 3.5.25 installation in
> one of our customers. Every minute an assertion like this "assertion
> failed: Read.cc:73: "fd_table[conn->fd].halfClosedReader != NULL" can
> be seen in the cache.log file.

Could be http://bugs.squid-cache.org/show_bug.cgi?id=4554
and/or http://bugs.squid-cache.org/show_bug.cgi?id=4270

If you can collect a stack trace from that assertion, please post it to
the first bugzilla link mentioned above. A stack trace may help
developers fix this bug. AFAIK, nobody is working on that fix right now
though :-(.


Thank you,

Alex.



Re: [squid-users] Repeated assertions

2017-05-26 Thread Amos Jeffries

On 27/05/17 03:27, Junior Cunha wrote:

Hi all,

We are facing a strange problem with a squid 3.5.25 installation in one of our customers. 
Every minute an assertion like this "assertion failed: Read.cc:73: 
"fd_table[conn->fd].halfClosedReader != NULL" can be seen in the cache.log file. 
Below some information related to our current setup:




Some of the changes in the 4.0.16 - 4.0.19 (beta) releases seem to have 
resolved it, though it is not clear which, so I'm a bit doubtful the 3.5 
stable series will see a fix any time soon.


I recommend you try the 4.0 series even though it is a beta. The bumping 
feature there is a bit better than in 3.5.


Amos



Re: [squid-users] Squid TPROXY issues with Google sites

2017-05-26 Thread Alex Rousskov
On 05/26/2017 09:44 AM, Vieri wrote:

> I know that in TLS traffic there are only IP addresses

This is a gross exaggeration. The reality is much more nuanced.


> I added mail.google.com to a custom file named "denied.domains" and loaded as 
> denied_domains ACL in Squid. 

> [...]
> acl denied_domains dstdomain "/usr/local/share/proxy-settings/denied.domains"
> http_access deny denied_domains !allowed_groups !allowed_ips
> http_access deny CONNECT denied_domains !allowed_groups !allowed_ips
> [...]
> reply_header_access Alternate-Protocol deny all
> acl AllowTroublesome ssl::server_name .google.com .gmail.com
> acl DenyTroublesome ssl::server_name mail.google.com
> http_access deny DenyTroublesome
> ssl_bump peek all
> ssl_bump splice AllowTroublesome
> ssl_bump bump all


> First of all, I was expecting that if a client tried to open
> https://mail.google.com, the connection would be blocked by Squid
> (DenyTroublesome ACL). It isn't. Why?

If a transaction is not blocked, then you have an http_access rule that
allows it. You need to figure out which rule does that. You can figure
that out by studying debugging logs, adding/logging annotate_transaction
ACLs, and/or altering http_access rules.
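
As a sketch of the annotate_transaction debugging idea mentioned above (available in Squid v4+; the key/value names here are made up for illustration): an annotate_transaction ACL always matches and records its key=value, so pairing it with a never-matching condition marks every transaction that reaches that rule without denying anything:

  acl reachedDeny annotate_transaction checkpoint=before_DenyTroublesome
  http_access deny reachedDeny !all    # annotates the transaction, never actually denies
  http_access deny DenyTroublesome
  # add %note to your logformat to see the annotation in access.log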


> Second, I am unable to scan content since Squid is splicing all
> Google traffic.

You told Squid to bump nothing because nothing can be bumped after
"ssl_bump peek all". You may want to study the following wiki page,
including definitions of actions such as "peek" and examples.

http://wiki.squid-cache.org/Features/SslPeekAndSplice

Alex.



[squid-users] TCP_DENIED/407 accessing webserver on same machine as squid

2017-05-26 Thread j m
I have a webserver and squid 3.5 running on the same Linux machine.  The 
webserver is actually part of shellinabox, so it's only for me to access.  
Shellinabox simply presents a terminal and login in a web browser.  I want it 
to be accessible only through squid for more security.
shellinabox works fine if I access it directly, but through squid I see this in 
access.log:
1495813953.860 79 204.155.22.30 TCP_TUNNEL/200 1440 CONNECT IP:PORT USER 
HIER_DIRECT/IP 
1495813962.001 0 204.155.22.30 TCP_DENIED/407 4397 CONNECT IP:PORT USER 
HIER_NONE/- text/html 

I've replaced the real IP, PORT, and USER with those words, however the real 
PORT is a nonstandard port number. There are some other posts I found 
mentioning a 407 error and it was said it occurs when the webpage is asking for 
authentication. However I don't understand this, since shellinabox only display 
a login prompt which I wouldn't think would be a problem. Another post said a 
407 is when squid auth is failing, but I can get to external websites through 
squid.

Does it matter that what I'm trying to access is HTTPS instead of HTTP?




Re: [squid-users] Squid TPROXY issues with Google sites

2017-05-26 Thread Benjamin E. Nichols

Here is a list of google domains that may help you,

http://www.squidblacklist.org/downloads/whitelists/google.domains


On 5/26/2017 10:44 AM, Vieri wrote:

Hi,

I'd like to block access to Google Mail but allow it to Google Drive. I also 
need to intercept Google Drive traffic (https) and scan its content via c-icap 
modules for threats (with clamav and other tools which would block potentially 
harmful files).

I've failed so far.

I added mail.google.com to a custom file named "denied.domains" and loaded as 
denied_domains ACL in Squid. I know that in TLS traffic there are only IP addresses, so I created 
the "server_name" ACL as seen below.

[...]
acl denied_domains dstdomain "/usr/local/share/proxy-settings/denied.domains"
http_access deny denied_domains !allowed_groups !allowed_ips
http_access deny CONNECT denied_domains !allowed_groups !allowed_ips
[...]
reply_header_access Alternate-Protocol deny all
acl AllowTroublesome ssl::server_name .google.com .gmail.com
acl DenyTroublesome ssl::server_name mail.google.com
http_access deny DenyTroublesome
ssl_bump peek all
ssl_bump splice AllowTroublesome
ssl_bump bump all

First of all, I was expecting that if a client tried to open 
https://mail.google.com, the connection would be blocked by Squid 
(DenyTroublesome ACL). It isn't. Why?

Second, I am unable to scan content since Squid is splicing all Google traffic. However, 
if I "bump AllowTroublesome", I can enter my username in 
https://accounts.google.com, but trying to access to the next step (user password) fails 
with an unreported error.

Any suggestions?

Vieri


--
--

Signed,

Benjamin E. Nichols
http://www.squidblacklist.org

1-405-397-1360 - Call Anytime.



Re: [squid-users] CentOS6 and squid34 package ...

2017-05-26 Thread Amos Jeffries

On 26/05/17 07:51, Mike wrote:
Walter, what I've found is that when compiling squid 3.5.x and higher, 
the compile options change. Also remember that many of the options 
that were available with 3.1.x are deprecated and likely will not 
work with 3.4.x and higher.


The other issue is that squid is only supposed to be handling HTTP and 
HTTPS traffic, not FTP. Trying to use it as an FTP proxy will need a 
different configuration than the standard HTTP/HTTPS proxy.




Well, to be correct, Squid talks HTTP to the client software. It has long 
supported mapping FTP server URLs into HTTP.


This second problem seems like the symptoms of 
 which was fixed years 
ago in the Squid-3.5.5 release. But that was apparently a regression not 
affecting 3.4 or 3.1. Hmm.



Amos




Mike


On 5/25/2017 14:07 PM, Walter H. wrote:

On 25.05.2017 12:50, Amos Jeffries wrote:

On 25/05/17 20:19, Walter H. wrote:

Hello

what is the essential difference between the default squid package 
and this squid34 package,


Run "squid -v" to find out if there are any build options different. 
Usually its just two alternative versions from the vendor.



Squid Cache: Version 3.4.14
configure options:  '--build=x86_64-redhat-linux-gnu' 
'--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' 
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' 
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' 
'--datadir=/usr/share' '--includedir=/usr/include' 
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' 
'--sharedstatedir=/var/lib' '--mandir=/usr/share/man' 
'--infodir=/usr/share/info' '--enable-internal-dns' 
'--disable-strict-error-checking' '--exec_prefix=/usr' 
'--libexecdir=/usr/lib64/squid' '--localstatedir=/var' 
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' 
'--with-logdir=$(localstatedir)/log/squid' 
'--with-pidfile=$(localstatedir)/run/squid.pid' 
'--disable-dependency-tracking' '--enable-arp-acl' 
'--enable-follow-x-forwarded-for' 
'--enable-auth-basic=LDAP,MSNT,NCSA,PAM,SMB,POP3,RADIUS,SASL,getpwnam,NIS,MSNT-multi-domain' 
'--enable-auth-ntlm=smb_lm,fake' 
'--enable-auth-digest=file,LDAP,eDirectory' 
'--enable-auth-negotiate=kerberos' 
'--enable-external-acl-helpers=file_userip,LDAP_group,session,unix_group,wbinfo_group' 
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost' 
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client' 
'--enable-ident-lookups' '--enable-linux-netfilter' 
'--enable-referer-log' '--enable-removal-policies=heap,lru' 
'--enable-snmp' '--enable-ssl' '--enable-storeio=aufs,diskd,ufs' 
'--enable-useragent-log' '--enable-wccpv2' '--enable-esi' 
'--enable-http-violations' '--with-aio' '--with-default-user=squid' 
'--with-filedescriptors=16384' '--with-dl' '--with-openssl' 
'--with-pthreads' '--disable-arch-native' 
'build_alias=x86_64-redhat-linux-gnu' 
'host_alias=x86_64-redhat-linux-gnu' 
'target_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m64 -mtune=generic -fpie' 'CXXFLAGS=-O2 -g 
-pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m64 -mtune=generic -fpie' 
'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'


and

Squid Cache: Version 3.1.23
configure options:  '--build=x86_64-redhat-linux-gnu' 
'--host=x86_64-redhat-linux-gnu' '--target=x86_64-redhat-linux-gnu' 
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' 
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' 
'--datadir=/usr/share' '--includedir=/usr/include' 
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' 
'--sharedstatedir=/var/lib' '--mandir=/usr/share/man' 
'--infodir=/usr/share/info' '--enable-internal-dns' 
'--disable-strict-error-checking' '--exec_prefix=/usr' 
'--libexecdir=/usr/lib64/squid' '--localstatedir=/var' 
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' 
'--with-logdir=$(localstatedir)/log/squid' 
'--with-pidfile=$(localstatedir)/run/squid.pid' 
'--disable-dependency-tracking' '--enable-arp-acl' 
'--enable-follow-x-forwarded-for' 
'--enable-auth=basic,digest,ntlm,negotiate' 
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL,DB,POP3,squid_radius_auth' 
'--enable-ntlm-auth-helpers=smb_lm,no_check,fakeauth' 
'--enable-digest-auth-helpers=password,ldap,eDirectory' 
'--enable-negotiate-auth-helpers=squid_kerb_auth' 
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group' 
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost' 
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client' 
'--enable-ident-lookups' '--enable-linux-netfilter' 
'--enable-referer-log' '--enable-removal-policies=heap,lru' 
'--enable-snmp' '--enable-ssl' '--enable-storeio=aufs,diskd,ufs' 
'--enable-useragent-log' '--enable-wccpv2' '--enable-esi' 
'--enable-http-violations' '--with-aio' '--with-default-user=squid' 

[squid-users] Squid TPROXY issues with Google sites

2017-05-26 Thread Vieri
Hi,

I'd like to block access to Google Mail but allow it to Google Drive. I also 
need to intercept Google Drive traffic (https) and scan its content via c-icap 
modules for threats (with clamav and other tools which would block potentially 
harmful files).

I've failed so far.

I added mail.google.com to a custom file named "denied.domains" and loaded as 
denied_domains ACL in Squid. I know that in TLS traffic there are only IP 
addresses, so I created the "server_name" ACL as seen below.

[...]
acl denied_domains dstdomain "/usr/local/share/proxy-settings/denied.domains"
http_access deny denied_domains !allowed_groups !allowed_ips
http_access deny CONNECT denied_domains !allowed_groups !allowed_ips
[...]
reply_header_access Alternate-Protocol deny all
acl AllowTroublesome ssl::server_name .google.com .gmail.com
acl DenyTroublesome ssl::server_name mail.google.com
http_access deny DenyTroublesome
ssl_bump peek all
ssl_bump splice AllowTroublesome
ssl_bump bump all

First of all, I was expecting that if a client tried to open 
https://mail.google.com, the connection would be blocked by Squid 
(DenyTroublesome ACL). It isn't. Why?

Second, I am unable to scan content since Squid is splicing all Google traffic. 
However, if I "bump AllowTroublesome", I can enter my username in 
https://accounts.google.com, but trying to access to the next step (user 
password) fails with an unreported error.

Any suggestions?

Vieri


[squid-users] Repeated assertions

2017-05-26 Thread Junior Cunha
Hi all,

   We are facing a strange problem with a squid 3.5.25 installation in one of 
our customers. Every minute an assertion like this "assertion failed: 
Read.cc:73: "fd_table[conn->fd].halfClosedReader != NULL" can be seen in the 
cache.log file. Below some information related to our current setup:

   - 2 physical servers running Squid 3.5.25 ( 1 instance per machine ) linked 
with OpenSSL 1.0.1e-57
   - haproxy to provide load balancing between the nodes + keepalived to 
provide vip
   - ~3000 users
   - diskd for cache
   - ssl bump enabled (config below)

http_port 58080 require-proxy-header dynamic_cert_mem_cache_size=1KB 
generate-host-certificates=on ssl-bump 
cert=/opt/hsc/webcontrol/squid/etc/ssl/myCA.pem sslflags=NO_DEFAULT_CA

   (...)

acl s1_tls_connect at_step SslBump1
sslproxy_cipher 
ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDSA-RSA-AES256-SHA:ECDSA-RSA-AES256:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:AES256-SHA:ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:DHE-RSA-CAMELLIA128-SHA:AES128-SHA:RC4-SHA:HIGH:!aNULL:!MD5:!ADH
ssl_bump peek s1_tls_connect
ssl_bump bump all

   We have no idea why this is happening, since we have another customer with 
the same setup where this does not happen.

   Could someone please help us to solve this problem? Our company is willing 
to pay for any kind of help (in that case, contact me directly via e-mail or 
Skype: "juniorcunha.rs").

   Best regards.

   []s

--
Junior Cunha
HSC Brasil
phone  55 (51) 3216-7007 | Porto Alegre
phone  55 (11) 3522-8191 | São Paulo
site:  www.hscbrasil.com.br



Re: [squid-users] Help troubleshooting proxy<-->client https

2017-05-26 Thread Alex Rousskov
On 05/26/2017 12:00 AM, Masha Lifshin wrote:
> I have added an https_port directive
> to squid.conf, but it must be misconfigured.

> http_port 172.30.0.67:443 ...
> https_port 172.30.0.67:443 ...

You are right -- your Squid is misconfigured. You cannot use the same
address for two ports. Unfortunately, Squid thinks that port binding
errors are a minor inconvenience and continues running after logging an
error message (that looks like many other benign error messages).

Changing one of the ports will solve the "same address" problem
described above.

Do not use port 443 for http_port. It makes triage extremely confusing
because port 443 usually implies SSL. Consider using port 3128 instead.
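(One possible split, an untested sketch reusing the certificate paths from the 
original config: the SSL-bumping plain port moves to 3128 and the TLS-speaking 
port keeps 443, so each port speaks the protocol its clients expect.)

```
# plain proxy port for explicit HTTP clients (sketch, untested)
http_port 172.30.0.67:3128 ssl-bump cert=/path/to/some.cert.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
# TLS-speaking proxy port for clients that use an https:// proxy URL
https_port 172.30.0.67:443 cert=/path/to/other.cert.pem
```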


HTH,

Alex.



Re: [squid-users] Youtube not TCP_HIT Squid3.5.21-25

2017-05-26 Thread Yuri

Welcome back from the cryochamber to the outside world! :-D

http://wiki.squid-cache.org/ConfigExamples/DynamicContent/YouTube/Discussion


On 26.05.2017 19:09, Eduardo Carneiro wrote:

I have the same issue, and not just with YouTube but with any dynamic content.
Caching that depends on URL rewriting doesn't work.







Re: [squid-users] Youtube not TCP_HIT Squid3.5.21-25

2017-05-26 Thread Eduardo Carneiro
I have the same issue, and not just with YouTube but with any dynamic content.
Caching that depends on URL rewriting doesn't work.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Youtube-not-TCP-HIT-Squid3-5-21-25-tp4682582p4682584.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Help troubleshooting proxy<-->client https

2017-05-26 Thread Masha Lifshin
Hello Dear Squid Users,

I am trying to configure my Squid 4.0.17 to use an https connection between
the client and the proxy.  I have added an https_port directive to
squid.conf, but it must be misconfigured. When I test with a dev version of
curl that supports https proxies, I am getting
ERR_PROTOCOL_UNKNOWN errors.  Below is the curl output, my squid.conf, and
access.log and cache.log snippets.

I appreciate any insights that you can offer.  Thank you very much,
-Masha



curl output

$ ~/bin/curl -v -x https://proxy.somwhere.com:443 https://github.com
* Rebuilt URL to: https://github.com/
*   Trying 54.210.69.61...
* TCP_NODELAY set
* Connected to proxy.somwhere.com (54.210.69.61) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection:
ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol
* Closing connection 0
curl: (35) error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown
protocol
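(The "unknown protocol" error means curl sent a TLS ClientHello but received 
plaintext back, here presumably from the http_port that is also bound to 443. 
A small illustrative check, my own sketch and not part of curl or Squid, of how 
the first bytes on the wire distinguish the two:)

```python
def looks_like_tls(first_bytes: bytes) -> bool:
    """True if the bytes start like a TLS record (illustrative heuristic).

    A TLS record begins with content type 0x16 (handshake) followed by
    the major protocol version byte 0x03; a plaintext HTTP response
    begins with ASCII such as b"HTTP/1.1".
    """
    return len(first_bytes) >= 2 and first_bytes[0] == 0x16 and first_bytes[1] == 0x03

# curl's ClientHello starts with a TLS record header...
print(looks_like_tls(b"\x16\x03\x01\x02\x00"))      # True
# ...but a proxy speaking plain HTTP answers with text, which the TLS
# layer rejects as "unknown protocol"
print(looks_like_tls(b"HTTP/1.1 400 Bad Request"))  # False
```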


squid.conf

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 81  # http
acl Safe_ports port 800 # http
acl Safe_ports port 8000 # http
acl Safe_ports port 8080 # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl SSL method CONNECT
acl CONNECT method CONNECT

# Only allow cachemgr access from localhost
http_access allow manager to_localhost
http_access deny manager

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

http_access deny to_localhost

# ICAP CONFIG
icp_access deny all
htcp_access deny all

http_port 172.30.0.67:443 ssl-bump cert=/path/to/some.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
tls-dh=/usr/local/squid/etc/dhparam.pem
https_port 172.30.0.67:443 cert=/path/to/other.cert.pem
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
tls_outgoing_options options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS

acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all

access_log stdio:/usr/local/squid/var/log/access.log custom
cache_store_log stdio:/usr/local/squid/var/log/store.log custom
log_mime_hdrs on

pid_filename /usr/local/squid/var/run/custom-squid.pid

# Leave coredumps in the first cache dir
coredump_dir /usr/local/squid/var/cache

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
</refresh_pattern>
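(For reference, each refresh_pattern line is regex, MIN in minutes, PERCENT, 
and MAX in minutes. A simplified sketch of the resulting freshness heuristic, 
my own illustration that ignores Cache-Control overrides and other Squid 
subtleties:)

```python
def is_fresh(age_s: float, lm_age_s: float, min_m: float, percent: float, max_m: float) -> bool:
    """Simplified refresh_pattern freshness check (illustrative only).

    age_s:    seconds since the object entered the cache
    lm_age_s: seconds between the object's Last-Modified and Date headers
    min_m, max_m: the squid.conf MIN/MAX values, given in minutes
    """
    if age_s <= min_m * 60:    # younger than MIN: always fresh
        return True
    if age_s > max_m * 60:     # older than MAX: always stale
        return False
    # otherwise apply the LM-factor: fresh while the cached age is no
    # more than PERCENT of the object's age at the time it was fetched
    return age_s <= lm_age_s * (percent / 100.0)

# With "refresh_pattern . 0 20% 4320": an object last modified 10 hours
# before it was fetched stays fresh for roughly 2 hours
print(is_fresh(3600, 36000, 0, 20, 4320))    # True  (1h <= 20% of 10h)
print(is_fresh(10800, 36000, 0, 20, 4320))   # False (3h > 2h)
```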

request_header_access Proxy-Authenticate deny all
request_header_access Proxy-Authentication-Info deny all
request_header_access Proxy-Authorization deny all
request_header_access Proxy-Connection deny all
request_header_access Proxy-support deny all
request_header_access custom-version deny all
request_header_access custom-watermark deny all
request_header_access custom-token deny all
request_header_access custom-parent-host deny all
request_header_access Via deny all
request_header_access X-Cache deny all
request_header_access X-Cache-Lookup deny all
request_header_access X-Forwarded-For deny all
reply_header_access X-XSS-Protection deny all
request_header_access Other allow all

cache_mgr cache_...@somewhere.com
mail_from sq...@somewhere.com
icap_enable on
icap_preview_enable on
icap_preview_size 1024
icap_default_options_ttl 60
icap_persistent_connections on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_header