Re: [squid-users] Squid 3.5.19 how to find banking server name for no bump

2016-06-28 Thread Amos Jeffries
On 29/06/2016 11:47 a.m., Stanford Prescott wrote:
> When I enter .wellsfargo.com in
> 
> *acl tls_s1_connect at_step SslBump1*
> *acl tls_s2_client_hello at_step SslBump2*
> *acl tls_s3_server_hello at_step SslBump3*
> 
> *acl tls_server_name_is_ip ssl::server_name_regex
> ^[0-9]+.[0-9]+.[0-9]+.[0-9]+n*
> *acl tls_allowed_hsts ssl::server_name .akamaihd.net *
> *acl tls_server_is_bank ssl::server_name .wellsfargo.com
> *
> *acl tls_to_splice any-of tls_allowed_hsts tls_server_is_bank*
> 
> *ssl_bump peek tls_s1_connect all*
> *ssl_bump splice tls_s2_client_hello tls_to_splice*
> *ssl_bump stare tls_s2_client_hello all*
> *ssl_bump bump tls_s3_server_hello all*
> 
> 
> it appears that the banking site is still getting bumped i.e.like in this
> access.log snippet
> 

Most of the log entries have either a) a raw IP and no SNI, or b) a
non-wellsfargo domain name [Google advertising].

All uses of CONNECT *.wellsfargo.com I have spotted in there also have a
"TCP_TUNNEL" tag - which means splice was done in accordance with your
above config.


For example, to follow one client:

Initial raw-TCP connection handling (TAG_NONE). No SNI available yet ...

> *1467156900.838   5423 10.40.40.100 TAG_NONE/200 0 CONNECT
> 159.45.170.145:443  - HIER_NONE/- -*

... begin step-1 processing ...

[ Matches: ssl_bump peek tls_s1_connect all ]

[ Note that the wellsfargo ACL is not even reached at this stage. ]
[ Even if it were, the string "159.45.170.145" != "*.wellsfargo.com" anyway. ]

... which says to get the clientHello and SNI (if any) ...


> *1467156900.838   5088 10.40.40.100 TCP_TUNNEL/200 4631 CONNECT
> www.wellsfargo.com:443  -
> ORIGINAL_DST/159.45.170.145  -*

... begin step 2 processing. SNI available ...

[ The string "www.wellsfargo.com" ~= "*.wellsfargo.com" ]
[ Matches: ssl_bump splice tls_s2_client_hello tls_to_splice ]

... connection spliced (TCP_TUNNEL).

> 
> If I disable sslbumping then the bank site does not get bumped, of course.
> 
> 1467157349.321230 10.40.40.100 TCP_MISS/301 243 GET
> http://wellsfargo.com/ - ORIGINAL_DST/159.45.66.143 -
> 

That is http://, not HTTPS. ssl_bump has no relevance for plain-text
traffic.
The same thing would be done for that request regardless of what your
ssl_bump settings are.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Subject: Bandwidth Ceiling

2016-06-28 Thread Amos Jeffries
On 29/06/2016 1:04 p.m., squid-cache wrote:
> My squid server has 1Gbps connectivity to the internet and it
> routinely gets 600 Mbps up/down to speedtest.net.
> 
> When a client computer on the same network has a direct connection to
> the internet it, too, gets 600 Mbps up/down.
> 
> However, when that client computer connects through the squid server,
> it can't seem to do any better than 120 Mbps down, 60 Mbps up.
> 
> I've tried things like disabling disk cache, increasing
> maximum_object_size*, etc. Nothing I change in the config seems to
> increase or decrease my clients' bandwidth.
> 
> Any tips for getting better bandwidth to clients in a proxy-only
> setup?
> 

Sadly, that is kind of expected at present for any single client
connection. We have some evidence that Squid is artificially lowering
packet sizes in a few annoying ways. That used to make sense on slower
networks, but not nowadays.

Nathan Hoad has been putting a lot of work into this recently to figure
out what can be done and has a performance fix in Squid-4. That is not
going to make it into 3.5 because it relies on some major restructuring
done only in Squid-4 code.


But, if you are okay with playing around in the code, his initial patch
submission shows the key value to change:

which should be the same in Squid-3. The 64KB bump in that patch leads
to some pain, so don't just apply it as-is. In the end we went with 16KB
to avoid huge per-connection memory requirements. It should really be
tuned to about 1/2 or 1/4 of the TCP buffer size on your system.
After bumping that up, the read_ahead_gap directive also needs to be
raised to at least whatever value you choose there.
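
If you want to experiment, a rough squid.conf sketch (the 64 KB figure is
only an illustration matching the patch mentioned above, not a tested
recommendation):

  # keep the read-ahead buffer at least as large as the enlarged I/O buffer
  read_ahead_gap 64 KB

The TCP buffer sizes to compare against can be checked on Linux with
something like "sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem".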

HTH
Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Config changes between 2.7 and 3.5

2016-06-28 Thread Amos Jeffries
On 29/06/2016 9:19 a.m., Bidwell, Christopher wrote:
> Hi all,
> 
> I'm trying to find what's used to replace these:
> 
> squid 2.7   squid 3.5
> --
> zero_buffers  ???

That was an experiment in 2.7. It ceased to be experimental a while back
and the directive to disable it was dropped.


> refresh_stale_hit   ???

This Squid-2 feature has been superseded by the HTTP/1.1
stale-while-revalidate mechanism at the protocol level, so a port to
Squid-3 is not planned.

But Squid-3 does not implement stale-while-revalidate yet.

There is currently some work underway at The Measurement Factory to make
the Squid-3 collapsed_forwarding feature operate properly with
revalidation, which will get us halfway there.
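
For reference, a sketch of enabling that existing feature (it is off by
default in 3.5):

  collapsed_forwarding on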

If you need this directive's behaviour and would like to assist with
development, testing, or sponsorship, please drop a message to the
squid-dev mailing list.


> ignore_ims_on_miss ???

The behaviour of this directive was a naive and simplistic duplicate of
the request_header_access directive, with very limited usefulness.

Consider removing it, but if you still need the behaviour use
"request_header_access If-Modified-Since deny all" instead.

request_header_access is also more powerful: it is ACL-driven and can
strip other If-* headers as well, covering other types of IMS/INM request.
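
For example, a minimal sketch of that ACL-driven form (the
"no_revalidate_sites" ACL name is made up for illustration):

  acl no_revalidate_sites dstdomain .example.com
  request_header_access If-Modified-Since deny no_revalidate_sites
  request_header_access If-None-Match deny no_revalidate_sites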

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Problems with ACL's using squid as intercept proxy

2016-06-28 Thread Amos Jeffries
On 29/06/2016 2:18 a.m., C. L. Martinez wrote:
> I have configured new PF rules in this new FreeBSD host:
> 
> rdr pass on $vpnif proto tcp from $int_network to any port http tag 
> intlans-to-inet -> lo0 port 5144
> 
>  .. And the result is:
> 
> 1467122773.928  0 127.0.0.1 TCP_MISS/403 4357 GET http://www.osnews.com/ 
> - HIER_NONE/- text/html
> 1467122773.928 35 172.22.55.1 TCP_MISS/403 4489 GET 
> http://www.osnews.com/ - ORIGINAL_DST/127.0.0.1 text/html
> 1467122774.068  0 172.22.55.1 TCP_MEM_HIT/200 13096 GET 
> http://fbsdprx.my.domain.com:3128/squid-internal-static/icons/SN.png - 
> HIER_NONE/- image/png
> 1467122774.102  0 127.0.0.1 TCP_MISS/403 4314 GET 
> http://www.osnews.com/favicon.ico - HIER_NONE/- text/html
> 1467122774.103  2 172.22.55.1 TCP_MISS/403 4446 GET 
> http://www.osnews.com/favicon.ico - ORIGINAL_DST/127.0.0.1 text/html
> 
>  .. What is the problem?? Are ACL's wrong?? Why?? At first stage, I was 
> thinking about a problem with the pf rules ... but, now, I am not sure 
> because packets arrives to squid ...
> 

The current releases of Squid need to be built with:
  ./configure --with-nat-devpf

for the old PF version on FreeBSD or NetBSD to work.
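
For example, a minimal build sketch (assuming --enable-pf-transparent is
also wanted for PF interception; add your usual --prefix and other options):

  ./configure --enable-pf-transparent --with-nat-devpf
  make && make install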


Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Subject: Bandwidth Ceiling

2016-06-28 Thread squid-cache
My squid server has 1Gbps connectivity to the internet and it routinely gets 
600 Mbps up/down to speedtest.net.

When a client computer on the same network has a direct connection to the 
internet it, too, gets 600 Mbps up/down.

However, when that client computer connects through the squid server, it can't 
seem to do any better than 120 Mbps down, 60 Mbps up. 

I've tried things like disabling disk cache, increasing maximum_object_size*, 
etc. Nothing I change in the config seems to increase or decrease my clients' 
bandwidth.

Any tips for getting better bandwidth to clients in a proxy-only setup?

Thanks,
Jamie

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Running squid on a machine with only one network interface.

2016-06-28 Thread Amos Jeffries
On 29/06/2016 1:49 a.m., Ataro wrote:
> Hi and thanks for your help.
> 
> as for your request, here's the content of my IPFW rules and my squid 
> configuration:
> 
> IPFW rules:
> 
> ipfw -f flush
> ipfw add 50 pass all from any to any via lo0
> ipfw add 100 pass all from any to any proto udp
> ipfw add 150 pass icmp from any to any
> ipfw add 200 fwd 127.0.0.1,3128 tag  tcp from me to any
> ipfw add 250 pass all from 10.0.2.15 to any tagged 
> 

You said earlier there was a VM running Squid.

Do not use localhost IP addresses for any of this. Use the globally
routable IP assigned to the VM.

Do not tag the traffic in IPFW. Squid has tcp_outgoing_tos and *_mark
directives to tag its outgoing traffic. The firewall then just uses those
tags for the exceptions.
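
For example, a rough sketch of the Squid side (the 0x20 TOS value is
arbitrary; pick one that is unused on your network):

  # mark traffic that Squid itself originates so the firewall can exempt it
  tcp_outgoing_tos 0x20 all

The IPFW exemption rule then matches on that TOS value instead of using
tag/tagged.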



> squid.conf:
> 
> acl my_machine src 10.0.2.15 # this is the ip of my machine.

So what?..

> http_access allow my_machine
> 

... Ah. Open proxy!

And since this is in 10.*/8 the localnet ACL will also allow the traffic
through, but after some basic safety checks.

Which means the above ACL and rule are not useful.


> acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
> acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
> acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
> acl localnet src fc00::/7 # RFC 4193 local private network range
> acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
> 

> 
> http_access deny !Safe_ports
> 
> http_access deny CONNECT !SSL_ports
> 
> 
> http_access allow localhost manager
> http_access deny manager
> 
> visible_hostname mynet.mydomain
> acl MYSITE dstdomain cnn.com


Matches http://cnn.com/* URLs.

I'm pointing that out to highlight that it won't match sub-domains like
www.cnn.com etc.

> acl MYSITE dstdomain 10.0.2.15

Matches http://10.0.2.15/* URLs.

> http_access allow MYSITE
> 

The MYSITE stuff is also not needed since traffic comes from a 10.*
machine. The "allow localnet" line right below will let that traffic
through, and much more.

> http_access allow localnet
> http_access allow localhost
> 
> http_access deny all
> 
> http_port 127.0.0.1:3128 intercept
> http_port 3129

Replace that with:
  http_port 3129 intercept
  http_port 3128

Why? 3128 is a well-known port for proxy traffic. It can be very
dangerous to use a known port for intercept. There are also changes
coming in future Squid releases that will prevent the registered ports
from being used for special modes like intercept.


Notice that after the above changes the only thing different from
the default squid.conf is your new "intercept" port line.

> 
> I'm almost surely that the problem is that as other people said here, the 
> firewall redirect the traffic originated from the squid server back to squid 
> and hence the forwarding loop.
> 
> I've tried to allow the traffic originated from the squid server by using the 
> "tag/tagged" feature in the IPFW rules but this doesn't work, apparently 
> because squid issue a new connection that is not tagged.

Yes.

> since squid and the firewall resides on the same machine I've no idea how to 
> tell the firewall to allow the traffic which squid initiate.

You spoke earlier about Squid being inside a VM. In the above sentence
do you mean "same machine" as in the hardware machine, or are both in
the VM? There is an important difference, and for this setup to work you
need to treat them as if they were separate hardware communicating via
TCP/IP over a LAN.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] large downloads got interrupted

2016-06-28 Thread Amos Jeffries
On 28/06/2016 8:46 p.m., Eugene M. Zheganin wrote:
> Hi,
> 
> recently I started to get the problem when large downloads via squid are
> often interrupted. I tried to investigate it, but, to be honest, got
> nowhere. However, I took two tcpdump captures, and it seems to me that
> for some reason squid sends FIN to it's client and correctly closes the
> connection (wget reports that connection is closed), and in the same
> time for some reason it sends like tonns of RSTs towards the server. No
> errors in logs are reported (at least on a  ALL,1 loglevel).
> 

It sounds like a timeout or similar has happened inside Squid. We'd need
to see your squid.conf to tell whether that was it.

What version are you using? There have been a few bugs found that can
cause unrelated connections to be closed early like this.

> Screenshots of wireshark interpreting the tcpdump capture are here:
> 

?? The URL seems not to have made it to the mailing list.


> Squid(2a00:7540:1::4) to target server(2a02:6b8::183):
> 
> http://static.enaza.ru/userupload/gyazo/e5b976bf6f3d0cb666f0d504de04.png
> (here you can see that all of a sudden squid starts sending RSTs, that
> come long way down the screen, then connection reestablishes (not on the
> screenshot taken))

A screen dump of a packet capture does not usually help. We usually only
ask for packet captures when one of the devs needs to personally analyse
the full traffic behaviour.

A cache.log trace at debug level 11,2 shows all the HTTP messages going
through in an easier-to-read format. There might be hints in there, but
if it is a timeout as I suspect, probably not.
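
For example, the squid.conf line for that (ALL,1 keeps everything else at
the normal level):

  debug_options ALL,1 11,2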

> 
> Squid(fd00::301) to client(fd00::73d):
> 
> http://static.enaza.ru/userupload/gyazo/ccf4982593dc6047edb5d734160e.png  
> (here
> you can see the client connection got closed)

So Squid is closing both connections from the middle. That is pointing
strongly at a timeout, bug, or error in the data transfer.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 3.5.19 how to find banking server name for no bump

2016-06-28 Thread Stanford Prescott
I forgot to mention, I am using squid 3.5.19

On Tue, Jun 28, 2016 at 6:47 PM, Stanford Prescott 
wrote:

> When I enter .wellsfargo.com in
>
> *acl tls_s1_connect at_step SslBump1*
> *acl tls_s2_client_hello at_step SslBump2*
> *acl tls_s3_server_hello at_step SslBump3*
>
> *acl tls_server_name_is_ip ssl::server_name_regex
> ^[0-9]+.[0-9]+.[0-9]+.[0-9]+n*
> *acl tls_allowed_hsts ssl::server_name .akamaihd.net *
> *acl tls_server_is_bank ssl::server_name .wellsfargo.com
> *
> *acl tls_to_splice any-of tls_allowed_hsts tls_server_is_bank*
>
> *ssl_bump peek tls_s1_connect all*
> *ssl_bump splice tls_s2_client_hello tls_to_splice*
> *ssl_bump stare tls_s2_client_hello all*
> *ssl_bump bump tls_s3_server_hello all*
>
>
> it appears that the banking site is still getting bumped i.e.like in this
> access.log snippet
>
> *1467156887.817257 10.40.40.100 TAG_NONE/200 0 CONNECT
> 54.149.224.177:443  -
> ORIGINAL_DST/54.149.224.177  -*
> *1467156888.008 94 10.40.40.100 TCP_MISS/200 213 POST
> https://tiles.services.mozilla.com/v2/links/view
>  -
> ORIGINAL_DST/54.149.224.177  application/json*
> *1467156893.774 75 10.40.40.100 TAG_NONE/200 0 CONNECT
> 172.230.102.185:443  -
> ORIGINAL_DST/172.230.102.185  -*
> *1467156893.847117 10.40.40.100 TAG_NONE/200 0 CONNECT
> 172.230.102.185:443  -
> ORIGINAL_DST/172.230.102.185  -*
> *1467156893.875120 10.40.40.100 TAG_NONE/200 0 CONNECT
> 172.230.221.75:443  -
> ORIGINAL_DST/172.230.221.75  -*
> *1467156893.875111 10.40.40.100 TAG_NONE/200 0 CONNECT
> 172.230.102.185:443  -
> ORIGINAL_DST/172.230.102.185  -*
> *1467156893.875117 10.40.40.100 TAG_NONE/200 0 CONNECT
> 172.230.221.75:443  -
> ORIGINAL_DST/172.230.221.75  -*
> *1467156893.875117 10.40.40.100 TAG_NONE/200 0 CONNECT
> 172.230.221.75:443  -
> ORIGINAL_DST/172.230.221.75  -*
> *1467156893.875112 10.40.40.100 TAG_NONE/200 0 CONNECT
> 172.230.102.185:443  -
> ORIGINAL_DST/172.230.102.185  -*
> *1467156893.875111 10.40.40.100 TAG_NONE/200 0 CONNECT
> 172.230.102.185:443  -
> ORIGINAL_DST/172.230.102.185  -*
> *1467156894.109307 10.40.40.100 TAG_NONE/200 0 CONNECT
> 172.230.102.185:443  -
> ORIGINAL_DST/172.230.102.185  -*
> *1467156894.109306 10.40.40.100 TAG_NONE/200 0 CONNECT
> 172.230.102.185:443  -
> ORIGINAL_DST/172.230.102.185  -*
> *1467156894.109307 10.40.40.100 TAG_NONE/200 0 CONNECT
> 172.230.102.185:443  -
> ORIGINAL_DST/172.230.102.185  -*
> *1467156894.109308 10.40.40.100 TAG_NONE/200 0 CONNECT
> 172.230.102.185:443  -
> ORIGINAL_DST/172.230.102.185  -*
> *1467156895.488 72 10.40.40.100 TAG_NONE/200 0 CONNECT
> 216.58.194.98:443  - ORIGINAL_DST/216.58.194.98
>  -*
> *1467156895.513 98 10.40.40.100 TAG_NONE/200 0 CONNECT
> 216.58.194.70:443  - ORIGINAL_DST/216.58.194.70
>  -*
> *1467156895.648 66 10.40.40.100 TCP_MISS/302 739 GET
> https://googleads.g.doubleclick.net/pagead/viewthroughconversion/974108101/?value=0=ON=0===
> 
> - ORIGINAL_DST/216.58.194.98  image/gif*
> *1467156895.664 82 10.40.40.100 TCP_MISS/200 649 GET
> https://ad.doubleclick.net/activity;src=2549153;type=allv40;cat=all_a00;u1=11201507281102291611922021;ord=6472043235332.808
> ?
> - ORIGINAL_DST/216.58.194.70  image/gif*
> *1467156895.920250 10.40.40.100 TAG_NONE/200 0 CONNECT
> 24.155.92.60:443  - ORIGINAL_DST/24.155.92.60
>  -*
> *1467156896.061 79 10.40.40.100 TCP_MISS/200 503 GET
> https://www.google.com/ads/user-lists/974108101/?script=0=2433874630
> 
> - ORIGINAL_DST/24.155.92.60  image/gif*
> *1467156899.837   5727 10.40.40.100 TAG_NONE/200 0 CONNECT
> 159.45.66.156:443  - HIER_NONE/- -*
> *1467156899.837   5587 10.40.40.100 TCP_TUNNEL/200 165 

Re: [squid-users] Squid 3.5.19 how to find banking server name for no bump

2016-06-28 Thread Stanford Prescott
When I enter .wellsfargo.com in

*acl tls_s1_connect at_step SslBump1*
*acl tls_s2_client_hello at_step SslBump2*
*acl tls_s3_server_hello at_step SslBump3*

*acl tls_server_name_is_ip ssl::server_name_regex
^[0-9]+.[0-9]+.[0-9]+.[0-9]+n*
*acl tls_allowed_hsts ssl::server_name .akamaihd.net *
*acl tls_server_is_bank ssl::server_name .wellsfargo.com
*
*acl tls_to_splice any-of tls_allowed_hsts tls_server_is_bank*

*ssl_bump peek tls_s1_connect all*
*ssl_bump splice tls_s2_client_hello tls_to_splice*
*ssl_bump stare tls_s2_client_hello all*
*ssl_bump bump tls_s3_server_hello all*


it appears that the banking site is still getting bumped i.e.like in this
access.log snippet

*1467156887.817257 10.40.40.100 TAG_NONE/200 0 CONNECT
54.149.224.177:443  -
ORIGINAL_DST/54.149.224.177  -*
*1467156888.008 94 10.40.40.100 TCP_MISS/200 213 POST
https://tiles.services.mozilla.com/v2/links/view
 -
ORIGINAL_DST/54.149.224.177  application/json*
*1467156893.774 75 10.40.40.100 TAG_NONE/200 0 CONNECT
172.230.102.185:443  -
ORIGINAL_DST/172.230.102.185  -*
*1467156893.847117 10.40.40.100 TAG_NONE/200 0 CONNECT
172.230.102.185:443  -
ORIGINAL_DST/172.230.102.185  -*
*1467156893.875120 10.40.40.100 TAG_NONE/200 0 CONNECT
172.230.221.75:443  -
ORIGINAL_DST/172.230.221.75  -*
*1467156893.875111 10.40.40.100 TAG_NONE/200 0 CONNECT
172.230.102.185:443  -
ORIGINAL_DST/172.230.102.185  -*
*1467156893.875117 10.40.40.100 TAG_NONE/200 0 CONNECT
172.230.221.75:443  -
ORIGINAL_DST/172.230.221.75  -*
*1467156893.875117 10.40.40.100 TAG_NONE/200 0 CONNECT
172.230.221.75:443  -
ORIGINAL_DST/172.230.221.75  -*
*1467156893.875112 10.40.40.100 TAG_NONE/200 0 CONNECT
172.230.102.185:443  -
ORIGINAL_DST/172.230.102.185  -*
*1467156893.875111 10.40.40.100 TAG_NONE/200 0 CONNECT
172.230.102.185:443  -
ORIGINAL_DST/172.230.102.185  -*
*1467156894.109307 10.40.40.100 TAG_NONE/200 0 CONNECT
172.230.102.185:443  -
ORIGINAL_DST/172.230.102.185  -*
*1467156894.109306 10.40.40.100 TAG_NONE/200 0 CONNECT
172.230.102.185:443  -
ORIGINAL_DST/172.230.102.185  -*
*1467156894.109307 10.40.40.100 TAG_NONE/200 0 CONNECT
172.230.102.185:443  -
ORIGINAL_DST/172.230.102.185  -*
*1467156894.109308 10.40.40.100 TAG_NONE/200 0 CONNECT
172.230.102.185:443  -
ORIGINAL_DST/172.230.102.185  -*
*1467156895.488 72 10.40.40.100 TAG_NONE/200 0 CONNECT
216.58.194.98:443  - ORIGINAL_DST/216.58.194.98
 -*
*1467156895.513 98 10.40.40.100 TAG_NONE/200 0 CONNECT
216.58.194.70:443  - ORIGINAL_DST/216.58.194.70
 -*
*1467156895.648 66 10.40.40.100 TCP_MISS/302 739 GET
https://googleads.g.doubleclick.net/pagead/viewthroughconversion/974108101/?value=0=ON=0===

- ORIGINAL_DST/216.58.194.98  image/gif*
*1467156895.664 82 10.40.40.100 TCP_MISS/200 649 GET
https://ad.doubleclick.net/activity;src=2549153;type=allv40;cat=all_a00;u1=11201507281102291611922021;ord=6472043235332.808
?
- ORIGINAL_DST/216.58.194.70  image/gif*
*1467156895.920250 10.40.40.100 TAG_NONE/200 0 CONNECT 24.155.92.60:443
 - ORIGINAL_DST/24.155.92.60 
-*
*1467156896.061 79 10.40.40.100 TCP_MISS/200 503 GET
https://www.google.com/ads/user-lists/974108101/?script=0=2433874630

- ORIGINAL_DST/24.155.92.60  image/gif*
*1467156899.837   5727 10.40.40.100 TAG_NONE/200 0 CONNECT
159.45.66.156:443  - HIER_NONE/- -*
*1467156899.837   5587 10.40.40.100 TCP_TUNNEL/200 165 CONNECT
connect.secure.wellsfargo.com:443
 - ORIGINAL_DST/159.45.66.156
 -*
*1467156899.837   5679 10.40.40.100 TAG_NONE/200 0 CONNECT
159.45.66.156:443  - HIER_NONE/- -*
*1467156899.837   5587 10.40.40.100 

Re: [squid-users] Websocket content adaptation

2016-06-28 Thread Ozgur Batur
On Tue, Jun 28, 2016 at 4:48 PM, Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 06/28/2016 06:43 AM, Ozgur Batur wrote:
> > On Mon, Jun 27, 2016 at 7:57 PM, Alex Rousskov wrote:
> > FWIW, several things are needed to move forward, including:
> >
> > 1. Adequate development time and skills (or sponsorship to pay for
> >them). The development of an essentially new adaptation vectoring
> >point is not a trivial project.
>
>
> > I have been involved in the development of several ICAP services around
> > Squid but have not had the chance to work on the Squid code base directly.
> > We may attempt to implement a proof of concept with a few friends, to
> > better specify the task at hand and to learn about the adaptation
> > infrastructure of Squid.
>
> I recommend starting with the proposal described in my item #2. A proof
> of concept is usually needed when there is doubt that the concept can
> work in principle. There is no such doubt here IMO.
>

That's great to hear that it is doable. But a solid proposal will require
some preliminary work, at least if it comes from my side :)


>
> > 2. A specific proposal on how to map raw/tunnel data to HTTP messages
> >that eCAP and ICAP interfaces expect. The biggest difficulty here
> >may be mapping server-speaks-first protocols.
>
>
> > I am not sure if it is possible to map websocket data to current
> > adaptation services.
>
> It is possible to map virtually anything to HTTP messages and, hence, to
> eCAP/ICAP. For example, Squid maps FTP transactions to HTTP messages! A
> better mapping would preserve more information, make it easier to access
> that information by services that understand the protocol being mapped,
> have lower overhead, etc. However, it is clearly "possible" to come up
> with some mapping.
>
>
This is the http://wiki.squid-cache.org/Features/FtpRelay feature, right?
Previously Squid only supported FTP over HTTP; this is a great improvement!


>
> > Actually it may or may not be related but I am
> > curious how Squid handles Comet(Ajax/HTTP Server Push) during ICAP
> > processing.
>
> Squid proxies regular HTTP/1 and FTP. That excludes pure server push
> until we add HTTP/2 support.
>
>
> > About server first protocols, current ICAP services expecting
> > encapsulated valid HTTP responses for requests will break of course.
>
> I do not think so. A proper mapping could present that spontaneous
> from-server message as an HTTP response to a trivial fake request
> (enough to identify the protocol and the server address).
>
> Needless to say, the WebSockets:HTTP mapping (and overall Squid
> functionality) can be improved if Squid understands WebSockets (as Amos
> has noted in his response), but I do not think such understanding is
> _required_ to accommodate many useful adaptation use cases.
>
>
> > Maybe a mechanism like Allow 204 negotiation can be implemented between
> > adaptation service and proxy. If adaptation service does not support
> > server first pushes it can be bypassed.
>
> It is always possible to extend ICAP and eCAP, but, with all other
> factors being equal, extending should be a last-resort solution because
> most adaptation services will not support the extension and many of
> those services could work fine without that extension.
>
>
> HTH,
>
> Alex.
>
>
Thank you very much Alex, Amos. I will discuss this issue with other
interested people, maybe I can find some resources. Also I need to do some
homework.

Best Regards,

Ozgur
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Websocket content adaptation

2016-06-28 Thread Ozgur Batur
Thank you very much for explanation Amos.

On Tue, Jun 28, 2016 at 4:52 PM, Amos Jeffries  wrote:

> On 29/06/2016 12:43 a.m., Ozgur Batur wrote:
> > On Mon, Jun 27, 2016 at 7:57 PM, Alex Rousskov wrote:
> >
> >> 2. A specific proposal on how to map raw/tunnel data to HTTP messages
> >>that eCAP and ICAP interfaces expect. The biggest difficulty here
> >>may be mapping server-speaks-first protocols.
> >>
> >
> > I am not sure if it is possible to map websocket data to current
> adaptation
> > services. Actually it may or may not be related but I am curious how
> Squid
> > handles Comet(Ajax/HTTP Server Push) during ICAP processing.
>
> Last time I looked at those they were just using regular HTTP
> long-pollinng techniques. Though some may have moved to WebSockets or
> HTTP/2 now.
>
> Squid does not have to do anything for those. The clients and server
> involved do the mapping of their data into various HTTP requests and
> replies. So as far as Squid is concerned they are just regular long
> duration GET or POST requests. Which since they are HTTP messages can be
> passed to the ICAP service in the normal ways.
>
> Amos
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 3.5.19 how to find banking server name for no bump

2016-06-28 Thread Amos Jeffries
On 29/06/2016 2:02 a.m., Stanford Prescott wrote:
> I have the proper peek and splice and bump configuration of acls setup in
> my squid.conf file for no-bump of some web sites. I need help how to enter
> the banking hosts and or server names in a way that the peek and splice
> configuration will determine it is a banking site that I don't want bumped.
> 
> For example, if a user enters www.wellsfargo.com for online banking my
> current config still bumps wellsfargo.com. What would I need to enter for
> wellsfargo.com so that banking server will not be bumped?
> 

Depends on what you mean by "enter".

Are you asking for the ACL value?
  .wellfargo.com

Are you asking for the ACL definition?
 acl banks ssl::server_name .wellsfargo.com

Or are you asking for a whole SSL-Bump configuration example?
  has a few.
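
For instance, a minimal peek-and-splice sketch along those lines (ACL names
are illustrative only):

  acl step1 at_step SslBump1
  acl banks ssl::server_name .wellsfargo.com

  ssl_bump peek step1
  ssl_bump splice banks
  ssl_bump bump all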

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Conditional IPv6 usage

2016-06-28 Thread Amos Jeffries
On 28/06/2016 11:32 p.m., Stefan Hölzle wrote:
> Hello,
> 
> I inserted an iptables rule which rejects outgoing tcp packets from the
> default IPv4 address to the ip of somedomain.asdf.
> This causes Squid to fall back to IPv6.
> 
> I'd like to change Squid's behavior in this case to immediately fall
> back to IPv6 instead of falling back to the default IPv4 address first.

Falling back to something "first" does not make any sense. The first has
to be tried and fail before a fallback is needed.


> Can this behavior easily be changed in the source code ?

There is an old patch in the feature request that might help you:


YMMV on whether it applies cleanly. It has been a long while since it
was created.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Problems with ACL's using squid as intercept proxy

2016-06-28 Thread C. L. Martinez
Hi all,

 I am trying to configure a second squid proxy as an intercept proxy but this 
time under FreeBSD instead of OpenBSD. Doing my first tests I have a problem 
with acl's that I don't understand. To isolate the problem, I have started with 
a simple squid.conf file:

#
# Recommended minimum configuration:
#

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) 
machines

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 3128
http_port 127.0.0.1:5144 intercept

# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /var/squid/cache 100 16 256

# Leave coredumps in the first cache dir
coredump_dir /var/squid/cache

#
# Add any of your own refresh_pattern entries above these.
#
refresh_pattern ^ftp:   1440    20% 10080
refresh_pattern ^gopher:    1440    0%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320

 As you can see, I have configured only one port for intercept http requests 
... I have configured new PF rules in this new FreeBSD host:

rdr pass on $vpnif proto tcp from $int_network to any port http tag 
intlans-to-inet -> lo0 port 5144

 .. And the result is:

1467122773.928  0 127.0.0.1 TCP_MISS/403 4357 GET http://www.osnews.com/ - 
HIER_NONE/- text/html
1467122773.928 35 172.22.55.1 TCP_MISS/403 4489 GET http://www.osnews.com/ 
- ORIGINAL_DST/127.0.0.1 text/html
1467122774.068  0 172.22.55.1 TCP_MEM_HIT/200 13096 GET 
http://fbsdprx.my.domain.com:3128/squid-internal-static/icons/SN.png - 
HIER_NONE/- image/png
1467122774.102  0 127.0.0.1 TCP_MISS/403 4314 GET 
http://www.osnews.com/favicon.ico - HIER_NONE/- text/html
1467122774.103  2 172.22.55.1 TCP_MISS/403 4446 GET 
http://www.osnews.com/favicon.ico - ORIGINAL_DST/127.0.0.1 text/html

 .. What is the problem?? Are ACL's wrong?? Why?? At first stage, I was 
thinking about a problem with the pf rules ... but, now, I am not sure because 
packets arrives to squid ...

 Any idea??

Thanks.

-- 
Greetings,
C. L. Martinez
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Websocket content adaptation

2016-06-28 Thread Amos Jeffries
On 29/06/2016 12:43 a.m., Ozgur Batur wrote:
> On Mon, Jun 27, 2016 at 7:57 PM, Alex Rousskov wrote:
> 
>> 2. A specific proposal on how to map raw/tunnel data to HTTP messages
>>that eCAP and ICAP interfaces expect. The biggest difficulty here
>>may be mapping server-speaks-first protocols.
>>
> 
> I am not sure if it is possible to map websocket data to current adaptation
> services. Actually it may or may not be related but I am curious how Squid
> handles Comet(Ajax/HTTP Server Push) during ICAP processing.

Last time I looked at those they were just using regular HTTP
long-pollinng techniques. Though some may have moved to WebSockets or
HTTP/2 now.

Squid does not have to do anything for those. The clients and server
involved do the mapping of their data into various HTTP requests and
replies. So as far as Squid is concerned they are just regular long
duration GET or POST requests. Which since they are HTTP messages can be
passed to the ICAP service in the normal ways.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Running squid on a machine with only one network interface.

2016-06-28 Thread Ataro
Hi and thanks for your help.

as for your request, here's the content of my IPFW rules and my squid 
configuration:

IPFW rules:

ipfw -f flush
ipfw add 50 pass all from any to any via lo0
ipfw add 100 pass all from any to any proto udp
ipfw add 150 pass icmp from any to any
ipfw add 200 fwd 127.0.0.1,3128 tag  tcp from me to any
ipfw add 250 pass all from 10.0.2.15 to any tagged 

squid.conf:

acl my_machine src 10.0.2.15 # this is the ip of my machine.
http_access allow my_machine

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7 # RFC 4193 local private network range
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access deny !Safe_ports

http_access deny CONNECT !SSL_ports


http_access allow localhost manager
http_access deny manager

visible_hostname mynet.mydomain
acl MYSITE dstdomain cnn.com
acl MYSITE dstdomain 10.0.2.15
http_access allow MYSITE

http_access allow localnet
http_access allow localhost

http_access deny all

http_port 127.0.0.1:3128 intercept
http_port 3129

coredump_dir /var/squid/cache

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320

I'm almost surely that the problem is that as other people said here, the 
firewall redirect the traffic originated from the squid server back to squid 
and hence the forwarding loop.

I've tried to allow the traffic originated from the squid server by using the 
"tag/tagged" feature in the IPFW rules but this doesn't work, apparently 
because squid issue a new connection that is not tagged.
since squid and the firewall resides on the same machine I've no idea how to 
tell the firewall to allow the traffic which squid initiate.

Regards,

Ataro.


 Original Message 
Subject: Re: [squid-users] Running squid on a machine with only one network 
interface.
Local Time: June 27, 2016 11:56 PM
UTC Time: June 27, 2016 8:56 PM
From: antony.st...@squid.open.source.it
To: at...@protonmail.ch

On Monday 27 June 2016 at 22:45:19, Ataro wrote:

> Hi there,
>
> I've set up a FreeBSD machine inside a VirtualBox machine and used IPFW to
> forward all the requests to the internet through a squid server running on
> the same machine in port 3128 in intercept mode.

Please show us your IPFW rules.

> The problem is that I get 403 http responses on every site I try to access
> to, even on the sites that I've explicitly allowed in the squid.conf file.

Maybe show us your squid.conf as well (without comments or blank lines).

> I also get a warning message on the tty that squid is running on (I've run
> squid in no daemon mode) which says: Warning: Forwarding loop detected
> for:.

So, NAT is not working correctly...

> I guess that this error occurs since the squid server and the IPFW firewall
> are running on the same machine which have only one network interface.
>
> Am I right?

Not in the sense that "you can't do this with only one interface", no.

However, quite possibly in the sense that you haven't told IPFW how to
distinguish between requests in from your clients, and requests out from your
squid instance.

The former need to go to squid, the latter need to go to the Internet.


Give us a bit more information and we might be able to give you a bit more
help.



Antony.

--
I don't know, maybe if we all waited then cosmic rays would write all our
software for us. Of course it might take a while.

- Ron Minnich, Los Alamos National Laboratory

Please reply to the list;
please *don't* CC me.___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Websocket content adaptation

2016-06-28 Thread Alex Rousskov
On 06/28/2016 06:43 AM, Ozgur Batur wrote:
> On Mon, Jun 27, 2016 at 7:57 PM, Alex Rousskov wrote:
> FWIW, several things are needed to move forward, including:
> 
> 1. Adequate development time and skills (or sponsorship to pay for
>them). The development of an essentially new adaptation vectoring
>point is not a trivial project.


> I have been involved in the development of several ICAP services around
> Squid but have not had the chance to work on the Squid code base directly.
> We may attempt to implement a proof of concept with a few friends, to
> better specify the task at hand and to learn about the adaptation
> infrastructure of Squid.

I recommend starting with the proposal described in my item #2. A proof
of concept is usually needed when there is doubt that the concept can
work in principle. There is no such doubt here IMO.


> 2. A specific proposal on how to map raw/tunnel data to HTTP messages
>that eCAP and ICAP interfaces expect. The biggest difficulty here
>may be mapping server-speaks-first protocols.


> I am not sure if it is possible to map websocket data to current
> adaptation services.

It is possible to map virtually anything to HTTP messages and, hence, to
eCAP/ICAP. For example, Squid maps FTP transactions to HTTP messages! A
better mapping would preserve more information, make it easier to access
that information by services that understand the protocol being mapped,
have lower overhead, etc. However, it is clearly "possible" to come up
with some mapping.


> Actually it may or may not be related but I am
> curious how Squid handles Comet(Ajax/HTTP Server Push) during ICAP
> processing. 

Squid proxies regular HTTP/1 and FTP. That excludes pure server push
until we add HTTP/2 support.


> About server first protocols, current ICAP services expecting
> encapsulated valid HTTP responses for requests will break of course.

I do not think so. A proper mapping could present that spontaneous
from-server message as an HTTP response to a trivial fake request
(enough to identify the protocol and the server address).

Needless to say, the WebSockets:HTTP mapping (and overall Squid
functionality) can be improved if Squid understands WebSockets (as Amos
has noted in his response), but I do not think such understanding is
_required_ to accommodate many useful adaptation use cases.


> Maybe a mechanism like Allow 204 negotiation can be implemented between
> adaptation service and proxy. If adaptation service does not support
> server first pushes it can be bypassed. 

It is always possible to extend ICAP and eCAP, but, with all other
factors being equal, extending should be a last-resort solution because
most adaptation services will not support the extension and many of
those services could work fine without that extension.


HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Strange NTLM problem.

2016-06-28 Thread Amos Jeffries
On 28/06/2016 6:14 p.m., drcimino drcimino wrote:
> Dear all,
> 
> i have a strange problem with my squid 3.5.19 and authentication NTLM.
> 
> On my configuration i have 2 auth method:
> 
> NTLM negotiated with ntlm_auth from samba 3
> 
> auth_param ntlm program /usr/local/samba/bin/ntlm_auth
> --helper-protocol=squid-2.5-ntlmssp
> auth_param ntlm children 200 startup=100 idle=10 concurrency=0
> auth_param ntlm keep_alive on
> 
> and as a fallback basic ntlm
> 

Just to be clear. There is no such thing as "basic ntlm".

What you have configured is Basic authentication (user:password in the
clear over the network).
It just happens that the Samba helper is called "ntlm_auth". That name
does not make it the NTLM protocol in any way.

> 
> auth_param basic program /usr/local/samba/bin/ntlm_auth
> --helper-protocol=squid-2.5-basic
> auth_param basic children 25 startup=15 idle=5 concurrency=0
> auth_param basic realm PROXY AUTHORIZATION REQUIRED
> auth_param basic credentialsttl 30 minutes
> 
> TTL
> 
> authenticate_cache_garbage_interval 1 hours
> authenticate_ttl 30 minutes
> authenticate_ip_ttl 30 minutes
> 
> Groups identification with LDAPS
> 
> external_acl_type NAV children-max=200 children-startup=100 children-idle=10
> ttl=1800 %LOGIN
> /usr/local/squid/libexec/ext_ldap_group_acl -s sub -b "dc=domain,dc=xxx" -D
> "cn=squid,cn=Users,dc
> =domain,dc=xxx" -w "password" -f
> "(&(objectclass=person)(sAMAccountName=%v)(membero
> f=cn=%a,ou=INTERNET,ou=AAA,dc=domain,dc=xxx))" -S -K -H
> ldaps://domain.xxx:3269
> 
> ... and all work very well.
> 
> 
> Sometimes and randomly, my users reported to me that squid cannot do ntlm
> transparent authentication and request for user/password pair (falling back
> to ntlm basic).

No. It is falling back *past* the Basic authentication to user input.


> Entering right credential does not work and to proceed further users
> need to click on "abort" button many times.
> 

One popup for each connection which the browser has opened and not been
able to authenticate.

NTLM is both slow and limited in the number of connections it can make to
AD simultaneously for authentication. Multiply all those popups for one
user by the number of users currently authenticating across the whole time
period that the NTLM handshakes take up, and it's easy to get very large
numbers of concurrent authentication actions.
When the system gets bogged down, users start to feel the impact either
as longer latency or as outright ejected logins.


> 
> On my cache.log i see:
> 
> Login for user [DOMAIN]\[userx]@[PC_XXX] failed due to [Access denied]
> 
> NTLMSSP BH: NT_STATUS_ACCESS_DENIED
> 
> 2016/06/27 22:59:06 kid1| ERROR: NTLM Authentication validating user.
> Result: {result=BH, notes={mes
> 
> sage: NT_STATUS_ACCESS_DENIED; }}

That means the credentials were not correct, like you said. It can be
tricky, since the browser does not give much in the way of hints about
which auth protocol the popup details will be used for. You could need to
enter the Basic user + password, or Basic DOMAIN\user + password, or NTLM
user + password, or NTLM DOMAIN\user + password, or NTLM MACHINE\user + password.
 About the only thing you can use to provide a hint about which one is
needed is the "realm" string Squid provides for each auth type.

> 
> every times a user receive credential request.
> 
> 
> After aborting each requests squid do, users can surf the internet without
> problems and i cannot replicate the issue.
> 
> 
> Trying to close the browser, clear cache, and going to the same site does
> not produce same error.
> 
> 
> Stopping squid, remove cache, starting squid does not produce same error.
> 
> 
> It's totally random and i'm going mad to understand why.
> 
> 
> Can someone help me to debug and understand the problem?
> 

You will likely need to enable debugging on the helper to see what it
has to say about the rejection.


Bruno already mentioned Kerberos. I second that. Kerberos can be a bit
of a learning curve, but is worth it for the extra speed and security
gained.

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Strange NTLM problem.

2016-06-28 Thread Amos Jeffries
On 29/06/2016 12:45 a.m., Bruno de Paula Larini wrote:
> Em 28/06/2016 03:14, drcimino drcimino escreveu:
>> Dear all,
>> i have a strange problem with my squid 3.5.19 and authentication NTLM.
>> On my configuration i have 2 auth method:
>> NTLM negotiated with ntlm_auth from samba 3
>> auth_param ntlm program /usr/local/samba/bin/ntlm_auth
>> --helper-protocol=squid-2.5-ntlmssp
>> auth_param ntlm children 200 startup=100 idle=10 concurrency=0
>> auth_param ntlm keep_alive on
>>
>> and as a fallback basic ntlm
>> auth_param basic program /usr/local/samba/bin/ntlm_auth
>> --helper-protocol=squid-2.5-basic
>> auth_param basic children 25 startup=15 idle=5 concurrency=0
>> auth_param basic realm PROXY AUTHORIZATION REQUIRED
>> auth_param basic credentialsttl 30 minutes
>> TTL
>>
>> authenticate_cache_garbage_interval 1 hours
>> authenticate_ttl 30 minutes
>> authenticate_ip_ttl 30 minutes
>>
>> Groups identification with LDAPS
>> external_acl_type NAV children-max=200 children-startup=100
>> children-idle=10 ttl=1800 %LOGIN
>> /usr/local/squid/libexec/ext_ldap_group_acl -s sub -b
>> "dc=domain,dc=xxx" -D "cn=squid,cn=Users,dc
>> =domain,dc=xxx" -w "password" -f
>> "(&(objectclass=person)(sAMAccountName=%v)(membero
>> f=cn=%a,ou=INTERNET,ou=AAA,dc=domain,dc=xxx))" -S -K -H
>> ldaps://domain.xxx:3269
> 
> I've been using the helper "ext_wbinfo_group_acl" to work with AD groups
> and transparent authentication for domain members. The config below also
> makes the auth pop-up to show when the machine isn't member of the
> domain - no need to use the fallback part. You just have to configure
> Kerberos, Samba, join the Squid machine to the domain with "net ads
> join" and enable winbind.

You are not using Negotiate/Kerberos, so I'm not sure why that is related.

Winbind is needed for that particular wbinfo helper. Note that winbind
has much more severe issues, with the number of concurrent connections to
AD being limited to only 256.


> 
> auth_param ntlm program /usr/bin/ntlm_auth
> --helper-protocol=squid-2.5-ntlmssp --domain=MYDOMAIN
> --enable-external-acl-helpers="ext_wbinfo_group_acl"
> auth_param ntlm children 10 startup=0 idle=2
> 
> external_acl_type NTGroup children-startup=10 children-idle=2
> children-max=50 %LOGIN /usr/lib64/squid/ext_wbinfo_group_acl
> 
> acl authenticated proxy_auth REQUIRED
> 
> acl ad_group external NTGroup MYDOMAIN\AD_Group
> acl denied_websites dstdom_regex -i "/etc/squid/denied-websites.txt"
> http_access deny ad_group denied_websites
> 
> In my set of acls, the pop-up was also appearing in specific sites.
> Changing the order of acls made it stop appearing for me.
> This:
> 
> http_access allow website_list user_list
> 
> seems to work differently from this:
> 
> http_access allow user_list website_list
> 

Not seem to. It does. Intentionally.


Having something like website_list, which I guess is a dstdomain or such
ACL at the end of the line prevents auth or group test mis-matches from
re-authenticating to get credentials that might pass the ACL test.

Preventing these ACLs from triggering authentication activity is probably
a large part of what actually got fixed in your situation. NTLM-related
auth takes a relatively long time, so reducing the number of auth and
re-auth tests needed to check a user's access permissions can be a big win.
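
For reference, a sketch annotating the two orderings quoted above (ACLs on
an http_access line are tested left to right, stopping at the first
non-match):

  # fast dstdomain check runs first; the slow auth/group lookup is only
  # consulted when the site matches
  http_access allow website_list user_list

  # auth/group lookup runs first, for every request that reaches this line
  http_access allow user_list website_list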

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid's cache management

2016-06-28 Thread Eduardo Carneiro
Thank you very much Antony! You answer was very helpful.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-s-cache-management-tp4678255p4678259.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Strange NTLM problem.

2016-06-28 Thread Bruno de Paula Larini

Em 28/06/2016 03:14, drcimino drcimino escreveu:

Dear all,
i have a strange problem with my squid 3.5.19 and authentication NTLM.
On my configuration i have 2 auth method:
NTLM negotiated with ntlm_auth from samba 3
auth_param ntlm program /usr/local/samba/bin/ntlm_auth 
--helper-protocol=squid-2.5-ntlmssp

auth_param ntlm children 200 startup=100 idle=10 concurrency=0
auth_param ntlm keep_alive on

and as a fallback basic ntlm
auth_param basic program /usr/local/samba/bin/ntlm_auth 
--helper-protocol=squid-2.5-basic

auth_param basic children 25 startup=15 idle=5 concurrency=0
auth_param basic realm PROXY AUTHORIZATION REQUIRED
auth_param basic credentialsttl 30 minutes
TTL

authenticate_cache_garbage_interval 1 hours
authenticate_ttl 30 minutes
authenticate_ip_ttl 30 minutes

Groups identification with LDAPS
external_acl_type NAV children-max=200 children-startup=100 
children-idle=10 ttl=1800 %LOGIN
/usr/local/squid/libexec/ext_ldap_group_acl -s sub -b 
"dc=domain,dc=xxx" -D "cn=squid,cn=Users,dc
=domain,dc=xxx" -w "password" -f 
"(&(objectclass=person)(sAMAccountName=%v)(membero
f=cn=%a,ou=INTERNET,ou=AAA,dc=domain,dc=xxx))" -S -K -H 
ldaps://domain.xxx:3269


I've been using the helper "ext_wbinfo_group_acl" to work with AD groups 
and transparent authentication for domain members. The config below also 
makes the auth pop-up to show when the machine isn't member of the 
domain - no need to use the fallback part. You just have to configure 
Kerberos, Samba, join the Squid machine to the domain with "net ads 
join" and enable winbind.



auth_param ntlm program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-ntlmssp --domain=MYDOMAIN 
--enable-external-acl-helpers="ext_wbinfo_group_acl"

auth_param ntlm children 10 startup=0 idle=2

external_acl_type NTGroup children-startup=10 children-idle=2 
children-max=50 %LOGIN /usr/lib64/squid/ext_wbinfo_group_acl


acl authenticated proxy_auth REQUIRED

acl ad_group external NTGroup MYDOMAIN\AD_Group
acl denied_websites dstdom_regex -i "/etc/squid/denied-websites.txt"
http_access deny ad_group denied_websites

In my set of acls, the pop-up was also appearing in specific sites. 
Changing the order of acls made it stop appearing for me.

This:

http_access allow website_list user_list

seems to work differently from this:

http_access allow user_list website_list


Bruno
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Websocket content adaptation

2016-06-28 Thread Ozgur Batur
On Mon, Jun 27, 2016 at 7:57 PM, Alex Rousskov <
rouss...@measurement-factory.com> wrote:

> On 06/27/2016 10:23 AM, Ozgur Batur wrote:
>
> > ICAP handles plain HTTP very well but it is not possible to
> > filter/change or even log content of websocket communication after
> > websocket upgrade over HTTP as far as I know. Is there any plan or
> > interest in developing some capability for Squid to control websocket
> > communication content?
>
> There is interest but no specific plan or sponsor.
>
>
> > There is no defined request/response protocol since websocket is
> > basically a socket but regexp matching in incoming and outgoing
> > content(json, xml,raw) with URL and client metadata info may have some
> > application like data leak prevention or achieving in corporate
> environment.
>
> I am not sure regex would be a good idea in general, but passing
> tunneled traffic to eCAP/ICAP services is indeed useful in several
> environments, including WebSocket tunnels. The adaptation service will
> decide whether to use regex or something else to match raw data. Some
> existing services simply log (or relay/replay via TCP) received traffic
> without analyzing it so regex is just one of many possibilities here.
>
> FWIW, several things are needed to move forward, including:
>
> 1. Adequate development time and skills (or sponsorship to pay for
>them). The development of an essentially new adaptation vectoring
>point is not a trivial project.
>
>
I have been involved in the development of several ICAP services around
Squid but have not had the chance to work on the Squid code base directly.
We may attempt to implement a proof of concept with a few friends, to better
specify the task at hand and to learn about the adaptation infrastructure of Squid.


> 2. A specific proposal on how to map raw/tunnel data to HTTP messages
>that eCAP and ICAP interfaces expect. The biggest difficulty here
>may be mapping server-speaks-first protocols.
>

I am not sure if it is possible to map websocket data to current adaptation
services. Actually it may or may not be related but I am curious how Squid
handles Comet(Ajax/HTTP Server Push) during ICAP processing. Maybe server
data push can be mapped like Comet responses. About server first protocols,
current ICAP services expecting encapsulated valid HTTP responses for
requests will break of course. Maybe a mechanism like Allow 204 negotiation
can be implemented between adaptation service and proxy. If adaptation
service does not support server first pushes it can be bypassed.

>
> 3. A project lead to organize/manage the project and guide the results
>through the Squid Project review. This person could be the
>primary developer and/or the specs writer, but does not have to be.
>
> Alex.
>

Thanks,

Ozgur
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid's cache management

2016-06-28 Thread Antony Stone
On Tuesday 28 June 2016 at 13:58:14, Eduardo Carneiro wrote:

> I'm using squid 3.5.19 with dynamic cache content with url rewrite. My
> cache directory is 90% full. I noticed that it doesn't exceed the value
> set in cache_dir. This is a good thing.
> 
> My doubt is: How squid manages that? What is the criterion to delete these
> files?

http://www.squid-cache.org/Doc/config/cache_replacement_policy/
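
For example, a sketch of the directives involved (heap LFUDA is just one of
the available policies; the path and size are placeholders):

  cache_replacement_policy heap LFUDA
  cache_dir ufs /var/squid/cache 10000 16 256

Eviction starts as usage approaches the cache_swap_low/cache_swap_high
watermarks rather than waiting for the directory to fill completely.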


Antony.

-- 
I want to build a machine that will be proud of me.

 - Danny Hillis, creator of The Connection Machine

   Please reply to the list;
 please *don't* CC me.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Squid's cache management

2016-06-28 Thread Eduardo Carneiro
Hello everyone. 

First of all, sorry my english. It's not very good.

I'm using squid 3.5.19 with dynamic cache content with url rewrite. My cache
directory is 90% full. I noticed that it doesn't exceed the value set in
cache_dir. This is a good thing.

My doubt is: How squid manages that? What is the criterion to delete these
files?

Sorry if the answer is here in another topic. I searched but I did not find.

Best regards,
Eduardo Carneiro



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-s-cache-management-tp4678255.html
Sent from the Squid - Users mailing list archive at Nabble.com.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] flickr.com redirect error

2016-06-28 Thread Eliezer Croitoru
Hey,

 

Can you test whether the details at bug 4253 help you to resolve the issue?

http://bugs.squid-cache.org/show_bug.cgi?id=4253#c13


Eliezer

 



  Eliezer Croitoru
Linux System Administrator
Mobile: +972-5-28704261
Email: elie...@ngtech.co.il



 

From: squid-users [mailto:squid-users-boun...@lists.squid-cache.org] On Behalf 
Of Ozgur Batur
Sent: Monday, June 27, 2016 6:02 PM
To: Amos Jeffries
Cc: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] flickr.com redirect error

 

The browser I used to test runs on the same machine as Squid; I changed it to 
explicit mode (no intercept - I set the proxy IP in the browser) during my 
attempts at SSL interception. Sorry, I forgot to mention that in my last post 
of logs. So the XFF localhost is normal, I guess. Here is the request log with port info:

--

2016/06/27 15:49:40.909 kid1| 11,2| http.cc(2234) sendRequest: HTTP Server local=10.100.136.56:47772 remote=188.125.93.100:443 FD 47 flags=1
2016/06/27 15:49:40.909 kid1| 11,2| http.cc(2235) sendRequest: HTTP Server REQUEST:
-
GET / HTTP/1.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/50.0.2661.102 Chrome/50.0.2661.102 Safari/537.36
Accept-Encoding: gzip, deflate, sdch
Accept-Language: tr,en-US;q=0.8,en;q=0.6
..
Host: www.flickr.com
Via: 1.1 ubuntuozgen (squid/3.5.19)
Surrogate-Capability: ubuntuozgen="Surrogate/1.0 ESI/1.0"
X-Forwarded-For: ::1
Cache-Control: max-age=259200
Connection: keep-alive


On Mon, Jun 27, 2016 at 2:27 PM, Amos Jeffries wrote:

On 27/06/2016 11:01 p.m., Ozgur Batur wrote:
> Yes that is much easier, thank you.
>
> Rafaels line is response header, I received the same. Here is the related
> cachelog:
>

What is the content of the line above this one, with the IP:port details?

> 2016/06/27 13:52:49.194 kid1| 11,2| http.cc(2235) sendRequest: HTTP Server
> REQUEST:
> GET / HTTP/1.1
> Accept:
> text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
> Upgrade-Insecure-Requests: 1
> User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like
> Gecko) Ubuntu Chromium/50.0.2661.102 Chrome/50.0.2661.102 Safari/537.36
> Accept-Encoding: gzip, deflate, sdch
> Accept-Language: tr,en-US;q=0.8,en;q=0.6
> ...
> Host: www.flickr.com  
> Via: 1.1 ubuntuozgen (squid/3.5.19)
> Surrogate-Capability: ubuntuozgen="Surrogate/1.0 ESI/1.0"
> X-Forwarded-For: ::1

You said this was using interception. But the Squid XFF is telling Yahoo
that it is receiving localhost traffic.

Try "forwarded_for transparent" in your squid.conf, and find out why
that ::1 is happening on an intercepted proxy. There may be a bug in
your NAT or routing configuration.
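
[For reference, a minimal squid.conf sketch of that suggestion; everything
else is assumed to stay as already configured:

  # "transparent" makes Squid pass the X-Forwarded-For header through
  # unaltered instead of appending the connecting client's IP (the ::1 above)
  forwarded_for transparent

Other accepted values are on (the default, append the client IP), off
(append "unknown"), delete and truncate.]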



> Cache-Control: max-age=0
> Connection: keep-alive
>
> ..
> 2016/06/27 13:52:49.477 kid1| 11,2| http.cc(751) processReplyHeader: HTTP
> Server REPLY:
> -
> HTTP/1.1 301 Moved Permanently
> X-Frame-Options: SAMEORIGIN
> X-Content-Type-Options: nosniff
> X-XSS-Protection: 1; mode=block
> X-Served-By: pprd1-node552-lh1.manhattan.bf1.yahoo.com
> X-Instance: flickr.v1.production.manhattan.bf1.yahoo.com
> Cache-Control: no-cache, max-age=0, must-revalidate, no-store
> Pragma: no-cache
> X-Request-Id: 36e709a2
> Location: https://www.flickr.com/
> Vary: Accept
> Content-Type: text/html; charset=utf-8
> Content-Length: 102
> Server: ATS
> Date: Mon, 27 Jun 2016 10:52:40 GMT
> Age: 0
> Via: http/1.1 fts111.flickr.bf1.yahoo.com (ApacheTrafficServer [cMs f ]),
>  http/1.1 r11.ycpi.dea.yahoo.net (ApacheTrafficServer [cMs f ])
> Connection: keep-alive
> ..
>
> And this repeats on and on. As I understand disabling Via header is an
> acceptable solution. If I could disable the header only for problematic
> domains that would be better of course.

Okay. Unfortunately not possible. If that forwarded_for change works it
would be better than disabling Via.

Amos





 

-- 

H Özgür Batur

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] large downloads got interrupted

2016-06-28 Thread Eugene M. Zheganin
Hi,

recently I started to have a problem where large downloads via squid are
often interrupted. I tried to investigate it, but, to be honest, got
nowhere. However, I took two tcpdump captures, and it seems to me that
for some reason squid sends a FIN to its client and correctly closes the
connection (wget reports that the connection is closed), and at the same
time, for some reason, it sends tons of RSTs towards the server. No
errors are reported in the logs (at least at the ALL,1 log level).

Screenshots of wireshark interpreting the tcpdump capture are here:

Squid(2a00:7540:1::4) to target server(2a02:6b8::183):

http://static.enaza.ru/userupload/gyazo/e5b976bf6f3d0cb666f0d504de04.png
(here you can see that all of a sudden squid starts sending RSTs, which
continue a long way down the screen; then the connection is re-established
(not on the screenshot taken))

Squid(fd00::301) to client(fd00::73d):

http://static.enaza.ru/userupload/gyazo/ccf4982593dc6047edb5d734160e.png  (here
you can see the client connection got closed)
I'm open to any idea that will help me to get rid of this issue.

Thanks.
Eugene.
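
[For anyone trying to reproduce this, a sketch of how the two captures could
be taken; the interface names here are assumptions, the addresses are the
ones from the report:

  # server-facing leg
  tcpdump -i eth0 -s 0 -w squid-to-server.pcap 'host 2a02:6b8::183'
  # client-facing leg
  tcpdump -i eth1 -s 0 -w squid-to-client.pcap 'host fd00::73d'
]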
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid-users Digest, Vol 22, Issue 136

2016-06-28 Thread Yuri
34 10.192.0.12 TCP_MISS/200 10828 POST
https://login.live.com/RST2.srf - ORIGINAL_DST/131.253.61.68
application/soap+xml
1467029287.998 56 10.192.0.12 TAG_NONE/200 0 CONNECT
157.55.133.204:443 - ORIGINAL_DST/157.55.133.204 -
1467029288.051 40 10.192.0.12 TAG_NONE/200 0 CONNECT
157.55.133.204:443 - ORIGINAL_DST/157.55.133.204 -
1467029288.204 46 10.192.0.12 TCP_MISS/302 538 GET
http://go.microsoft.com/fwlink/? - ORIGINAL_DST/23.66.120.244 -
1467029288.389 147 10.192.0.12 TCP_MISS/302 1786 GET
http://www.microsoft.com/security/encyclopedia/adlpackages.aspx? -
ORIGINAL_DST/23.203.90.59 text/html
1467029288.422 48 10.192.0.12 TAG_NONE/200 0 CONNECT
13.90.208.215:443 - ORIGINAL_DST/13.90.208.215 -
1467029288.882 311 10.192.0.12 TAG_NONE/200 0 CONNECT
104.41.32.78:443 - ORIGINAL_DST/104.41.32.78 -

Any Help?

Finally, where do you specify the following parameters in squid.conf:

sslproxy_cafile /usr/local/squid/etc/ca-bundle.crt
sslproxy_foreign_intermediate_certs 
/usr/local/squid/etc/intermediate_ca.pem


???




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

-- next part --
An HTML attachment was scrubbed...
URL: 
<http://lists.squid-cache.org/pipermail/squid-users/attachments/20160628/91929761/attachment.html>


--

Subject: Digest Footer

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


--

End of squid-users Digest, Vol 22, Issue 136



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid-users Digest, Vol 22, Issue 136

2016-06-28 Thread Anand Palani
ORIGINAL_DST/157.55.133.204 -
1467029288.051 40 10.192.0.12 TAG_NONE/200 0 CONNECT
157.55.133.204:443 - ORIGINAL_DST/157.55.133.204 -
1467029288.204 46 10.192.0.12 TCP_MISS/302 538 GET
http://go.microsoft.com/fwlink/? - ORIGINAL_DST/23.66.120.244 -
1467029288.389 147 10.192.0.12 TCP_MISS/302 1786 GET
http://www.microsoft.com/security/encyclopedia/adlpackages.aspx? -
ORIGINAL_DST/23.203.90.59 text/html
1467029288.422 48 10.192.0.12 TAG_NONE/200 0 CONNECT
13.90.208.215:443 - ORIGINAL_DST/13.90.208.215 -
1467029288.882 311 10.192.0.12 TAG_NONE/200 0 CONNECT
104.41.32.78:443 - ORIGINAL_DST/104.41.32.78 -

Any Help?

Finally, where do you specify the following parameters in squid.conf:

sslproxy_cafile /usr/local/squid/etc/ca-bundle.crt
sslproxy_foreign_intermediate_certs /usr/local/squid/etc/intermediate_ca.pem

???




___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

-- next part --
An HTML attachment was scrubbed...
URL: 
<http://lists.squid-cache.org/pipermail/squid-users/attachments/20160628/91929761/attachment.html>

--

Subject: Digest Footer

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


--

End of squid-users Digest, Vol 22, Issue 136



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid with HTTPS and some APPs not working

2016-06-28 Thread Yuri



On 28.06.2016 13:39, --Ahmad-- wrote:

Hi,
I have squid that is working on 3.5.

traffic on ports 80 and 443 is redirected to Squid via IPTables.

Squid then passes traffic to ClamAV via C-ICAP. Squid is configured to 
intercept all SSL traffic and PKI has been set up and distributed to 
all clients.


we have a problem with Skype for Business (Office 365) and Slack (chat 
app); it seems to be broken by the squid intercept.



I tried to make an ssl exception for the domains shown in the 
access.log file when I use the apps, but no luck


I tried to exclude the websites below:

skype.com 
lync.com
todyl.com
fastly\.net
.slack-msgs.com
.amazonaws.com
.slack.com 


#
but it is still not working and the apps (Skype for Business (Office 
365) and Slack (chat app)) are not working.


again, here is my nobump file:


cat /opt/etc/squid.doms.nobump

\.skype\.com$
\.lync\.com$
\.todyl\.com$
\.fastly\.net$
\.slack-msgs\.com$
\.amazonaws\.com$
\.slack\.com$

##

current versions we have :

·   Squid 3.5.19

·   C-ICAP 0.4.2

·   SquidclamAV 6.15

·   ClamAV 0.99.2

##

  here is squid.conf :

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost
http_access allow localhost manager
http_access deny manager

# Squid normally listens to port 3128
http_port 3127
http_port 3128 intercept

# Leave coredumps in the first cache dir
coredump_dir /var/cache/squid

visible_hostname shield.TodylInc.shield

cache_log /opt/var/log/squid/cache_log
cache_access_log /opt/var/log/squid/access_log

#user and group
cache_effective_user squid
cache_effective_group squid

acl todyl dstdomain todyl.com
request_header_add X-TODYL-GUID 1e46dccd2 todyl

#Custom Error Pages
error_directory /opt/www/squid

# Squid listen Port
https_port 3129 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=4MB key=/opt/etc/pki/squid/ca-key.pem 
cert=/opt/etc/pki/squid/ca.pem options=NO_SSLv2,NO_SSLv3,SINGLE_DH_USE


Search the list for the "Skype issue" thread, from some days ago.



# SSL Bump Config
always_direct allow all
ssl_bump server-first all
sslcrtd_program /opt/libexec/ssl_crtd -s /opt/lib/ssl_db -M 4MB
sslcrtd_children 32 startup=5 idle=1

##
acl DiscoverSNIHost at_step SslBump1
acl NoSSLIntercept ssl::server_name_regex -i "/opt/etc/squid.doms.nobump"
ssl_bump splice NoSSLIntercept
ssl_bump peek DiscoverSNIHost
ssl_bump bump all
##

#Hardening
sslproxy_options NO_SSLv2,NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE


#SINGLE_ECDH_USE
#  Enable ephemeral ECDH key exchange.
#  The adopted curve should be specified
#  using the tls-dh option.


#   tls-dh=[curve:]file
#File containing DH parameters for temporary/ephemeral DH key
#exchanges, optionally prefixed by a curve for ephemeral ECDH
#key exchanges.
#See OpenSSL documentation for details on how to create the
#DH parameter file. Supported curves for ECDH can be listed
#using the "openssl ecparam -list_curves" command.
#WARNING: EDH and EECDH ciphers will be silently disabled if
# this option is not set.

sslproxy_cipher 
EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS


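
[A sketch of what the quoted squid.conf documentation describes, with assumed
file paths; per the warning above, EDH/EECDH ciphers on a listening port are
silently disabled when tls-dh is not set there:

  # generate DH parameters once, per the OpenSSL docs referenced above
  openssl dhparam -out /opt/etc/pki/squid/dhparam.pem 2048

  # then reference them (optionally prefixed by a curve) on the listening port
  https_port 3129 intercept ssl-bump ... tls-dh=prime256v1:/opt/etc/pki/squid/dhparam.pem
]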



# TUNING
cache_dir aufs /var/cache/squid 4 16 256
store_dir_select_algorithm round-robin
minimum_object_size 0 KB
maximum_object_size 96 MB
memory_pools off
quick_abort_min 0 KB
quick_abort_max 0 KB
log_icp_queries off
client_db off
cache_mem 1500 MB
buffered_logs on
half_closed_clients off

dns_nameservers 10.192.0.1
##


here is squid -k parse :

[root@1e46dccd2 var]# squid -k 

[squid-users] squid with HTTPS and some APPs not working

2016-06-28 Thread --Ahmad--
Hi,
I have squid that is working on 3.5.
traffic on ports 80 and 443 is redirected to Squid via IPTables.

Squid then passes traffic to ClamAV via C-ICAP. Squid is configured to 
intercept all SSL traffic and PKI has been set up and distributed to all clients.

we have a problem with Skype for Business (Office 365) and Slack (chat app); it 
seems to be broken by the squid intercept.


I tried to make an ssl exception for the domains shown in the access.log 
file when I use the apps, but no luck 

I tried to exclude the websites below:

skype.com
lync.com
todyl.com
fastly\.net
.slack-msgs.com
.amazonaws.com
.slack.com 


#
but it is still not working and the apps (Skype for Business (Office 365) and 
Slack (chat app)) are not working.

again, here is my nobump file:


 cat /opt/etc/squid.doms.nobump

\.skype\.com$
\.lync\.com$
\.todyl\.com$
\.fastly\.net$
\.slack-msgs\.com$
\.amazonaws\.com$
\.slack\.com$

##

current versions we have :
·   Squid 3.5.19

·   C-ICAP 0.4.2

·   SquidclamAV 6.15

·   ClamAV 0.99.2

##

  here is squid.conf :

# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost
http_access allow localhost manager
http_access deny manager

# Squid normally listens to port 3128
http_port 3127
http_port 3128 intercept

# Leave coredumps in the first cache dir
coredump_dir /var/cache/squid

visible_hostname shield.TodylInc.shield

cache_log /opt/var/log/squid/cache_log
cache_access_log /opt/var/log/squid/access_log

#user and group
cache_effective_user squid
cache_effective_group squid

acl todyl dstdomain todyl.com
request_header_add X-TODYL-GUID 1e46dccd2 todyl

#Custom Error Pages
error_directory /opt/www/squid

# Squid listen Port
https_port 3129 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=4MB key=/opt/etc/pki/squid/ca-key.pem 
cert=/opt/etc/pki/squid/ca.pem options=NO_SSLv2,NO_SSLv3,SINGLE_DH_USE

# SSL Bump Config
always_direct allow all
ssl_bump server-first all 
sslcrtd_program /opt/libexec/ssl_crtd -s /opt/lib/ssl_db -M 4MB
sslcrtd_children 32 startup=5 idle=1

##
acl DiscoverSNIHost at_step SslBump1
acl NoSSLIntercept ssl::server_name_regex -i "/opt/etc/squid.doms.nobump"
 
ssl_bump splice NoSSLIntercept
ssl_bump peek DiscoverSNIHost
ssl_bump bump all
 
##
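
[For comparison, a sketch of the step-aware form of this no-bump exception as
it appears in other threads on this list, using the same ACL and file names;
this is an illustration of the peek-then-splice pattern only, not a verified
fix for the Skype/Slack problem:

  acl DiscoverSNIHost at_step SslBump1
  acl NoSSLIntercept ssl::server_name_regex -i "/opt/etc/squid.doms.nobump"

  # peek at step 1 to obtain the TLS clientHello and its SNI,
  # then splice (tunnel untouched) anything on the no-bump list
  ssl_bump peek DiscoverSNIHost
  ssl_bump splice NoSSLIntercept
  ssl_bump bump all

Note that clients which pin certificates, or which connect without SNI, can
still fail under any bumping configuration.]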

#Hardening
sslproxy_options NO_SSLv2,NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE
sslproxy_cipher 
EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS

# TUNING
cache_dir aufs /var/cache/squid 4 16 256
store_dir_select_algorithm round-robin
minimum_object_size 0 KB
maximum_object_size 96 MB
memory_pools off
quick_abort_min 0 KB
quick_abort_max 0 KB
log_icp_queries off
client_db off
cache_mem 1500 MB
buffered_logs on
half_closed_clients off

dns_nameservers 10.192.0.1
##


here is squid -k parse :

[root@1e46dccd2 var]# squid -k parse
2016/06/27 08:06:08| Startup: Initializing Authentication Schemes ...
2016/06/27 08:06:08| Startup: Initialized Authentication Scheme 'basic'
2016/06/27 08:06:08| Startup: Initialized Authentication Scheme 'digest'
2016/06/27 08:06:08| Startup: Initialized Authentication Scheme 'negotiate'
2016/06/27 08:06:08| Startup: Initialized Authentication Scheme 'ntlm'
2016/06/27 08:06:08| Startup: Initialized Authentication.
2016/06/27 08:06:08| Processing Configuration File: /opt/etc/squid.conf (depth 
0)
2016/06/27 08:06:08| Processing: acl localnet src 10.0.0.0/8 # RFC1918 possible 
internal network
2016/06/27 08:06:08| Processing: http_access allow localnet
2016/06/27 08:06:08| Processing: http_access allow localhost
2016/06/27 08:06:08| Processing: http_access allow localhost manager
2016/06/27 08:06:08| Processing: http_access deny manager
2016/06/27 08:06:08| Processing: http_port 3127
2016/06/27 08:06:08| Processing: http_port 3128 intercept
2016/06/27 08:06:08| Starting Authentication on port [::]:3128
2016/06/27 08:06:08| Disabling Authentication on port [::]:3128 (interception 
enabled)
2016/06/27 08:06:08| Processing: coredump_dir /var/cache/squid
2016/06/27 08:06:08| Processing: visible_hostname shield.TodylInc.shield
2016/06/27 08:06:08| Processing: cache_log /opt/var/log/squid/cache_log
2016/06/27 08:06:08|