[squid-users] cache-peer and tls

2019-08-03 Thread Eugene M. Zheganin

Hello,


I'm using squid 4.6 and I need to TLS-encrypt the session to the parent 
proxy. I have this in the config:



cache_peer proxy.foo.bar parent 3129 3130 tls 
tls-cafile=/usr/local/etc/squid/certs/le.pem 
sslcert=/usr/local/etc/letsencrypt/live/vpn.enazadev.ru/cert.pem 
sslkey=/usr/local/etc/letsencrypt/live/vpn.enazadev.ru/privkey.pem 
sslflags=DONT_VERIFY_DOMAIN,DONT_VERIFY_PEER



But no matter what I do, squid keeps logging that it doesn't like the 
peer certificate:



2019/08/03 18:42:24 kid1| ERROR: negotiating TLS on FD 23: 
error:14090086:SSL routines:ssl3_get_server_certificate:certificate 
verify failed (1/-1/0)
2019/08/03 18:42:24 kid1| temporary disabling (Service Unavailable) 
digest from proxy.foo.bar


and then it goes direct, bypassing the peer. :/


Is there any way to tell it that I don't care ?

I've also tried to actually point it at the CA cert with the 
tls-cafile=/usr/local/etc/squid/certs/le.pem option above; that doesn't 
work either.
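
(A Squid-4-style spelling of the same line might look like this - just a 
sketch, assuming the legacy ssl* options map onto their tls-* equivalents; 
untested, one logical line wrapped for mail:)

cache_peer proxy.foo.bar parent 3129 3130 tls
tls-cafile=/usr/local/etc/squid/certs/le.pem
tls-cert=/usr/local/etc/letsencrypt/live/vpn.enazadev.ru/cert.pem
tls-key=/usr/local/etc/letsencrypt/live/vpn.enazadev.ru/privkey.pem
tls-flags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN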



Thanks.

Eugene.



[squid-users] iOS 10.x, https and squid

2016-11-01 Thread Eugene M. Zheganin

Hi.

Does anyone have issues with iOS 10.x devices connecting through a proxy 
(3.5.x) to https-enabled sites ? Because I do. Non-https sites work 
just fine, but https ones just get stuck loading. First I thought that 
this was a problem with sslBump and disabled it, but this didn't help. I 
get this in the access log:


1478024222.324 48 192.168.243.10 TCP_DENIED/407 4388 CONNECT 
www.cisco.com:443 - HIER_NONE/- text/html
1478024222.373  0 192.168.243.10 TCP_DENIED/407 4649 CONNECT 
www.cisco.com:443 - HIER_NONE/- text/html
1478024222.468 53 192.168.243.10 TCP_TUNNEL/200 0 CONNECT 
www.cisco.com:443 emz HIER_DIRECT/2a02:26f0:18:185::90 -


and when requesting http version:

1478024355.685 69 192.168.243.10 TCP_MISS/200 14297 GET 
http://www.cisco.com/ emz HIER_DIRECT/2a02:26f0:18:19e::90 text/html
1478024355.885 47 192.168.243.10 TCP_MISS/304 335 GET 
http://www.cisco.com/etc/designs/cdc/clientlibs/responsive/css/cisco-sans.min.css 
emz HIER_DIRECT/2a02:26f0:18:19e::90 text/css
1478024355.910 45 192.168.243.10 TCP_REFRESH_UNMODIFIED/304 341 GET 
http://players.brightcove.net/1384193102001/NJgI8K0ie_default/index.min.js 
emz HIER_DIRECT/2.22.40.126 application/javascript
1478024355.942  0 192.168.243.10 TCP_DENIED/407 6611 GET 
http://www.cisco.com/etc/designs/catalog/ps/clientlib-all/custom-fonts/cisco-sans.min.css 
- HIER_NONE/- text/html
1478024355.969 60 192.168.243.10 TCP_MISS/304 335 GET 
http://www.cisco.com/etc/designs/catalog/ps/clientlib-all/css/cisco-sans.min.css 
emz HIER_DIRECT/2a02:26f0:18:19e::90 text/css


[...lots of other access stuff...]

Some may think "dude, you just misconfigured your squid". But the thing 
is, other browsers just work (and I don't have a MacBook to test whether 
laptops will); I have a couple of iPhones, and they don't. Funny thing: 
with authentication disabled (when my iPhone's IP is allowed) the browser 
on iOS loads https sites just fine.
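
(By "allowed" I mean a plain IP exemption from proxy auth - a minimal 
sketch, with the client address taken from the log above:)

# hypothetical exemption: let this client through without proxy authentication
acl ios_test src 192.168.243.10
http_access allow ios_test
# ...the usual proxy_auth-protected rules follow for everyone else...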


Thanks.

Eugene.



Re: [squid-users] connections from particular users sometimes get stuck

2016-09-28 Thread Eugene M. Zheganin
Hi.

On 28.09.2016 01:36, Alex Rousskov wrote:
> On 09/27/2016 02:02 PM, Eugene M. Zheganin wrote:
>
>> I guess squid
>> didn't get a way to increase debug level on the fly ? 
> "squid -k debug" (or sending an equivalent signal) does that:
> http://wiki.squid-cache.org/SquidFaq/BugReporting#Detailed_Debug_Output
>
> You will not get ALL,9 this way, unfortunately, but ALL,7 might be enough.
>
>
I took the debug trace and both tcpdump captures, client-side and
server-side (towards the internet).
Since the debug log is quite heavy, I decided to put all three
files on a web server. Here they are:

Squid debug log (ALL,7):

http://zhegan.in/files/squid/cache.log.debug

tcpdump client-side capture (windump -s 0 -w
squid-stuck-reference-client.pcap -ni 1):

http://zhegan.in/files/squid/squid-stuck-reference-client.pcap

tcpdump server-side capture, towards the outer world - empty; obviously,
the server didn't send anything outside (tcpdump -s 0 -w
squid-stuck-reference-server.pcap -ni vlan23 host 217.112.35.75):

http://zhegan.in/files/squid/squid-stuck-reference-server.pcap

Test sequence:

client - 192.168.3.215
squid - 192.168.3.1:3128
URL - http://www.ru/index.html

I requested http://www.ru/index.html from Chrome on the client machine. No
other applications were requesting this URL there at the time (the
capture does contain a lot of other traffic, though, including HTTP
sessions). Then I waited about a minute (the loader in Chrome kept
spinning), stopped both captures, and then aborted the request. The
aborted request probably made it into the squid log.

Eugene.



Re: [squid-users] connections from particular users sometimes get stuck

2016-09-27 Thread Eugene M. Zheganin

Hi.

On 28.09.2016 0:29, Alex Rousskov wrote:

Since you can reproduce this, I suggest collecting ALL,9 log for the
stuck master transaction:

http://wiki.squid-cache.org/SquidFaq/BugReporting#Debugging_a_single_transaction

If collecting a debugging trace is impossible for some reason, then
collect the corresponding TCP packets on the Squid to origin server link
and post actual packets (not screenshots of packet summaries) from both
connections. The debugging trace will most likely have the answer. The
packet trace might have the answer.

You may need to change user credentials for this test or after posting
the details requested above.

Well... I cannot reproduce it on purpose; I'm just saying it has been 
self-reproducing for almost a year, at certain moments in time. 
Collecting a debug trace isn't hard by itself, but I'm pretty sure a 
restart will clear this state on the current machine (I guess squid 
didn't get a way to increase the debug level on the fly ? at least I'm not 
aware of one; so I would need to restart it to set ALL,9), and I'd have 
to run with ALL,9 for quite some time, which is, obviously, not good for 
production, because it creates enormous amounts of logging in the cache 
log. So I will post the tcpdump containing both exchanges, and if things 
are still unclear I'll think about running in debug mode.
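
(For completeness: the no-restart toggle Alex later pointed me to - see his 
reply quoted in the 09-28 message above - a sketch:)

# toggle full debugging (ALL,7) on without a restart
squid -k debug
# ...reproduce the stuck request...
# toggle back to the configured debug level
squid -k debug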


Thanks.
Eugene.


[squid-users] connections from particular users sometimes get stuck

2016-09-27 Thread Eugene M. Zheganin

Hi.

I have a weird problem. I run a squid cache 3.5.19 on FreeBSD/amd64, 
with about 300 active users, lots of authentication, external helpers 
(yeah, this is usually the place where one starts to post configs, but let 
me get to the point), and everything basically works just fine, but 
sometimes one particular user (I don't know, maybe it's one particular 
machine or some other entity) starts to have trouble. The usual trouble 
looks like the following:


- around 299 users are working and authenticating just fine

- one particular user starts experiencing stuck connections: his 
browser requests a web page, it starts to load, and then some random 
object on it blocks indefinitely.


- this happens every time on one machine at any given time. The machine 
stays the same for a given occurrence of the issue, until it's gone. Then 
it's some other machine, and I cannot figure out the pattern.


- the machine may be locked in this malfunctioning state for days. The 
state is usually cleared by a squid restart, or it may clear itself.


- after a month or so the issue appears on another machine, and it 
persists on the new machine for quite some time.


On the L3 level this looks simple: the browser requests an object, gets a 
407 answer, replies with the proper credentials set, and then the connection 
goes into a keepalive state indefinitely: the squid and the browser 
send keepalives to each other, but nothing happens other than 
keepalives. The user sees the spinning loader on a browser tab, and some 
content inside the tab, depending on how many objects the browser has 
received. At the same time new connections to squid are opening from 
this machine just fine, and basic connectivity is normal for both 
the squid and the troubled machine. Furthermore, I'm sure this 
problem isn't caused by bottlenecks on the squid machine: if it were, 
all the users would eventually have this problem, not only one. 
Nor is it a bottleneck on the user machine: while the 
browser is stuck, other applications are working fine. If I switch the 
proxy to a backup squid (on another server) this machine is able to 
browse the internet.


I really need to solve this, but I have no idea where to start. The 
error log shows nothing suspicious.


The wireshark screen where the issue is isolated to one particular 
connection can be found here - 
https://gyazo.com/fdec1d9d7c31a75afc7d4676abb83d15 (it's really a simple 
picture: TCP connection establishment, then GET -> 407 -> GET and a bunch 
of keepalives, not rocket science).


Any ideas ?

Thanks.

Eugene.



Re: [squid-users] large downloads got interrupted

2016-08-11 Thread Eugene M. Zheganin
Hi.

On 30.06.16 17:19, Amos Jeffries wrote:
>
> Okay, I wasn't suggesting you post it here. It's likely to be too big for
> that.
>
> I would look for the messages about the large object, and its FD. Then,
> for anything about why it was closed by Squid. Not sure what that would be
> at this point though.
> There are some scripts in the Squid sources scripts/ directory that
> might help wade through the log. Or the grep tool.
>
>
I enabled log level 2 for all squid facilities, but so far I haven't
figured out any pattern from the log. The only thing I noticed is that for
a large download the Recv-Q value reported by netstat for the particular
squid-to-server connection is extremely high, and so is the Send-Q value
for the connection from squid to the client. I don't know if it's a cause
or a consequence, but from my point of view this may indicate that buffers
are overflowing for some reason; I think this may, in turn, cause RSTs and
connection closing - am I right ? I still don't know whether it's a
squid fault or maybe a local OS misconfiguration.
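
(The check itself is trivial - a sketch, with the proxy port assumed to be
3128:)

# watch socket queues on squid's connections while a large download runs;
# Recv-Q and Send-Q are the 2nd and 3rd columns of the output
netstat -an | grep 3128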

Eugene.


Re: [squid-users] NOTICE: Authentication not applicable on intercepted requests.

2016-06-30 Thread Eugene M. Zheganin

Hi.

On 30.06.2016 17:04, Amos Jeffries wrote:

On 30/06/2016 9:21 p.m., Eugene M. Zheganin wrote:

Hi,

Could this message be moved to loglevel 2 instead of 1 ?
I think this message makes up 95% of the log volume on intercept-enabled
caches with authentication.

At least some switch would be nice, to turn this off instead of
switching the whole facility to 0.

This message only happens when your proxy is misconfigured.

Well, it may be.


Use a myportname ACL to prevent Squid attempting impossible things like
authentication on intercepted traffic.


Sorry, but I still don't get the idea. I have one port that squid is 
configured to intercept traffic on, and another for plain proxy requests. How 
do I tell squid not to authenticate anyone on the intercept one ? From what I 
know, squid will send the authentication sequence as soon as it encounters an 
authentication-related ACL in the ACL list for the given request. Do I have to 
add a myportname ACL with the non-intercepting port to all occurrences of the 
auth-enabled ACLs, or maybe there's a simpler way ?
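
(A sketch of how I read the suggestion - hypothetical port names, untested;
'localnet' stands for whatever src ACL fits:)

http_port 3128 name=explicitport
http_port 3129 intercept name=interceptport

acl from_intercept myportname interceptport
# decide on intercepted traffic by IP only, before any auth ACL is evaluated
http_access allow from_intercept localnet
http_access deny from_intercept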

Thanks.
Eugene.



[squid-users] NOTICE: Authentication not applicable on intercepted requests.

2016-06-30 Thread Eugene M. Zheganin
Hi,

Could this message be moved to loglevel 2 instead of 1 ?
I think this message makes up 95% of the log volume on intercept-enabled
caches with authentication.

At least some switch would be nice, to turn this off instead of
switching the whole facility to 0.

Thanks.
Eugene.


Re: [squid-users] large downloads got interrupted

2016-06-29 Thread Eugene M. Zheganin
Hi.

On 29.06.16 05:26, Amos Jeffries wrote:
> On 28/06/2016 8:46 p.m., Eugene M. Zheganin wrote:
>> Hi,
>>
>> recently I started to get a problem where large downloads via squid are
>> often interrupted. I tried to investigate it but, to be honest, got
>> nowhere. However, I took two tcpdump captures, and it seems to me that
>> for some reason squid sends a FIN to its client and correctly closes the
>> connection (wget reports that the connection was closed), and at the same
>> time for some reason it sends like tons of RSTs towards the server. No
>> errors are reported in the logs (at least at an ALL,1 loglevel).
>>
> It sounds like a timeout or such has happened inside Squid. We'd need to
> see your squid.conf to see if that was it.
Well... it's quite long, since it's a large production site. I guess you
don't need the acl and auth lines, so without them it's as follows
(nothing secret in them, they are just really numerous):

===Cut===
# cat /usr/local/etc/squid/squid.conf | grep -v http_access | grep -v
acl | grep -v http_reply_access | egrep -v '^#' | egrep -v '^$'
visible_hostname proxy1.domain1.com
debug_options ALL,1
http_port [fd00::301]:3128 ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
http_port [fd00::316]:3128 ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
http_port 192.168.3.1:3128 ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
http_port 127.0.0.1:3128 ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
http_port 127.0.0.1:3129 intercept
http_port [::1]:3128
http_port [::1]:3129 intercept
https_port 127.0.0.1:3131 intercept ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
https_port [::1]:3131 intercept ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
icp_port 3130
dns_v4_first off
shutdown_lifetime 5 seconds
workers 2
no_cache deny QUERY
cache_mem 256 MB
cache_dir rock /var/squid/cache 1100
cache_access_log stdio:/var/log/squid/access.fifo
cache_log /var/log/squid/cache.log
cache_store_log none
cache_peer localhost parent 8118 0 no-query default
auth_param negotiate program /usr/local/libexec/squid/negotiate_wrapper_auth --ntlm
/usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --kerberos
/usr/local
authenticate_ip_ttl 60 seconds
positive_dns_ttl 20 minutes
negative_dns_ttl 120 seconds
negative_ttl 30 seconds
pid_filename /var/run/squid/squid.pid
ftp_user anonymous
ftp_passive on
ipcache_size 16384
fqdncache_size 16384
redirect_children 10
refresh_pattern -i . 0 20% 4320
sslcrtd_program /usr/local/libexec/squid/ssl_crtd -s /var/squid/ssl -M 4MB
sslcrtd_children 15
auth_param negotiate program
/usr/local/libexec/squid/negotiate_wrapper_auth --ntlm
/usr/local/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --kerberos
/usr/local/libexec/squid/negotiate_kerberos_auth -s
HTTP/proxy1.domain1@domain.com
auth_param negotiate children 40 startup=5 idle=5
auth_param negotiate keep_alive on
auth_param ntlm program /usr/local/bin/ntlm_auth -d 0
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 60
auth_param basic program /usr/local/libexec/squid/basic_pam_auth
auth_param basic children 35 startup=5 idle=2
auth_param basic realm Squid
auth_param basic credentialsttl 10 minute
auth_param basic casesensitive off
authenticate_ttl 10 minute
authenticate_cache_garbage_interval 10 minute
snmp_access allow fromintranet
snmp_access allow localhost
snmp_access deny all
snmp_port 340${process_number}
snmp_incoming_address 192.168.3.22
tcp_outgoing_address 192.168.3.22 intranet
tcp_outgoing_address fd00::316 intranet6
tcp_outgoing_address 86.109.196.3 ad-megafon
redirector_access deny localhost
redirector_access deny SSL_ports
icp_access allow children
icp_access deny all
always_direct deny fuck-the-system-dstdomain
always_direct deny fuck-the-system
always_direct deny onion
always_direct allow all
never_direct allow fuck-the-system-dstdomain
never_direct allow fuck-the-system
never_direct allow onion
never_direct deny all
miss_access allow manager
miss_access allow all
cache_mgr e...@domain1.com
cache_effective_user squid
cache_effective_group squid
sslproxy_cafile /usr/local/etc/squid/certs/ca.pem
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER
deny_info ERR_NO_BANNER banner
deny_info ERR_UNAUTHORIZED unauthori

[squid-users] large downloads got interrupted

2016-06-28 Thread Eugene M. Zheganin
Hi,

recently I started to get a problem where large downloads via squid are
often interrupted. I tried to investigate it but, to be honest, got
nowhere. However, I took two tcpdump captures, and it seems to me that
for some reason squid sends a FIN to its client and correctly closes the
connection (wget reports that the connection was closed), and at the same
time for some reason it sends like tons of RSTs towards the server. No
errors are reported in the logs (at least at an ALL,1 loglevel).

Screenshots of wireshark interpreting the tcpdump captures are here:

Squid (2a00:7540:1::4) to target server (2a02:6b8::183):

http://static.enaza.ru/userupload/gyazo/e5b976bf6f3d0cb666f0d504de04.png
(here you can see that all of a sudden squid starts sending RSTs, which go
a long way down the screen; then the connection reestablishes (not on the
screenshot taken))

Squid (fd00::301) to client (fd00::73d):

http://static.enaza.ru/userupload/gyazo/ccf4982593dc6047edb5d734160e.png
(here you can see the client connection got closed)

I'm open to any idea that will help me get rid of this issue.

Thanks.
Eugene.


Re: [squid-users] ext_kerberos_ldap_group_acl and Kerberos cache

2016-05-18 Thread Eugene M. Zheganin

Hi.

On 18.05.2016 16:29, Amos Jeffries wrote:


I don't know what you mean by "the main tree". But The feature you
describe does not qualify for adding to the 3.5 production release
series. The only features added to a series after is goes to "stable"
production releases are ones which resolve non-feature bugs or can be
done without affecting existing installations.
Well, you can treat the kerberos cache in the kerberos group ACL helper as 
both. It doesn't affect current installations in any way: it neither 
changes the configuration syntax nor adds new caveats. At the same 
time it can be considered a bugfix: as far as I know it was 
supposed to exist in the helper from the start, but was misimplemented. 
All it adds is the cache: it caches the credentials up to their TTL, 
which is defined by the ticket (not by squid, not by the helper).

By changing the helper behaviour in all cases this clearly affects
existing installations. So only qualifies for including into the next
series, which is Squid-4.

It doesn't change helper behaviour, it fixes it.

Eugene.


[squid-users] ext_kerberos_ldap_group_acl and Kerberos cache

2016-05-17 Thread Eugene M. Zheganin
Hi.

I've just checked the squid 3.5.19 sources, and discovered the
following fact, which is really disturbing:
(first some explanation)
Markus Moeller, the author of the external kerberos group helper,
implemented a Kerberos credentials cache in the
ext_kerberos_ldap_group_acl helper back in 2014. The idea is to
cache the credentials inside the helper instance, so when it encounters
a request with a user id and group that are already in the cache, the
helper can skip the kerberos initialization sequence for this set of
credentials. This cached version is many times faster than the original
one, which doesn't use the cache.
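
(For context, the helper is typically wired in along these lines - a
hypothetical sketch, group and realm made up:)

external_acl_type krb_group ttl=3600 %LOGIN /usr/local/libexec/squid/ext_kerberos_ldap_group_acl -g internet-users@EXAMPLE.COM
acl inet_allowed external krb_group
http_access allow inet_allowed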

(now the disturbing fact)
Surprisingly, the cached version hasn't made it into the main tree for the
past 2 years.
Could this situation be corrected, please ?

Thanks.
Eugene.


[squid-users] squid, SMP and authentication and service regression over time

2016-05-16 Thread Eugene M. Zheganin

Hi.

I've been using squid for a long time; I use it to authenticate/authorize 
users accessing the Internet with LDAP in a Windows corporate 
environment (Basic/NTLM/GSS-SPNEGO), and recently (several months 
ago) I had to switch to the SMP scheme, because one process sometimes 
started to eat a whole core, thus bottlenecking the users on it. The 
CPU situation improved; however, I discovered several issues. The first 
one I was aware of: the non-functional SNMP (since there's no solution, 
I just had to sacrifice it). But the second one is more disturbing. I 
discovered that after some uptime (usually a couple of weeks, a month at 
best) squid somehow degrades and stops authorizing users.

I have about 600 active users on my biggest site (without SNMP I'm not 
sure how many simultaneous users I get), but usually it starts like 
this: someone (it starts with one person) complains that he has lost his 
access to the internet - not entirely, no. At first the access is very 
slow, and the victim has to wait several minutes for a page to load. 
Others are unaffected at this time. From time to time the victim is 
able, eventually, to load one of the tabs in the browser, but by the end 
of the day this becomes unusable, and my support has to come in. Then it 
gets escalated to me. First I was debugging various kerberos stuff, 
NTLM, the victim's machine domain membership and so on. But today I 
managed to figure out that all I have to do is just restart squid, yeah 
(sounds silly, but I don't like restarting things; like in the "IT 
Crowd" TV series, this is kind of a last resort measure, for when I'm 
desperate). If I'm stubborn enough to continue the investigation, soon I 
get 2 users complaining, then 3, then more.

During previous outages I did eventually restart squid (to change the 
domain controller in the kerberos config, if I blamed that; to disable 
the external Kerberos/LDAP helper connection pooling, if I blamed that) 
- so each time there was a candidate to blame. But this time I just 
decided to restart squid, since I started to think it's the main reason, 
et voila. I should also mention that I have run this AAA scheme in squid 
for years, and I didn't have this issue previously. I also have like a 
dozen other squids running the same (very similar) config - the same AAA 
stuff, Basic/NTLM/GSS-SPNEGO, the same AD group checking, only for 
different group memberships - and none of them has this issue. I'm 
thinking SMP is involved, really.
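
(For reference, the SMP-related part of the config is tiny - just the
worker count plus the per-worker SNMP port macro from my other posts here:)

workers 2
snmp_port 340${process_number}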


I realize this is a poor problem report: "something degrades, I restart 
squid, please help, I think it's SMP-related". But the thing is - I 
don't know where to start narrowing this down. If anyone has a 
good idea, please let me know.


Thanks.
Eugene.


Re: [squid-users] Assign multiple IP Address to squid

2015-12-29 Thread Eugene M. Zheganin
Hi.

On 29.12.2015 17:05, Reet Vyas wrote:
> Hi
>
> I have working squid3.5.4 configuration with ssl bump, I am using this
> squid machine as router and have external IP to it and have a leased
> line connection but with leased line I have 10 extra IP address and I
> want to NAT those external ip to local ip on same network, like we do
> in our router, so that I can assign those IP ip my machines having
> webservers.
>
> Please suggest me way to configure it.
>
This has nothing to do with squid.

Eugene.


Re: [squid-users] sslBump adventures in enterprise production environment

2015-12-28 Thread Eugene M. Zheganin
Hi.

On 16.11.2015 0:39, Alex Rousskov wrote:
> On 11/15/2015 12:03 PM, Eugene M. Zheganin wrote:
>> It's not even a HTTPS, its a tunneled HTTP CONNECT. But
>> squid for some reason thinks there shoudl be a HTTPS inside.
> Hello Eugene,
>
>  Squid currently supports two kinds of CONNECT tunnels:
>
> 1. A regular opaque tunnel, as intended by HTTP specifications.
>
> 2. An inspected tunnel containing SSL/TLS-encrypted HTTP traffic.
>
> Opaque tunnels are the default. Optional SslBump-related features allow
> the admin to designate admin-selected CONNECT tunnels for HTTPS
> inspections (of various depth). This distinction explains why and when
> Squid expects "HTTPS inside".
>
> There is currently no decent support for inspecting CONNECT tunnels
> other than SSL/TLS-encrypted HTTP (i.e., HTTPS) tunnels.
>
> Splicing a tunnel at SslBump step1 converts a to-be-inspected tunnel
> into an opaque tunnel before inspection starts.
>
> The recently added on_unsupported_protocol directive can automatically
> convert being-inspected non-HTTPS tunnels into opaque ones in some
> common cases, but it needs more work to cover more cases.
>
>
> AFAICT, you assume that "splicing" turns off all tunnel inspection. This
> is correct for step1 (as I mentioned above). This is not correct for
> other steps because they happen after some inspection already took
> place. Inspection errors that on_unsupported_protocol cannot yet handle,
> may result in connection termination and other problems.
>
>
> If Squid behavior contradicts some of the above rules, it is probably a
> bug we should fix. Otherwise, it is likely to be a missing feature.
>
>
> Finally, if Squid kills your ICQ (non-HTTPS) client tunnels, you need to
> figure out whether those connections are inspected (i.e., go beyond
> SslBump step1). If they are inspected, then this is not a Squid bug but
> a misconfiguration (unless the ACL code itself is buggy!). If they are
> not inspected, then it is probably a Squid bug. I do not have enough
> information to distinguish between those cases, but I hope that others
> on the mailing list can guide you towards a resolution given the above
> information.
>

Thanks a lot for this explicit explanation.
I managed to solve the problem with ICQ using the information above, no
matter which port, 5190 or 443, it's tunneled into. Even
"on_unsupported_protocol" isn't needed, so the whole thing works just
fine on 3.5.x. In case someone else needs this too, I decided to post the
relevant config part:

#
# Minimum ICQ configuration,
# works for QIP 2012 and squid/ssl_bump; login.icq.com port should be
# either 443 or 5190
#

acl icq dstdomain login.icq.com
acl icqport port 443
acl icqport port 5190

# mail.ru network where ICQ servers reside
acl icqip dst 178.237.16.0/20

acl step1 at_step SslBump1

#
# http_access part is needed; not shown here since it's ordinary, for
# qip or web clients to work
#

# this should be somewhere near the top of the ssl_bump directives piece
ssl_bump splice step1 icq
ssl_bump splice step1 icqip icqport
[...other ssl_bump directives...]

Thanks.
Eugene.


[squid-users] sslBump, squid in transparent mode

2015-12-28 Thread Eugene M. Zheganin
Hi.

I'm still trying to figure out why I get a certificate generated for the IP
address instead of the hostname when HTTPS traffic is intercepted by an
sslBump-enabled squid. I'm using pf to redirect it:

rdr on $iifs inet proto tcp from 192.168.0.0/16 to ! port 443
-> 127.0.0.1 port 3131
rdr on vpn inet proto tcp from 192.168.0.0/16 to ! port 443 ->
127.0.0.1 port 3131

and the port is configured as follows:

https_port 127.0.0.1:3131 intercept ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
https_port [::1]:3131 intercept ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem

This way I'm getting a warning in the browser (https://youtube.com is opened
in the example below):

===Cut===
youtube.com uses an invalid security certificate.

The certificate is not trusted because the issuer certificate is unknown.
The server might not be sending the appropriate intermediate certificates.
An additional root certificate may need to be imported.
The certificate is only valid for 173.194.71.91

(Error code: sec_error_unknown_issuer)
===Cut===

And the tcpdump capture clearly shows that the client browser did send an SNI:

https://gyazo.com/c1ba348fb4ee56c6c30f3e22ff9877f8

I'll appreciate any help.

Thanks.
Eugene.


[squid-users] squid authentication mechs

2015-12-16 Thread Eugene M. Zheganin

Hi.

Is there a way to limit the set of authentication mechanisms offered 
(to a client browser) based on the particular squid IP the browser 
connects to, e.g. via the http_port configuration directive ? For 
example, this is needed when one needs to allow non-domain machines to 
pass the authentication/authorization checks on a squid with 
full-fledged AD integration (or Kerberos/NTLM, anyway); otherwise they 
are unable to do it. Once they could, for example using Chrome < 41, but 
since 41 Chrome has removed all the options to exclude certain 
authentication methods from its CLI (I still wonder what 
genius proposed this).


If not (and I believe there isn't), could this message be treated as a 
feature request ?


Thanks.
Eugene.


Re: [squid-users] Fwd: NTLM LDAP authentication problem

2015-11-16 Thread Eugene M. Zheganin
Hi,

On 16.11.2015 19:51, Matej Kotras wrote:
> Thank you for your response, as this is my first try with Squid, and
> fairly newb in Linux.
> I do not understand at all differences between basic/ntlm/gss-spnego
> auths so I will do my homework and read about them. I've managed to
> get this working after few weeks of "trial and error" method (I know,
> I know, but I gotta start somewhere rite) following multiple guides.
>
The usual issue with all those copy/paste tutorials is that they tend to
teach how to do everything at once, instead of moving from simple things
to more difficult ones. The order of increasing difficulty is the
following:

- adding Basic authentication, where all authenticated users are authorized
to use the proxy (a minimal sketch of this step follows the list)
- adding NTLM authentication, all authenticated users are authorized to
use the proxy
- adding group-based authorization: authenticated users are authorized
to use the proxy based on group membership, using a simple helper like
squid_group_ldap
- adding GSS-SPNEGO authentication
- adding a full-fledged GSS-SPNEGO group authorization helper.
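
(The first step can be as small as this - a sketch with made-up LDAP
details, helper flags as in the configs posted on this list:)

# Basic auth against AD LDAP; any user who authenticates may use the proxy
auth_param basic program /usr/lib64/squid/basic_ldap_auth -R -b "dc=example,dc=com" -D squid@example.com -W /etc/squid/ldappass.txt -f sAMAccountName=%s -h dc1.example.com
auth_param basic realm Please enter your domain user name
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all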

You can try my article,
http://squidquotas.hq.norma.perm.ru/squid-auth.shtml. Though it's not
perfect and still lacks the last two steps, at least it tries to follow
that approach.

Eugene.


Re: [squid-users] Active Directory Authentication failing at the browser

2015-11-16 Thread Eugene M. Zheganin
Hi.

On 16.11.2015 18:46, dol...@ihcrc.org wrote:
>
> Squid Version:  Squid 3.4.8
>
> OS Version:  Debian 8 (8.2)
>
>  
>
> I have installed Squid on a server using Debian 8 and seem to have the
> basics operating; at least when I start the squid service, I am
> no longer getting any error messages.  At this time, the goal is to
> authenticate users from Active Directory and log the user and the
> websites they are accessing.
>
>  
>
> The problem I am having is, when I set Firefox 35.0.1 on my Windows 7
> workstation to use the Squid proxy, I am getting the log in page
> (image below).
>
>  
>
> [inline image: the proxy login prompt]
>
>  
>
> I have tried entering my user name in various forms: EXAMPLE/USERID,
> USERID, EXAMPLE/ADMINISTRATOR, ADMINISTRATOR, along with the password,
> and I have not had a success at this time.
>
>  
>
> I have attached the squid.conf, smb.conf, krb5.conf, and access.log
> files for review.  If you would like to see the cache.log file, please
> contact me as the file is too large to include in this post.
>
>  
>
>
I suggest you first get Basic and NTLM working with Active Directory,
and only then, having these 2 schemes working, move to the
GSS-SPNEGO scheme. This is because the GSS-SPNEGO scheme is overcomplicated
and difficult to debug, as it uses lots of components and can fall apart
easily at any stage.

Eugene.


Re: [squid-users] Fwd: NTLM LDAP authentication problem

2015-11-16 Thread Eugene M. Zheganin
On 16.11.2015 14:29, Matej Kotras wrote:
> Hi guys
>
> I've managed squid to work with AD, and authorize users based on what
> AD group they are in. I use Squid-Analyzer for doing reports from
> access.log. I've found 2 anomalies with authorization so far. In
> access log, I see that user is authorized based on his PC name(not
> desired) and not on the user account name. I've just enabled debugging
> on negotiate wrapper, so I will monitor these logs also.
>
> But in the meantime, have you got any idea why could this happen ?
>
> *PC NAME AUTH:*
> 1447562119.348  0 10.13.34.31 TCP_DENIED/407 3834 CONNECT
> clients2.google.com:443  -
> HIER_NONE/- text/html
> 1447562119.374  2 10.13.34.31 TCP_DENIED/407 4094 CONNECT
> clients2.google.com:443  -
> HIER_NONE/- text/html
> 1447562239.350 119976 10.13.34.31 TCP_MISS/200   4200 CONNECT
> clients2.google.com:443  icz800639-03$
> HIER_DIRECT/173.194.116.231  -
>
> *USER NAME AUTH:*
> 1447562039.176  0 10.13.34.31 TCP_DENIED/407 3850 CONNECT
> lyncwebext.inventec.com:443  -
> HIER_NONE/- text/html
> 1447562039.215 27 10.13.34.31 TCP_DENIED/407 4110 CONNECT
> lyncwebext.inventec.com:443  -
> HIER_NONE/- text/html
> 1447562041.118   2702 10.13.34.31 TCP_MISS/200   6213 CONNECT
> lyncwebext.inventec.com:443 
> icz800639 HIER_DIRECT/10.8.100.165  -
Doesn't seem like you have a working GSS-SPNEGO scheme - unless you have
username fields with the realm set in the log lines you didn't post here.

>
>
> *Squid.conf*
> #
> #Enable KERBEROS authentication#
> #
>
> auth_param negotiate program /usr/local/bin/negotiate_wrapper -d
> --ntlm /usr/bin/ntlm_auth --diagnostics
> --helper-protocol=squid-2.5-ntlmssp --domain=ICZ --kerberos
> /usr/lib64/squid/negotiate_kerberos_auth -s GSS_C_NO_NAME
> auth_param negotiate children 20 startup=0 idle=1
> auth_param negotiate keep_alive off
>
>
> #
> #Enable NTLM authentication#
> #
>
> #auth_param ntlm program /usr/bin/ntlm_auth --diagnostics
> --helper-protocol=squid-2.5-ntlmssp --domain=ICZ
> #auth_param ntlm children 10
> #auth_param ntlm keep_alive off
So you disabled explicit NTLM authentication. That's bad. This far
you only have GSS-SPNEGO falling back to NTLM.
>
>
> #
> # ENABLE LDAP AUTH#
> #
>
> auth_param basic program /usr/lib64/squid/basic_ldap_auth -R -b
> "dc=icz,dc=inventec" -D squid@icz.inventec -W /etc/squid/ldappass.txt
> -f sAMAccountName=%s -h icz-dc-1.icz.inventec
> auth_param basic children 10
> auth_param basic realm Please enter user name to access the internet
> auth_param basic credentialsttl 1 hour
This is pure basic.
>
> external_acl_type ldap_group ttl=3600 negative_ttl=0 children-max=50
> children-startup=10  %LOGIN /usr/lib64/squid/ext_wbinfo_group_acl
>
The part with http_access is missing; it's hard to tell why you have
TCP_MISS for machine accounts.

Eugene.


Re: [squid-users] sslBump adventures in enterprise production environment

2015-11-15 Thread Eugene M. Zheganin
Hi.

On 16.11.2015 00:14, Yuri Voinov wrote:

> It's common knowledge. Squid is unable to pass an unknown protocol on
> the standard port. Consequently, the ability to proxy this protocol does
> not exist.
>
> If it was simply a tunneling ... It is not https. And not just
> HTTP-over-443. This is more complicated and very marginal protocol.
>
I'm really sorry to tell you that, but you are perfectly wrong. These
non-HTTPS tunnels have been working for years. And this isn't HTTPS,
because of:

# openssl s_client -connect login.icq.com:443
CONNECTED(0003)
34379270680:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown
protocol:/usr/src/secure/lib/libssl/../../../crypto/openssl/ssl/s23_clnt.c:782:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 297 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
---

Eugene.


Re: [squid-users] sslBump adventures in enterprise production environment

2015-11-15 Thread Eugene M. Zheganin
Hi.

On 15.11.2015 0:43, Walter H. wrote:
> On 13.11.2015 14:53, Yuri Voinov wrote:
>> There is no solution for ICQ with Squid now.
>>
>> You can only bypass proxying for ICQ clients.
> from where do the ICQ clients get the trusted root certificates?
> maybe this is the problem, that e.g. the squid CA cert is only 
> installed in FF
> and nowhere else ...
From nowhere. It's not even HTTPS, it's a tunneled HTTP CONNECT. But
squid for some reason thinks there should be HTTPS inside.

Eugene.


Re: [squid-users] sslBump adventures in enterprise production environment

2015-11-14 Thread Eugene M. Zheganin
Hi.

On 13.11.2015 18:53, Yuri Voinov wrote:
> There is no solution for ICQ with Squid now.
>
> You can only bypass proxying for ICQ clients.
>
There is: I can disable sslBump, and I did it already. It doesn't look
production-ready anyway.

Eugene.


[squid-users] sslBump adventures in enterprise production environment

2015-11-13 Thread Eugene M. Zheganin
Hi.

Today I discovered that a bunch of old legacy ICQ clients that some
people still use have lost the ability to use HTTP CONNECT tunneling with
sslBump. No matter what I tried in order to allow direct splicing for them,
it was all useless:

- arranging them by dst ACL, and splicing that ACL
- arranging them by ssl::server_name ACL, and splicing it

So I had to turn off sslBumping. It looks like it somehow interferes with
HTTP CONNECT even when splicing.
The last version of the sslBump part of the config looked like this:


acl icqssl ssl::server_name login.icq.com
acl icqssl ssl::server_name go.icq.com
acl icqssl ssl::server_name ars.oscar.aol.com
acl icqssl ssl::server_name webim.qip.ru
acl icqssl ssl::server_name cb.icq.com
acl icqssl ssl::server_name wlogin.icq.com
acl icqssl ssl::server_name storage.qip.ru
acl icqssl ssl::server_name new.qip.ru

acl icqlogin dst 178.237.20.58
acl icqlogin dst 178.237.19.84
acl icqlogin dst 94.100.186.23

ssl_bump splice children
ssl_bump splice sbol
ssl_bump splice icqlogin
ssl_bump splice icqssl icqport
ssl_bump splice icqproxy icqport

ssl_bump bump interceptedssl

ssl_bump peek step1
ssl_bump bump unauthorized
ssl_bump bump entertainmentssl
ssl_bump splice all

I'm not sure the ICQ clients use TLS, but in my previous experience
they were configured to use a proxy, and to connect through the proxy to
the login.icq.com host on port 443.
A sample log of the unsuccessful attempts:

1447400500.311 21 192.168.2.117 TAG_NONE/503 0 CONNECT
login.icq.com:443 solodnikova_k HIER_NONE/- -
1447400560.301 23 192.168.2.117 TAG_NONE/503 0 CONNECT
login.icq.com:443 solodnikova_k HIER_NONE/- -
1447400624.832 359 192.168.2.117 TCP_TUNNEL/200 0 CONNECT
login.icq.com:443 solodnikova_k HIER_DIRECT/178.237.20.58 -
1447400631.038 108 192.168.2.117 TCP_TUNNEL/200 0 CONNECT
login.icq.com:443 solodnikova_k HIER_DIRECT/178.237.20.58 -

Thanks.
Eugene.


[squid-users] sslBump and intercept

2015-11-12 Thread Eugene M. Zheganin
Hi.

This question is not directly related to my yesterday's one.

I decided to intercept the HTTPS traffic on my production squids from
proxy-unaware clients, to be able to tell them there's a proxy and they
should configure one.
So I'm doing it like this (the forwarding setup using FreeBSD pf is not
shown here):

===Cut===
acl unauthorized proxy_auth stringthatwillnevermatch
acl step1 at_step sslBump1

https_port 127.0.0.1:3131 intercept ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem
https_port [::1]:3131 intercept ssl-bump
cert=/usr/local/etc/squid/certs/squid.cert.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
dhparams=/usr/local/etc/squid/certs/dhparam.pem

ssl_bump peek step1
ssl_bump bump unauthorized
ssl_bump splice all
===Cut===

Almost everything works, except that squid for some reason generates
certificates in this case for IP addresses, not names, so the browser
shows a warning about the certificate being valid only for the IP, and
not the name.

Am I doing something wrong ?

Thanks.
Eugene.


Re: [squid-users] sslBump and intercept

2015-11-12 Thread Eugene M. Zheganin
Hi.

On 12.11.2015 17:04, Steve Hill wrote:
>
> proxy_auth won't work on intercepted traffic and will therefore always
> return false, so as far as I can see you're always going to peek and
> then splice.  i.e. you're never going to bump, so squid should never
> be generating a forged certificate.
Yup, I know that; my fault is that I forgot to mention it, and to
explain that this sample config contains parts that handle user
authentication. So, yes, I'm aware that intercepted SSL traffic will
look anonymous to squid, and that's the idea.
>
> You say that Squid _is_ generating a forged certificate, so something
> else is going on to cause it to do that.  My first guess is that Squid
> is generating some kind of error page due to some http_access rules
> which you haven't listed, and is therefore bumping.
This is exactly what's happening.
>
> Two possibilities spring to mind for the certificate being for the IP
> address rather than for the name:
> 1. The browser isn't bothering to include an SNI in the SSL handshake
> (use wireshark to confirm).  In this case, Squid has no way to know
> what name to stick in the cert, so will just use the IP instead.
> 2. The bumping is happening in step 1 instead of step 2 for some
> reason.  See:  http://bugs.squid-cache.org/show_bug.cgi?id=4327
Thanks, I'll try to investigate.

Eugene.


Re: [squid-users] sslBump and intercept

2015-11-12 Thread Eugene M. Zheganin
Hi,

On 12.11.2015 17:48, Yuri Voinov wrote:

> More probably this is bug
> http://bugs.squid-cache.org/show_bug.cgi?id=4188.
>
The page says it's fixed, and applied to 3.5. If it's already in 3.5.11,
then that's not it - I just tested 3.5.11, and the behavior is the same.

Thanks.
Eugene.


[squid-users] sslBump somehow interferes with authentication

2015-11-11 Thread Eugene M. Zheganin
Hi.

I have configured a simple ssl peek/splice on squid 3.5.10 for some simple
cases, but in my production, where the configs are complicated, it doesn't
work as expected - somehow it interferes with authentication.

Suppose we have a config like:

===Cut===
acl freetime time MTWHF 18:00-24:00

acl foo dst 192.168.0.0/16
acl bar dstdomain .bar.tld

acl users proxy_auth steve
acl users proxy_auth mike
acl users proxy_auth bob

acl unauthorized proxy_auth stringthatwillnevermatch

acl block dstdomain "block.acl"
acl blockssl ssl::server_name "block.acl"

http_access allow foo
http_access allow bar

http_access deny unauthorized

http_access allow blockssl users freetime
http_access allow block users freetime
http_access deny blockssl users
http_access deny block users
http_access allow users
http_access deny all
===Cut===

This is part of an actually working config (with some local names
modified, just to make it easier to read). The config is straightforward:
- foo and bar are allowed without authentication
- then explicit authentication occurs ('http_access deny
unauthorized' looks redundant, and yes, the config will work without
it, but the thing is that the 'unauthorized' ACL is used to display a
specific deny_info page to users who failed to authorize; see the
one-liner after this list)
- it allows browsing some usually blocked sites during certain periods
of time, called 'freetime'
- this config is sslBump-ready: a 'blockssl' ACL exists, which matches
site names by SNI.
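
(That deny_info hookup is a single line - a sketch, with the error page
name taken from my other posted config:)

# show a custom error page to users matching the 'unauthorized' ACL
deny_info ERR_UNAUTHORIZED unauthorized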

Now I'm adding sslBump:

===Cut===
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump blockssl
ssl_bump splice all
===Cut===

As soon as I add sslBump, everything that is bumped starts being
blocked by 'http_access deny unauthorized' (everything that's spliced
works as intended). And I completely cannot understand why. Yes, I can
remove this line, but that way I'm losing the deny_info for the specific
cases when someone fails to authorize; and besides - without sslBump it
was working, right ? Please help me understand this and solve the issue.

Thanks.
Eugene.


Re: [squid-users] sslBump somehow interferes with authentication

2015-11-11 Thread Eugene M. Zheganin
Hi.

On 11.11.2015 23:44, Amos Jeffries wrote:
> Proxy-authentication cannot be performed on MITM'd traffic. That
> includes SSL-bump decrypted messages.
>
> However, unlike the other methods SSL-bump CONNECT wrapper messages in
> explicit-proxy traffic can be authenticated and their credentials
> inherited by the messages decrypted. Squid should be doing that. But
> again cannot do it for the fake/synthetic ones it generates itself on
> intercepted port 443 traffic.
>
> So the question becomes, why are foo and bar ACLs not matching?
>  http_access rules are applied separately to the CONNECT wrapper message
> and to the decrypted non-CONNECT HTTP message(s).
>
>
Yeah, completely my fault - I forgot to say which URL the user is trying
to browse and what matches when.
Once again:

===Cut===
acl freetime time MTWHF 18:00-24:00

acl foo dst 192.168.0.0/16
acl bar dstdomain .bar.tld

acl users proxy_auth steve
acl users proxy_auth mike
acl users proxy_auth bob

acl unauthorized proxy_auth stringthatwillnevermatch

acl block dstdomain "block.acl"
acl blockssl ssl::server_name "block.acl"

http_access allow foo
http_access allow bar

http_access deny unauthorized

http_access allow blockssl users freetime
http_access allow block users freetime
http_access deny blockssl users
http_access deny block users
http_access allow users
http_access deny all
===Cut===

So, the user starts his browser and opens the URL 'https://someurl'.
This URL matches both the 'block' and 'blockssl' ACLs - one I created for,
you know... usual matching, and one for sslBump, since dstdomain ACLs
cannot work there. So, the main idea here is to actually show some
information to the user when he's trying to visit some blocked site via
TLS and that site isn't allowed - because all the user sees in such a
situation is various browser-dependent error pages, like "Proxy server
is refusing connections" (Firefox) or some other brief error (I cannot
remember it exactly) in Chrome - so the user thinks it's a technical error
and starts bothering tech support. Can this goal be achieved for a
configuration with user authentication ? ACL 'foo' and ACL 'bar' don't
match 'someurl' because they are created to match traffic that is
allowed to all proxy users, regardless of their authentication; I
listed these ACLs here to give a proper representation of my ACL structure
- there's a part without authentication, and there's a part with it.

Thanks.
Eugene.


Re: [squid-users] sslBump somehow interferes with authentication

2015-11-11 Thread Eugene M. Zheganin
Hi.

On 12.11.2015 0:06, Eugene M. Zheganin wrote:
> So, the user starts his browser and opens the URL 'https://someurl'.
> This URL matches both the 'block' and 'blockssl' ACLs - one I created for,
> you know... usual matching, and one for sslBump, since dstdomain ACLs
> cannot work there. So, the main idea here is to actually show some
> information to the user when he's trying to visit some blocked site via
> TLS and that site isn't allowed - because all the user sees in such a
> situation is various browser-dependent error pages, like "Proxy server
> is refusing connections" (Firefox) or some other brief error (I cannot
> remember it exactly) in Chrome - so the user thinks it's a technical error
> and starts bothering tech support. Can this goal be achieved for a
> configuration with user authentication ? ACL 'foo' and ACL 'bar' don't
> match 'someurl' because they are created to match traffic that is
> allowed to all proxy users, regardless of their authentication; I
> listed these ACLs here to give a proper representation of my ACL structure
> - there's a part without authentication, and there's a part with it.
>
Follow-up: the traffic isn't intercepted proxy traffic; it's traffic
between a browser and a proxy configured in that browser. If I remove
the line

http_access deny unauthorized

I receive sslBumped traffic from the sites that match the
'blockssl' ACL, and this traffic goes through the authentication chain.
The question is: why does the line above make the whole scheme fall apart ?

Thanks.
Eugene.


[squid-users] mmap() in squid

2015-03-27 Thread Eugene M. Zheganin
Hi.

Squid has used the mmap() call since 3.4.x, and on FreeBSD mmap() has one
specific flag - MAP_NOSYNC - which prevents dirtied pages from being
flushed to disk:

MAP_NOSYNC       Causes data dirtied via this VM map to be flushed to
                 physical media only when necessary (usually by the
                 pager) rather than gratuitously. Typically this
                 prevents the update daemons from flushing pages dirtied
                 through such maps and thus allows efficient sharing of
                 memory across unassociated processes using a file-
                 backed shared memory map. Without this option any VM
                 pages you dirty may be flushed to disk every so often
                 (every 30-60 seconds usually) which can create
                 performance problems if you do not need that to occur
                 (such as when you are using shared file-backed mmap
                 regions for IPC purposes). Note that VM/file system
                 coherency is maintained whether you use MAP_NOSYNC or
                 not. This option is not portable across UNIX platforms
                 (yet), though some may implement the same behavior by
                 default.

                 WARNING! Extending a file with ftruncate(2), thus
                 creating a big hole, and then filling the hole by
                 modifying a shared mmap() can lead to severe file
                 fragmentation. In order to avoid such fragmentation you
                 should always pre-allocate the file's backing store by
                 write()ing zero's into the newly extended area prior to
                 modifying the area via your mmap(). The fragmentation
                 problem is especially sensitive to MAP_NOSYNC pages,
                 because pages may be flushed to disk in a totally
                 random order.

                 The same applies when using MAP_NOSYNC to implement a
                 file-based shared memory store. It is recommended that
                 you create the backing store by write()ing zero's to
                 the backing file rather than ftruncate()ing it. You
                 can test file fragmentation by observing the KB/t
                 (kilobytes per transfer) results from an ``iostat 1''
                 while reading a large file sequentially, e.g. using
                 ``dd if=filename of=/dev/null bs=32k''.

                 The fsync(2) system call will flush all dirty data and
                 metadata associated with a file, including dirty NOSYNC
                 VM data, to physical media. The sync(8) command and
                 sync(2) system call generally do not flush dirty NOSYNC
                 VM data. The msync(2) system call is obsolete since
                 BSD implements a coherent file system buffer cache.
                 However, it may be used to associate dirty VM pages
                 with file system buffers and thus cause them to be
                 flushed to physical media sooner rather than later.

Last year there was an issue with PostgreSQL, which also started using
mmap() in its 9.3 release, and it had a huge performance regression on
FreeBSD. One of the measures to fight this regression (but not the only
one) was adding MAP_NOSYNC in the postgresql port. So I decided to do the
same for my local squid. I created a patch where both occurrences of
mmap() were supplied with this flag. I've been running squid 3.4.x
patched this way for about half a year. A couple of days ago I sent
this patch to the FreeBSD ports system, and the squid port maintainer
asked me whether I'm sure squid on FreeBSD needs this. Since I'm not a
skilled programmer (though I think using mmap() with MAP_NOSYNC is a good
thing), I decided to ask here - is this flag worth bothering with, given
that squid isn't a database engine ?

Thanks.


Re: [squid-users] squid SMP and SNMP

2015-03-19 Thread Eugene M. Zheganin
Hi.

On 18.03.2015 19:02, Amos Jeffries wrote:
 Process kid3 (SMP coordinator) is attempting to respond.

 Since you configured:
   snmp_port 340${process_number}

 and the coordinator is process number 3 I think it will be using port
 3403 for that response.


Nobody is listening on that port - only the two workers' ports are bound:

[root@taiga:local/squidquotas]# netstat -an | grep udp | grep 340
udp46  0  0 *.3401 *.*
udp46  0  0 *.3402 *.*
[root@taiga:local/squidquotas]#

Eugene.


[squid-users] squid SMP and SNMP

2015-03-18 Thread Eugene M. Zheganin
Hi.

I'm gathering statistics from squid using SNMP. When I use a single
process everything is fine, but when it comes to multiple workers, SNMP
doesn't work - I get a timeout when trying to read data with snmpwalk.

I'm using the following tweak:

snmp_port 340${process_number}

Both workers do bind to ports 3401 and 3402, but then I get this
timeout.
Does anyone have a success story about squid SMP and SNMP ?

I wrote a message about this problem a year or so ago, back on 3.3.x,
but the situation hasn't changed.
Should I report this as a bug ?

Thanks.
Eugene.





Re: [squid-users] squid SMP and SNMP

2015-03-18 Thread Eugene M. Zheganin
Hi.

On 18.03.2015 16:04, Amos Jeffries wrote:

 SNMP is on the list of SMP-aware features.

 The worker receiving the SNMP request will contact other workers to
 fetch the data for producing the SNMP response. This may take some time.

Yeah, but it seems like that doesn't happen. Plus, I'm getting errors
in cache.log on each attempt:

[root@taiga:etc/squid]# snmpwalk localhost:3402 1.3.6.1.4.1.3495.1.2.1.0
Timeout: No Response from localhost:3402

and in the log:

2015/03/18 18:48:26 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:46682: (22) Invalid argument
2015/03/18 18:48:49 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:36623: (22) Invalid argument
2015/03/18 18:48:50 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:36623: (22) Invalid argument
2015/03/18 18:48:51 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:36623: (22) Invalid argument
2015/03/18 18:48:52 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:36623: (22) Invalid argument
2015/03/18 18:48:53 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:36623: (22) Invalid argument
2015/03/18 18:48:54 kid3| comm_udp_sendto: FD 34, (family=2)
127.0.0.1:36623: (22) Invalid argument

Thanks.
Eugene.

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64

2015-01-15 Thread Eugene M. Zheganin
Hi.

On 12.01.2015 19:06, Amos Jeffries wrote:

 I am confident that those types of leaks do not exist at all in Squid 3.4.

 These rounds of memory exhaustion problems are caused by pseudo-leaks,
 where Squid incorrectly holds onto memory (it has not forgotten it,
 though) far longer than it should.

Could you please clarify for me what the Long Strings pool is and how
I can manage its size ?
After startup the largest-consuming pool is the mem_node one, but it
usually stops growing after a few days (somewhere around the cache_mem
border; I don't know whether that's the cause or just a coincidence).
Long Strings, however, keeps rising and rising, and after some days it
becomes the largest one.

I'm using the following settings:
cache_mem 512 MB
cache_dir diskd /var/squid/cache 1100 16 256

After a few days SNMP reports that the client count is around 1700.
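
In case anyone wants to compare, I'm reading the pool sizes from the
cache manager memory report, roughly like this (the port is of course
installation-specific):

squidclient -p 3128 mgr:mem | egrep 'Long Strings|mem_node'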

Thanks.
Eugene.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64

2015-01-12 Thread Eugene M. Zheganin
Hi.

On 09.01.2015 06:12, Amos Jeffries wrote:
 Grand total:
   = 9.5 GB of RAM just for Squid.

 .. then there is whatever memory the helper programs, other software
 on the server and operating system all need.

I'm now also having a strong impression that squid is leaking memory.
Now that 3.4.x is able to handle hundreds of users over several hours,
I notice that its memory usage is constantly increasing. My patience
always ends at around 1.5 GB of memory usage, where server memory
starts to be exhausted (squid is running alongside lots of other stuff)
and I restart it. This is happening on exactly the same config 3.3.13
was running, so... I have cache_mem set to 512 MB, diskd, a
medium-sized cache_dir and lots of users. Has something changed
drastically in 3.4.x compared to 3.3.13, or is it, as it seems, a
memory leak ?

Thanks.
Eugene.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] 3.3.x - 3.4.x: huge performance regression

2015-01-12 Thread Eugene M. Zheganin
Hi.

On 12.01.2015 16:03, Eugene M. Zheganin wrote:
 Hi.

 Just to point this out in the correct thread - to all the people who
 replied here - Steve Hill has provided a patch for a 3.4.x that solves
 the most performance degradation issue. 3.4.x is still performing poorly
 comparing to the 3.3.x branch, but I guess this is due to major code
 changes. As of now my largest production installation (1.2K clients,
 300-400 active usernames) is running 3.4.9.
... and massively leaking, yeah.

Eugene.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Memory Leak Squid 3.4.9 on FreeBSD 10.0 x64

2015-01-12 Thread Eugene M. Zheganin
Hi.

On 12.01.2015 16:41, Eugene M. Zheganin wrote:
 I'm now also having a strong impression that squid is leaking memory.
 Now that 3.4.x is able to handle hundreds of users over several hours,
 I notice that its memory usage is constantly increasing. My patience
 always ends at around 1.5 GB of memory usage, where server memory
 starts to be exhausted (squid is running alongside lots of other
 stuff) and I restart it. This is happening on exactly the same config
 3.3.13 was running, so... I have cache_mem set to 512 MB, diskd, a
 medium-sized cache_dir and lots of users. Has something changed
 drastically in 3.4.x compared to 3.3.13, or is it, as it seems, a
 memory leak ?
Squid 3.4 on FreeBSD is by default compiled with the
--enable-debug-cbdata option, and when the 45th log selector is at its
default of 1, cache.log fills with cbdata memory-leak alarms. Here is
the list for the last 40 minutes, with occurrence counts:

104136 Checklist.cc:160
81438 Checklist.cc:187
177226 Checklist.cc:320
84861 Checklist.cc:45
89151 CommCalls.cc:21
22069 DiskIO/DiskDaemon/DiskdIOStrategy.cc:353
 120 UserRequest.cc:166
  29 UserRequest.cc:172
55814 clientStream.cc:235
5966 client_side_reply.cc:93
4516 client_side_request.cc:134
5568 dns_internal.cc:1131
4859 dns_internal.cc:1140
  86 event.cc:90
7770 external_acl.cc:1426
1548 fqdncache.cc:340
7467 helper.cc:856
39905 ipcache.cc:353
11880 store.cc:1611
181959 store_client.cc:154
256951 store_client.cc:337
6835 ufs/UFSStoreState.cc:333

Are those all false alarms ?
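
If they are, I assume the noise can at least be muted without
rebuilding by dropping debug section 45 to level 0 while keeping
everything else at the default (a sketch, if I understand
debug_options correctly):

debug_options ALL,1 45,0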

Thanks.
Eugene.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] 3.3.x - 3.4.x: huge performance regression

2014-10-22 Thread Eugene M. Zheganin
Hi.

I had been using the 3.4.x branch for quite some time, and it was
working just fine on small installations.
Yesterday I upgraded my largest cache installation from 3.3.13 to 3.4.8
(same config, diskd, NTLM/GSS-SPNEGO auth helpers, external helpers).
This morning I noticed that squid spikes to 100% CPU and serves almost
no traffic. A restart didn't help: squid serves pages while continuing
to consume CPU, load grows until it's at 100%, and after some time my
users are unable to open any page from the Internet. This is sad, so I
downgraded to 3.3.13. CPU consumption went back to 20-35% and
everything is back to normal.

In order to understand what was happening I did some dtrace profiling
to see what squid is busy with, on the assumption that the same number
of connect()/socket() syscalls should correspond to the same amount of
squid work; yet the results were totally different for the same number
of such syscalls.

Can anyone comment ?

Thanks.
Eugene.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] assertion failed: lm_request->waiting

2014-10-21 Thread Eugene M. Zheganin

Hi.

Is anyone else getting this too ? I get it with sad regularity:

# grep lm_request /var/log/squid/cache.log
2014/10/06 14:32:12 kid1| assertion failed: UserRequest.cc:229: lm_request->waiting
2014/10/07 16:06:10 kid1| assertion failed: UserRequest.cc:229: lm_request->waiting
2014/10/16 16:28:48 kid1| assertion failed: UserRequest.cc:229: lm_request->waiting
2014/10/17 14:32:34 kid1| assertion failed: UserRequest.cc:229: lm_request->waiting
2014/10/17 14:33:09 kid1| assertion failed: UserRequest.cc:229: lm_request->waiting
2014/10/21 12:25:18 kid1| assertion failed: UserRequest.cc:229: lm_request->waiting


each time squid crashes.
I filed http://bugs.squid-cache.org/show_bug.cgi?id=4104, but no one
got interested.
I admit this happens on only one of many installations. Perhaps
someone knows a workaround ?


Thanks.
Eugene.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid, Kerberos and FireFox (Was: Re: leaking memory in squid 3.4.8 and 3.4.7.)

2014-10-19 Thread Eugene M. Zheganin
Hi.

On 19.10.2014 13:32, Victor Sudakov wrote:

 Hopefully I can interest our Windows admin to enable Kerberos event
 logging per KB262177.

 But for the present I have found an ugly workaround. In squid's keytab, I
 created another principal called 'squiduser' with the same hex key and
 kvno as that of the principal 'HTTP/proxy.sibptus.transneft.ru.'

(This may sound like a dumb question, but anyway) Did you initially map
any AD user to the SPN with a hostname that clients know your proxy under ?
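
(On the AD side that mapping is usually done with something like the
following; the account name here is a placeholder:

setspn -A HTTP/proxy.sibptus.transneft.ru squiduser

or it happens implicitly when the keytab is generated with ktpass and
its -mapuser option.)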

Eugene.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid, Kerberos and FireFox (Was: Re: leaking memory in squid 3.4.8 and 3.4.7.)

2014-10-16 Thread Eugene M. Zheganin
Hi.

On 17.10.2014 11:02, Victor Sudakov wrote:

 I am attaching a traffic dump.

 Please look at Frame No. 36, where a ticket is requested for
 HTTP/proxy.sibptus.transneft.ru, and then at Frame No. 39, where
 the ticket is granted, but for the wrong principal name.

The thing is, a valid exchange should not and does not contain the
KRB5KRB_AP_ERR_MODIFIED error, and yours does. This indicates something
is wrong between these two hosts (as I understand it, 10.14.134.4 is a
Windows server and .122 is a workstation). You need to investigate on
your DC what's happening. These are probably etype errors (or maybe
not). If your DC is really w2k (not w2k3 or w2k8) and the workstation
is of a different generation, this can happen. Also, lots of howtos
spread around the Internet make an engineer believe that he should
create the keytab with only one encryption type for squid, instead of
creating the keytab with all of the ciphers available on the DC. This
can also lead to complicated situations.
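
On the Windows side that usually means not restricting the crypto
option when generating the keytab; something like this (placeholder
names, and "All" rather than a single etype):

ktpass -princ HTTP/proxy.example.com@EXAMPLE.COM -mapuser squiduser -crypto All -ptype KRB5_NT_PRINCIPAL -pass * -out squid.keytab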

There's also a decent article here:
http://blogs.technet.com/b/askds/archive/2008/06/11/kerberos-authentication-problems-service-principal-name-spn-issues-part-3.aspx

It could help you, as it helped me one day.

Eugene.
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users