RE: [squid-users] squid 3.2 and error_map equivalent

2013-03-26 Thread Martin Sperl
Hi Amos!

I hear what you are saying (especially about the http_response_access), but:

The thing with the config I sent is that as soon as the icap-service is
down, the http_access deny all triggers and I get status 403 plus, in the
body, "you are not allowed to access".
But with that http_access deny all removed/commented out, I get status 500
instead, plus the corresponding "trouble with ICAP" error message in the body.

So your explanation that http_access gets handled BEFORE icap gets called is
not 100% true, as otherwise I would see the 500 error in both cases.
The thing is that it might get evaluated a second time on an error condition,
then with modified request data, and then the ACLs no longer match...

Another observation is that if I have conditional logging enabled (based on
ACLs), then the 403 produced by the deny all does NOT get logged to that
file - another indication that something strange happens to those ACLs on an
ICAP error.

I will try to enable full debug logging and will report on the ACL
matching that happens when icap is down...
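(For reference, a sketch of how I plan to do that - the ACL debug section
number is an assumption on my part:)

# toggle full debugging on the running squid (run it again to toggle back off)
squid -k debug
# or, more selectively, raise only the access-control section in squid.conf:
debug_options ALL,1 28,9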

Ciao,
Martin


P.S.: Here again the relevant parts of the config:
# some ACLs:
acl HTTP proto HTTP
acl GETPOST method GET
acl GETPOST method POST

# imagine several of these blocks
icap_service mib_request_XX  reqmod_precache  0 icap://127.0.0.1:1344/XX/reqmod
icap_service mib_response_XX respmod_precache 0 icap://127.0.0.1:1344/XX/respmod

adaptation_service_set modify_request_XX  mib_request_XX
adaptation_service_set modify_response_XX mib_response_XX

acl hosts_allowed_XX dstdomain /file/with/list/of/vhosts.txt

http_access allow HTTP GETPOST hosts_allowed_XX

adaptation_access modify_request_XX  allow HTTP GETPOST hosts_allowed_XX
adaptation_access modify_response_XX allow HTTP GETPOST response_adaption_XX

# deny everything else - the final line.
# In the case of the ICAP port being down this also matches and returns
# a DENY/403 instead of 500 - remove this line and I get 500.
http_access deny all

-Original Message-

* ICAP errors *do not* map directly to HTTP errors. Usually one 502
means *multiple* ICAP services are having problems - possibly very
different problems. Trying to make one error page which represents *the*
issue ... results in 502 Bad Gateway.

* http_access is all tested well before ICAP gets involved. So this is
the wrong place to integrate anything about status codes, however you
slice it.


This message and the information contained herein is proprietary and 
confidential and subject to the Amdocs policy statement,
you may review at http://www.amdocs.com/email_disclaimer.asp


RE: [squid-users] squid 3.2 and error_map equivalent

2013-03-26 Thread Martin Sperl
Stupid me - I forgot the following ACLs:
acl error500 http_status 500
http_reply_access deny error500
(but I had removed the deny_info error500 component).

And that http_reply_access triggers a reset of the previous ICAP_ERROR and
moves it to ACCESS_DENIED instead...
Removing the http_reply_access, I get the expected error 500 while retaining
the http_access deny all.
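(In config terms, the combination that now behaves as expected - a sketch of
the above, with the reply-path deny left out:)

# request path: keep the final catch-all
http_access deny all
# reply path: do NOT deny on status 500, or the ICAP_ERROR result gets
# replaced by ACCESS_DENIED and the 500 page is masked:
#acl error500 http_status 500
#http_reply_access deny error500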

Which in the end means that squid is generating the error pages twice (as
seen in the debug log) - once for the ICAP error and then for the
ACCESS_DENIED on response delivery.

So this means that using the deny_info trick as above essentially makes us
lose the info on the ICAP error - with no means to recover it.

OK, the debug.log shows that ICAP writes a note, but as there are no ACLs
that can trigger on notes, that does not help either.
Nor is there an ACL that could match on those error fields which we could
use instead...

So in the end I have no means to identify the error with deny_info, and I am
left in a state where I cannot modify the redirect to include the root cause
and adapt the page based on it...

Any more ideas, besides a patch, to help achieve what I need? I think I have
now covered everything that is possible (and learned a lot along the way)...

Martin





RE: [squid-users] squid 3.2 and error_map equivalent

2013-03-26 Thread Martin Sperl
Hi Amos!

I had a final idea, which works: checking the response header for squid
errors and matching them in the http_reply_access path...

acl ICAPERROR rep_header X-Squid-Error ERR_ICAP
http_reply_access deny ICAPERROR

BUT, I would have to do this kind of thing for EVERY possible error
(errors/templates currently holds 41 such variations) - which would increase
processing overhead tremendously...
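(To illustrate the scale, the enumeration would start like this - the ACL
names are mine, and the ERR_* tokens are standard error template names:)

acl ICAPERROR rep_header X-Squid-Error ERR_ICAP
acl DNSFAIL   rep_header X-Squid-Error ERR_DNS_FAIL
acl CONNFAIL  rep_header X-Squid-Error ERR_CONNECT_FAIL
http_reply_access deny ICAPERROR
http_reply_access deny DNSFAIL
http_reply_access deny CONNFAIL
# ...and so on, one pair for each of the ~41 templates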

So what if I proposed a patch for 3.2 that implements either:
* keeping the older squid error info on an ACL deny, so that it could be used
in the deny_info config as a new pattern - say %C for the historic error
or
* allowing several variables in error_directory to be selected? (this would
potentially be a variation on the language selector for error pages, but it
seems a bit more complex and could inflict collateral damage)

Would you accept that? (I would tend towards the first, as it seems simpler,
even without a code review.)

Thanks,
Martin


Re: [squid-users] squid 3.2 and error_map equivalent

2013-03-26 Thread Amos Jeffries

On 26/03/2013 9:57 p.m., Martin Sperl wrote:

Hi Amos!

I had a final idea, which works: checking the response header for squid
errors and matching them in the http_reply_access path...

acl ICAPERROR rep_header X-Squid-Error ERR_ICAP
http_reply_access deny ICAPERROR

BUT, I would have to do this kind of thing for EVERY possible error
(errors/templates currently holds 41 such variations) - which would increase
processing overhead tremendously...

So what if I proposed a patch for 3.2 that implements either:
* keeping the older squid error info on an ACL deny, so that it could be used
in the deny_info config as a new pattern - say %C for the historic error
or
* allowing several variables in error_directory to be selected? (this would
potentially be a variation on the language selector for error pages, but it
seems a bit more complex and could inflict collateral damage)

Would you accept that? (I would tend towards the first, as it seems simpler,
even without a code review.)


I'm not accepting patches for new error page macros unless there is a
clear generic need for them. In squid-3 the plan is to migrate the error
page macros to the logformat codes, which are a lot more flexible and
have better coverage of state information. It just has not been
completed yet.


(That is not to say you can't use it internally until the macro update
is done. Just that it's very unlikely to be accepted upstream.)



Thanks,
Martin

-Original Message-
From: Martin Sperl

Stupid me - I forgot the following ACLs:
acl error500 http_status 500
http_reply_access deny error500
(but I had removed the deny_info error500 component).

And that http_reply_access triggers a reset of the previous ICAP_ERROR and
moves it to ACCESS_DENIED instead...
Removing the http_reply_access, I get the expected error 500 while retaining
the http_access deny all.

Which in the end means that squid is generating the error pages twice (as
seen in the debug log) - once for the ICAP error and then for the
ACCESS_DENIED on response delivery.

So this means that using the deny_info trick as above essentially makes us
lose the info on the ICAP error - with no means to recover it.


Which is what I understood you wanted in the beginning, since that is
exactly what error_map does.



OK, the debug.log shows that ICAP writes a note, but as there are no ACLs
that can trigger on notes, that does not help either.
Nor is there an ACL that could match on those error fields which we could
use instead...


Yes, we are still integrating notes for sharing between most components
of Squid. This will be possible once the error page macros are merged
with logformat, which already supports %note. But that is some time off.
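(For reference, a sketch of what the logformat side can already express - the
note key naming is not something I have verified:)

logformat adaptlog %ts.%03tu %>a %Ss/%03>Hs %rm %ru note=%note
access_log /var/log/squid/adapt.log adaptlog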




So in the end I have no means to identify the error with deny_info, and I am
left in a state where I cannot modify the redirect to include the root cause
and adapt the page based on it...

Any more ideas, besides a patch, to help achieve what I need? I think I have
now covered everything that is possible (and learned a lot along the way)...


Amos




[squid-users] investigate squid eating 100% CPU

2013-03-26 Thread Youssef Ghorbal
Hello,

We have a Squid 3.1.23 running on FreeBSD 8.3 (amd64).
The proxy is used to handle web access for ~2500 workstations, in
pure proxy/filter (squidGuard) mode with no cache (all disk caching is
disabled). It's not a transparent/intercepting proxy, just a plain explicit
proxy.

What we see is that the squid process is using 100% of CPU (userland
CPU usage, not kernel) all the time, even late at night when traffic
is minimal.

What I'm looking for is some advice on how to track down what is
causing this CPU misbehaviour. Maybe it's some config option not
suitable for this kind of setup, maybe a bug, etc.
What tools/methodology can I use to profile the running process?
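(For context, a couple of generic FreeBSD-side starting points - these are
assumptions about available tooling, not something already tried here:)

# trace the syscalls of the running process:
truss -p <squid-pid>
# count CPU events with hwpmc, if the kernel module is available
# (the event alias may differ per CPU):
pmcstat -p instructions -t <squid-pid>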

Any help/suggestion would be really appreciated.

Youssef Ghorbal

squid -v
Squid Cache: Version 3.1.23
configure options:  '--with-default-user=squid' '--bindir=/usr/local/sbin' 
'--sbindir=/usr/local/sbin' '--datadir=/usr/local/etc/squid' 
'--libexecdir=/usr/local/libexec/squid' '--localstatedir=/var/squid' 
'--sysconfdir=/usr/local/etc/squid' '--with-logdir=/var/log/squid' 
'--with-pidfile=/var/run/squid/squid.pid' '--enable-removal-policies=lru heap' 
'--disable-linux-netfilter' '--disable-linux-tproxy' '--disable-epoll' 
'--disable-translation' '--disable-ecap' '--disable-loadable-modules' 
'--enable-auth=basic digest negotiate ntlm' '--enable-basic-auth-helpers=DB 
NCSA PAM MSNT SMB squid_radius_auth LDAP YP' 
'--enable-digest-auth-helpers=password ldap' 
'--enable-external-acl-helpers=ip_user session unix_group wbinfo_group 
ldap_group' '--enable-ntlm-auth-helpers=smb_lm' '--enable-storeio=ufs diskd 
aufs' '--enable-disk-io=AIO Blocking DiskDaemon DiskThreads' 
'--enable-delay-pools' '--enable-icap-client' '--enable-kqueue' 
'--with-large-files' '--enable-stacktraces' '--disable-optimizations' 
'--prefix=/usr/local' '--mandir=/usr/local/man' '--infodir=/usr/local/info/' 
'--build=amd64-portbld-freebsd8.3' 'build_alias=amd64-portbld-freebsd8.3' 
'CC=cc' 'CFLAGS=-pipe -I/usr/local/include -g -g -DLDAP_DEPRECATED' 'LDFLAGS= 
-L/usr/local/lib' 'CPPFLAGS=' 'CXX=c++' 'CXXFLAGS=-pipe -I/usr/local/include -g 
-g -DLDAP_DEPRECATED' 'CPP=cpp' 
--with-squid=/wrkdirs/usr/ports/www/squid31/work/squid-3.1.23 
--enable-ltdl-convenience



Re: [squid-users] investigate squid eating 100% CPU

2013-03-26 Thread Alexandre Chappaz
Hi,

You can activate full debugging by launching
squid -k debug
with the service running, then check what comes into cache.log.

squid -k parse will audit your config file. Look for WARNING in the
output of this command.

The cache manager can be useful to see the actual activity of your squid:

squidclient localhost mgr:5min

gives you the last 5 minutes of stats (see if the number of req/s is
coherent with what you expect).
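A couple of other cache manager reports can help too (same squidclient
invocation - a sketch):

squidclient localhost mgr:info     # totals since start: fd usage, CPU time
squidclient localhost mgr:events   # the internal event queue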


Good luck
Alex







Fwd: [squid-users] Re: Re: kerberos auth failing behind a load balancer

2013-03-26 Thread Sean Boran
Hi,

FYI ... I got the two squids working behind the (Kemp) load balancer
with kerberos auth.

Procedure:
0. myproxy.vptt.ch points to the IP of the load balancer. This is
referenced in wpad.dat or browser settings. Squid runs on port 80, so
the URL of the proxy is http://myproxy.ch:80

1. create an AD service account
  let's call it my-kerb
2. add an SPN for the LB to that AD account. I did this on Windows:
setspn -S http/myproxy.ch my-kerb

3. create a keytab on each squid
rm /etc/krb5.keytab
net ads keytab CREATE HTTP -U my-kerb

ktutil
ktutil:  rkt /etc/krb5.keytab
ktutil:  addent -password -p HTTP/myproxy.ch -k 5 -e rc4-hmac   (use the my-kerb password)
ktutil:  wkt /etc/krb5.keytab

chmod 644 /etc/krb5.keytab   (or use a group to allow the squid user
to read it).
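
As a sanity check - not part of the original procedure, but a common
follow-up - the keytab can be verified along these lines:

klist -k /etc/krb5.keytab                    # list principals and kvno
kinit -kt /etc/krb5.keytab HTTP/myproxy.ch   # confirm the key actually works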


Regards,

Sean Boran


Re: [squid-users] investigate squid eating 100% CPU

2013-03-26 Thread FredB

Are you using delay_pools?



Re: [squid-users] investigate squid eating 100% CPU

2013-03-26 Thread Amos Jeffries
The first step in debugging any problem like this is to upgrade to the 
latest version and see if it has been resolved.

The current latest is Squid-3.3.3.

Amos







[squid-users] Squid-3.3.3 fails to compile..

2013-03-26 Thread Odhiambo Washington
On FreeBSD 9.
Does anyone know why my compile fails, viz:

mv -f .deps/Address.Tpo .deps/Address.Plo
/bin/sh ../../libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I../.. -I../../include -I../../lib -I../../src -I../../include -I/usr/include -I/usr/include -I../../libltdl -I/usr/include -I/usr/local/include/libxml2 -I/usr/include -I/usr/include -I/usr/local/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -I/usr/local/include -MT Intercept.lo -MD -MP -MF .deps/Intercept.Tpo -c -o Intercept.lo Intercept.cc
libtool: compile:  g++ -DHAVE_CONFIG_H -I../.. -I../../include -I../../lib -I../../src -I../../include -I/usr/include -I/usr/include -I../../libltdl -I/usr/include -I/usr/local/include/libxml2 -I/usr/include -I/usr/include -I/usr/local/include/libxml2 -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT -g -O2 -I/usr/local/include -MT Intercept.lo -MD -MP -MF .deps/Intercept.Tpo -c Intercept.cc -fPIC -DPIC -o .libs/Intercept.o
Intercept.cc: In member function 'bool Ip::Intercept::IpfInterception(const Comm::ConnectionPointer &, int)':
Intercept.cc:210: error: 'enter_suid' was not declared in this scope
Intercept.cc:217: error: 'leave_suid' was not declared in this scope
gmake[3]: *** [Intercept.lo] Error 1
gmake[3]: Leaving directory `/usr/home/wash/Tools/Squid/3.3/squid-3.3.3-20130326-r12517/src/ip'
gmake[2]: *** [all-recursive] Error 1
gmake[2]: Leaving directory `/usr/home/wash/Tools/Squid/3.3/squid-3.3.3-20130326-r12517/src'
gmake[1]: *** [all] Error 2
gmake[1]: Leaving directory `/usr/home/wash/Tools/Squid/3.3/squid-3.3.3-20130326-r12517/src'
gmake: *** [all-recursive] Error 1



My configure options:

#!/bin/sh
./configure --prefix=/opt/squid33 \
--enable-removal-policies="lru heap" \
--disable-linux-netfilter \
--disable-linux-tproxy \
--disable-epoll \
--enable-auth \
--enable-basic-auth-helpers="DB NCSA PAM MSNT YP PAM POP3 SMB SSPI MSNT" \
--enable-digest-auth-helpers=password \
--enable-external-acl-helpers="ip_user session unix_group wbinfo_group file_userip eDirectory_userip" \
--enable-ntlm-auth-helpers="smb_lm SSPI" \
--with-pthreads \
--enable-storeio="ufs diskd aufs" \
--enable-delay-pools \
--enable-snmp \
--with-openssl=/usr \
--enable-forw-via-db \
--enable-cache-digests \
--enable-wccpv2 \
--enable-referer-log \
--enable-useragent-log \
--enable-arp-acl \
--enable-follow-x-forwarded-for \
--with-large-files \
--enable-large-cache-files \
--enable-err-languages="English French" \
--enable-default-err-language=English \
--enable-esi \
--enable-kqueue \
--enable-icap-client \
--enable-kill-parent-hack \
--enable-ssl \
--enable-leakfinder \
--enable-ssl-crtd \
--enable-url-rewrite-helpers \
--enable-xmalloc-statistics \
--enable-stacktraces \
--enable-auth-negotiate="SSPI kerberos" \
--enable-zph-qos \
--enable-eui \
--enable-pf-transparent \
--enable-ipf-transparent

--
Best regards,
Odhiambo WASHINGTON,
Nairobi,KE
+254733744121/+254722743223
I can't hear you -- I'm using the scrambler.


Re: [squid-users] investigate squid eating 100% CPU

2013-03-26 Thread Youssef Ghorbal
On Mar 26, 2013, at 1:50 PM, FredB fredbm...@free.fr wrote:

 
 Are you using delay_pools?

Nope, we are not using delay_pools.


Re: [squid-users] investigate squid eating 100% CPU

2013-03-26 Thread Youssef Ghorbal
The current FreeBSD ports available for squid are squid31 and squid32.
I'll be able to upgrade to the latest 3.2, but not further.
Youssef
-
On Mar 26, 2013, at 1:50 PM, Amos Jeffries squ...@treenet.co.nz wrote:

 The first step in debugging any problem like this is to upgrade to the latest 
 version and see if it has been resolved.
 The current latest is Squid-3.3.3.
 
 Amos
 


[squid-users] Squid Child process restarting....

2013-03-26 Thread Farooq Bhatti
Hi All,

My squid is restarting due to the following, as shown in the cache log.

2013/03/26 20:14:02| TunnelStateData::Connection::error: FD 1767: read/write
failure: (32) Broken pipe
2013/03/26 20:14:12| TunnelStateData::Connection::error: FD 3896: read/write
failure: (32) Broken pipe
2013/03/26 20:14:15| Reconfiguring Squid Cache (version 3.1.10)...
2013/03/26 20:14:15| FD 14 Closing HTTP connection
2013/03/26 20:14:15| FD 17 Closing HTTP connection
2013/03/26 20:14:15| assertion failed: disk.cc:377: "fd >= 0"

And in message log I got this:

Mar 26 20:14:15 stdproxy squid[1960]: Squid Parent: child process 23669
exited due to signal 6 with status 0
Mar 26 20:14:18 stdproxy squid[1960]: Squid Parent: child process 24168
started

I am using Squid Cache: Version 3.1.10

Please help to resolve this issue.

BR
Farooq



Re: [squid-users] investigate squid eating 100% CPU

2013-03-26 Thread Youssef Ghorbal
On Mar 26, 2013, at 1:33 PM, Alexandre Chappaz alexandrechap...@gmail.com 
wrote:

 Hi,
 
 You can activate full debugging by launching
 squid -k debug
 with the service running, then check what comes into cache.log.

I'll give it a try.
How do I stop the debug, by the way? Just squid -k debug again?

 
 squid -k parse will audit your config file. Look for WARNING in the
 output of this command.

This command does not return any warnings.

 The cache manager can be useful to see the actual activity of your squid:
 
 squidclient localhost mgr:5min
 
 gives you the last 5 minutes of stats (see if the number of req/s is
 coherent with what you expect).


Below is the output of mgr:5min.
It shows that we are at around 168 req/s,
for a CPU usage of around 99%.
I don't think 160 req/s is such a big number that it can explain full CPU
time.



sample_start_time = 1364316830.58546 (Tue, 26 Mar 2013 16:53:50 GMT)
sample_end_time = 1364317130.157822 (Tue, 26 Mar 2013 16:58:50 GMT)
client_http.requests = 168.400939/sec
client_http.hits = 20.203314/sec
client_http.errors = 0.013329/sec
client_http.kbytes_in = 479.431347/sec
client_http.kbytes_out = 4713.703475/sec
client_http.all_median_svc_time = 0.150482 seconds
client_http.miss_median_svc_time = 0.167753 seconds
client_http.nm_median_svc_time = 0.030657 seconds
client_http.nh_median_svc_time = 0.097357 seconds
client_http.hit_median_svc_time = 0.074090 seconds
server.all.requests = 148.814088/sec
server.all.errors = 0.00/sec
server.all.kbytes_in = 4349.643949/sec
server.all.kbytes_out = 466.209055/sec
server.http.requests = 132.799387/sec
server.http.errors = 0.00/sec
server.http.kbytes_in = 2908.297586/sec
server.http.kbytes_out = 339.334374/sec
server.ftp.requests = 0.013329/sec
server.ftp.errors = 0.00/sec
server.ftp.kbytes_in = 4.145295/sec
server.ftp.kbytes_out = 0.003332/sec
server.other.requests = 16.001371/sec
server.other.errors = 0.00/sec
server.other.kbytes_in = 1437.204400/sec
server.other.kbytes_out = 126.868017/sec
icp.pkts_sent = 0.00/sec
icp.pkts_recv = 0.00/sec
icp.queries_sent = 0.00/sec
icp.replies_sent = 0.00/sec
icp.queries_recv = 0.00/sec
icp.replies_recv = 0.00/sec
icp.replies_queued = 0.00/sec
icp.query_timeouts = 0.00/sec
icp.kbytes_sent = 0.00/sec
icp.kbytes_recv = 0.00/sec
icp.q_kbytes_sent = 0.00/sec
icp.r_kbytes_sent = 0.00/sec
icp.q_kbytes_recv = 0.00/sec
icp.r_kbytes_recv = 0.00/sec
icp.query_median_svc_time = 0.00 seconds
icp.reply_median_svc_time = 0.00 seconds
dns.median_svc_time = 0.058152 seconds
unlink.requests = 0.00/sec
page_faults = 0.00/sec
select_loops = 85.811603/sec
select_fds = 0.00/sec
average_select_fd_period = 0.00/fd
median_select_fds = 0.00
swap.outs = 0.00/sec
swap.ins = 0.00/sec
swap.files_cleaned = 0.00/sec
aborted_requests = 2.335894/sec
syscalls.disk.opens = 0.00/sec
syscalls.disk.closes = 0.00/sec
syscalls.disk.reads = 0.00/sec
syscalls.disk.writes = 0.00/sec
syscalls.disk.seeks = 0.00/sec
syscalls.disk.unlinks = 0.00/sec
syscalls.sock.accepts = 81.882903/sec
syscalls.sock.sockets = 68.897201/sec
syscalls.sock.connects = 65.285063/sec
syscalls.sock.binds = 68.897201/sec
syscalls.sock.closes = 131.846369/sec
syscalls.sock.reads = 982.381577/sec
syscalls.sock.writes = 1513.369196/sec
syscalls.sock.recvfroms = 57.544291/sec
syscalls.sock.sendtos = 34.428607/sec
cpu_time = 299.780455 seconds
wall_time = 300.099276 seconds
cpu_usage = 99.893761%



Re: [squid-users] investigate squid eating 100% CPU

2013-03-26 Thread Youssef Ghorbal
 the cache manager can be useful to see the actual activity of your squid:
 
 squidclient localhost mgr:5min
 
 gives you the last 5 minutes of stats (see if the number of req/s is
 coherent with what you expect)
 
 
 Below is the output of mgr:5min.
 It shows that we are at around 168 req/s,
 for a CPU usage of around 99%.
 I don't think 160 req/s is such a big number that it can explain full
 CPU time.

Forgot to mention that it sometimes drops to ~30% CPU for some periods.
For example, right now we are at:
client_http.requests = 160.370710/sec
cpu_usage = 39.305030%

Which makes me think some kind of request is driving it crazy.

[complete output here]

sample_start_time = 1364317190.222942 (Tue, 26 Mar 2013 16:59:50 GMT)
sample_end_time = 1364317490.271499 (Tue, 26 Mar 2013 17:04:50 GMT)
client_http.requests = 160.370710/sec
client_http.hits = 17.510499/sec
client_http.errors = 0.016664/sec
client_http.kbytes_in = 278.614904/sec
client_http.kbytes_out = 3589.275718/sec
client_http.all_median_svc_time = 0.092188 seconds
client_http.miss_median_svc_time = 0.097357 seconds
client_http.nm_median_svc_time = 0.000911 seconds
client_http.nh_median_svc_time = 0.016481 seconds
client_http.hit_median_svc_time = 0.000911 seconds
server.all.requests = 145.123178/sec
server.all.errors = 0.00/sec
server.all.kbytes_in = 3294.903365/sec
server.all.kbytes_out = 269.543039/sec
server.http.requests = 130.565534/sec
server.http.errors = 0.00/sec
server.http.kbytes_in = 2860.440352/sec
server.http.kbytes_out = 179.021024/sec
server.ftp.requests = 0.00/sec
server.ftp.errors = 0.00/sec
server.ftp.kbytes_in = 0.00/sec
server.ftp.kbytes_out = 0.00/sec
server.other.requests = 14.557644/sec
server.other.errors = 0.00/sec
server.other.kbytes_in = 434.463013/sec
server.other.kbytes_out = 90.525348/sec
icp.pkts_sent = 0.00/sec
icp.pkts_recv = 0.00/sec
icp.queries_sent = 0.00/sec
icp.replies_sent = 0.00/sec
icp.queries_recv = 0.00/sec
icp.replies_recv = 0.00/sec
icp.replies_queued = 0.00/sec
icp.query_timeouts = 0.00/sec
icp.kbytes_sent = 0.00/sec
icp.kbytes_recv = 0.00/sec
icp.q_kbytes_sent = 0.00/sec
icp.r_kbytes_sent = 0.00/sec
icp.q_kbytes_recv = 0.00/sec
icp.r_kbytes_recv = 0.00/sec
icp.query_median_svc_time = 0.00 seconds
icp.reply_median_svc_time = 0.00 seconds
dns.median_svc_time = 0.014637 seconds
unlink.requests = 0.00/sec
page_faults = 0.00/sec
select_loops = 2316.561716/sec
select_fds = 0.00/sec
average_select_fd_period = 0.00/fd
median_select_fds = 0.00
swap.outs = 0.00/sec
swap.ins = 0.00/sec
swap.files_cleaned = 0.00/sec
aborted_requests = 2.159650/sec
syscalls.disk.opens = 0.00/sec
syscalls.disk.closes = 0.00/sec
syscalls.disk.reads = 0.00/sec
syscalls.disk.writes = 0.00/sec
syscalls.disk.seeks = 0.00/sec
syscalls.disk.unlinks = 0.00/sec
syscalls.sock.accepts = 105.942852/sec
syscalls.sock.sockets = 66.179288/sec
syscalls.sock.connects = 62.956477/sec
syscalls.sock.binds = 66.179288/sec
syscalls.sock.closes = 124.793135/sec
syscalls.sock.reads = 1316.883520/sec
syscalls.sock.writes = 1770.096831/sec
syscalls.sock.recvfroms = 58.203913/sec
syscalls.sock.sendtos = 30.974986/sec
cpu_time = 117.934176 seconds
wall_time = 300.048557 seconds
cpu_usage = 39.305030%



Re: [squid-users] investigate squid eating 100% CPU

2013-03-26 Thread Youssef Ghorbal

Forgot to ask: are there any well-known squid CPU-bound operations I can
start to focus on in order to narrow down the problem?
ACL checks?
Peer selection?

Youssef

[squid-users] Squid 3 NTLM , RPC over HTTPS, multi certs

2013-03-26 Thread Damir Reic
I can't find thorough info about what is implemented in squid 3, so I would
like to know whether the following are implemented:

1) Sharepoint from outside, with squid proxy acting as http proxy with NTLM
support
2) Outlook Anywhere - RPC over HTTPS with NTLM auth
3) Can I use multiple SSL certificates for the proxy, like I can do in Apache?

Thanks!




Re: [squid-users] investigate squid eating 100% CPU

2013-03-26 Thread Squidblacklist
Consider this: you do not need dansguardian to use
blacklists. I know that's not really addressing your issue; I just
thought I would mention it since I host http://squidblacklist.org



-
Signed,

Fix Nichols

http://www.squidblacklist.org


[squid-users] Happy eyeballs

2013-03-26 Thread Mark Davies
Hi,
   is there something you have to do to turn on happy eyeballs in
squid?  We are running 3.3.1 and currently there is a site
(karen.net.nz) that is advertising both v6 and v4 addresses but is not
reachable on the v6, and it's taking ages before squid serves up the
page from the v4 address.  This is what happy eyeballs is supposed to
deal with, right? But it doesn't seem to be working.

cheers
mark


[squid-users] squid qos_flows - copying mark from client side to upstream request?

2013-03-26 Thread Ed W
Hi Andy, sorry to bug you, but I finally got round to trying the
qos_flows feature, and I think my understanding is completely back to front?


What I need is to copy the packet/connection mark from the client
request and apply it to the upstream request. So, for example, I mark
clients that have passed a captive portal test with some mark; I need
that mark copied up to the requests coming from squid, so that I know
they effectively come from a validated client.


As near as I can tell, the current qos_flows applies this all backwards,
i.e. it assumes that the upstream has some mark on it, and copies this
back to the client response connection?


How tricky would it be to offer this option in both directions? Does 
anyone else have a use for this kind of feature?


Thanks

Ed W


Re: [squid-users] Happy eyeballs

2013-03-26 Thread Amos Jeffries

On 27/03/2013 12:15 p.m., Mark Davies wrote:

Hi,
is there something you have to do to turn on happy eyeballs in
squid?  We are running 3.3.1 and currently there is a site
(karen.net.nz) that is advertising both v6 and v4 addresses but is not
reachable on the v6, and it's taking ages before squid serves up the
page from the v4 address.  This is what happy eyeballs is supposed to
deal with, right? But it doesn't seem to be working.

cheers
mark


Happy eyeballs is an algorithm designed by browser people to take
advantage of the browser's end-user resources of abundant TCP sockets and
bandwidth. While it looks great to end users when done by browsers,
it actually hits very hard on the server infrastructure, where these two
resources are much more scarce.
Squid has a partial implementation of happy eyeballs, added in 3.2+, which
performs the parallel DNS lookup portion of the algorithm but does not
perform the parallel v6+v4 SYN portion, which would halve the server TCP
capacity for only rare gains (like karen).



Also, be aware that the timeout settings in Squid-3.3 connection setup have
undergone a major redesign. If you have tuned your config for reliable
responses around the 3.2-and-older squid behaviour, you would benefit from
re-tuning under the new behaviour.
 * forward_timeout is still the overall limiter on the *whole* server
contact process.
 * dns_timeout now covers the parallel v4+v6 lookups *combined*, and is
also excluded from connect_timeout.
 * connect_timeout is now *only* the TCP SYN waiting period - it is
per-connect attempt.
 * the max_retries limiter now counts individual server contact attempts -
it is effectively per-IP now instead of per-FQDN.


So what you should be seeing in 3.3.1 is that connection to karen.net.nz
takes no more than connect_timeout multiplied by the number of IPv6
addresses karen presents to you, plus the amount of time the IPv4 takes to
respond with data.
 NP: the connect_timeout default is still the old 60 seconds; you can
safely drop it to a few seconds if you need to in 3.3.
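
As a sketch, a re-tune along those lines might look like this (values purely
illustrative):

# per-attempt TCP SYN wait - low values skip dead IPv6 addresses quickly
connect_timeout 5 seconds
# combined budget for the parallel A+AAAA lookups
dns_timeout 30 seconds
# overall cap on the whole server-contact process
forward_timeout 2 minutes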


Amos



Re: [squid-users] Squid 3 NTLM , RPC over HTTPS, multi certs

2013-03-26 Thread Amos Jeffries

On 27/03/2013 7:02 a.m., Damir Reic wrote:

 I can't find thorough info about what is implemented in squid 3, so I would
 like to know whether the following are implemented:

1) Sharepoint from outside with squid proxy acting as http proxy with NTLM
support


This is very unlikely to work. ... NTLM auth's proper name is LAN Manager
authentication - this is authentication for *LAN* management. Using it
over the Internet varies from erratic success/failure to complete failure.
Squid requires some horribly nasty hacks, which greatly reduce
performance, just to relay NTLM traffic around the LAN. Requiring every
network admin in the world to also compromise good performance in order
to let your Sharepoint traffic pass through their networks is not
realistic - you will always encounter networks which require high HTTP
performance.


 ... the best thing you can do is upgrade to Negotiate/Kerberos
instead of wasting time trying to get NTLM working on the WAN. It still
requires some performance reduction, but has nowhere near as many
high-impact problems as NTLM.




2) Outlook anywhere - RPC over HTTPS  with NTLM auth


#1 RPC is a protocol using HTTP message structure and ports. It is not 
explicitly implemented by Squid but since it uses HTTP messaging 
structure Squid handles it as HTTP.


However, that is dependent on exactly which squid 3 version you are
talking about. HTTP/1.1 feature support has been progressively added
from Squid-2.6 onwards and finally reached sufficient feature
capabilities for 3.2+ to advertise themselves as HTTP/1.1 enabled. The
impact of this on RPC behaviour has at times been problematic, as RPC
services required features not present in older Squid, or failed to
properly support HTTP/1.1 features used by Squid.


For instance, recent Sharepoint software versions have been found to 
*assume* and *require* that all proxies in existence support HTTP/1.1 
features which are not supported by the common Squid-3.1 and older 
installations.



#2 NTLM auth does *not* play nicely with HTTP. Its replacement,
Negotiate, plays a lot nicer but still violates several critical HTTP
requirements. They are supported in HTTP proxies like Squid by code
hacks which break HTTP behaviour. As we have improved the code and
tried to make Squid follow correct HTTP behaviour properly, the HTTP
changes have sometimes broken these auth schemes and required re-fixing
the code doing those hacks.


Sorry for the rant-like text, but that is the situation. If possible 
please use the latest Squid-3 release for best behaviour. It almost 
completely works for both NTLM and Negotiate with the currently popular 
Sharepoint versions. (There is one more fix in QA right now for both 
Negotiate and NTLM, and I can't speak for any future discoveries).




 3) Can I use multiple SSL certificates for the proxy, like I can do in Apache?


How do you do it in Apache? What version of Apache? What version of
Squid? Can you change your version of Squid if it is too old? - these
are critical pieces of information which you have omitted.


Amos


Re: [squid-users] Squid Child process restarting....

2013-03-26 Thread Amos Jeffries

On 27/03/2013 4:28 a.m., Farooq Bhatti wrote:

Hi All,

My squid is restarting due to following as shown in cache log.

2013/03/26 20:14:02| TunnelStateData::Connection::error: FD 1767: read/write
failure: (32) Broken pipe
2013/03/26 20:14:12| TunnelStateData::Connection::error: FD 3896: read/write
failure: (32) Broken pipe
2013/03/26 20:14:15| Reconfiguring Squid Cache (version 3.1.10)...
2013/03/26 20:14:15| FD 14 Closing HTTP connection
2013/03/26 20:14:15| FD 17 Closing HTTP connection
2013/03/26 20:14:15| assertion failed: disk.cc:377: "fd >= 0"

And in message log I got this:

Mar 26 20:14:15 stdproxy squid[1960]: Squid Parent: child process 23669
exited due to signal 6 with status 0
Mar 26 20:14:18 stdproxy squid[1960]: Squid Parent: child process 24168
started

I am using Squid Cache: Version 3.1.10

Please help to resolve this issue.


Please upgrade to a currently supported Squid. I think this was
resolved already, but as part of later code cleanups, so I can't point
you at a particular patch.


Amos


Re: [squid-users] Happy eyeballs

2013-03-26 Thread Mark Davies
On Wed, 27 Mar 2013, Amos Jeffries wrote:
 Squid has a partial implementation of happy eyeballs, added in 3.2+,
 which performs the parallel DNS lookup portion of the algorithm
 but does not perform the parallel v6+v4 SYN portion, which would halve
 the server TCP capacity for only rare gains (like karen).

OK makes sense.

 Also, be aware the timeout settings in Squid-3.3 connection setup
 have undergone a major redesign.
[...]
 
 So what you should be seeing in 3.3.1 is that connection to
 karen.net.nz takes no more than connect_timeout multiplied by
 the number of IPv6 addresses karen presents to you, plus the amount of
 time the IPv4 takes to respond with data.


In terms of actual page viewing it's worse than that.  It's as you say
to get the base page, but then you have to repeat the wait for any
elements the page references (css, images etc.) before the browser
renders the page (depending on how much parallelism the browser
uses when grabbing the elements).

   NP: connect_timeout default is still the old 60 seconds, you can
 safely drop it to a few seconds if you need to in 3.3.

I've dropped it to 10 seconds, as that shouldn't interfere with any
real site and lets sites like karen eventually render.  I might reduce
it even more once I decide what a legitimate max connect time is on
today's internet.
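
In squid.conf terms that is just (for anyone following along):

connect_timeout 10 seconds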

Now to go and prod karen about getting their site fixed for v6.

cheers
mark


Re: [squid-users] Squid Child process restarting....

2013-03-26 Thread Squidblacklist
First, if I were in your position, I would test the disk with the
manufacturer's diagnostic tool and make sure I wasn't dealing with a
failing disk. Maybe run fsck as well.



-
Signed,

Fix Nichols

http://www.squidblacklist.org


Re: [squid-users] 3.3.1 ssl-bump-server-first for google domain lockdown

2013-03-26 Thread Alex Rousskov
On 03/24/2013 01:39 AM, Robert Mason wrote:
 Hi Alex!  Thanks for the reply.
 
 It seems to see the CONNECT, yes... but still no joy.
 
 192.168.99.100 TCP_MISS/200 114940 CONNECT mail.google.com:443

Good. This means that Squid intercepts HTTPS traffic from the browser.
The next step is to figure out whether Squid bumps those intercepted
connections. Are there non-CONNECT requests for mail.google.com:443 in
access.log?
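
(That is, with bumping working you would expect lines roughly like this, with
the method and URL visible rather than only CONNECT - format illustrative:)

192.168.99.100 TCP_MISS/200 4310 GET https://mail.google.com/mail/ - HIER_DIRECT/... text/html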


 ssl_bump server-first

Your ssl_bump directive is missing an ACL. Try adding all:

ssl_bump server-first all


HTH,

Alex.


 On Fri, Mar 22, 2013 at 12:19 AM, Alex Rousskov wrote:
 On 03/21/2013 04:21 PM, Robert Mason wrote:
 Hi all,

 I've been trying to set up a system to do ssl interception and dynamic
 certificate generation, in order to prevent our users from signing in
 to their personal gmail accounts (our company mail is through gmail).

 From the info here
 http://support.google.com/a/bin/answer.py?hl=en&answer=1668854 I found
 that I needed to add a header in the request, and have that working:

 request_header_add X-GoogApps-Allowed-Domains rodeofx.com all

 adds it to every http request, which I'm fine with, but I need to add it
 to https requests and that's not happening.

 I have tried things like:

 http_port 192.168.168.253:3128 ssl-bump generate-host-certificates=on
 dynamic_cert_mem_cache_size=4MB cert=/etc/squid/ssl_cert/myCA.pem

 always_direct allow all
 ssl_bump allow all
 # the following two options are unsafe and not always necessary:
 #sslproxy_cert_error allow all
 #sslproxy_flags DONT_VERIFY_PEER

 sslcrtd_program /etc/squid/libexec/squid/ssl_crtd -s
 /etc/squid/var/lib/ssl_db -M 4MB
 sslcrtd_children 5

 No love though... I still get the regular google cert and don't see
 certs in my ssl_db folder.

 If anyone has suggestions to offer I'd really appreciate it.

 Does Squid get CONNECT requests for Google domains? Check access.log.

 If it does, are there any errors or warnings in cache.log?

 Alex.




Re: [squid-users] Sponsor etag/vary support for Squid 3.3

2013-03-26 Thread Alex Rousskov
On 03/20/2013 12:51 PM, Ed W wrote:

 I'm picking up an old thread from some time back.  I remain interested
 in getting support for ETag into squid (and related revalidation support).
 
 My main requirement is that I have two proxies on either side of a
 bandwidth-limited link (with high cost).  I want the situation that when
 a client GETs some object,

A client GETs some object currently in the cache and carrying an ETag, but
that cached object is either stale or being forcefully reloaded by the
client, right?


 we can convert this to an If-None-Match and
 trust that the ETag confirms the object is unchanged.



 Note, I am aware of the limitations of trusting ETags. In my setup I
 will have control over the proxy on the high-speed side of the
 connection, and we can use various methods on that side to ensure that
 the ETags are sane. The main goal is to minimise bandwidth across the
 intermediate (expensive) link.
 
 Previously we discussed all kinds of complex ideas, including
 implementing trailers and custom headers with hash values.  On
 reflection I think everything required can be done using only ETag
 revalidation (and some tweaking of ETags, but squid need know nothing
 about that...)

Yes, reload-into-If-None-Match and stale-into-If-None-Match features
sound simple. The latter may even be supported already (will check). If
something outside of Squid provides reliable-enough ETags to all
cachable responses, then the complexities discussed earlier go away.
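
Concretely, the exchange being described would look like this on the wire (a
sketch - names and values illustrative):

# the proxy holds a stale copy with ETag "abc123" and, instead of a plain
# GET, forwards a conditional request across the expensive link:
GET /big/object HTTP/1.1
Host: origin.example.com
If-None-Match: "abc123"

# if the object is unchanged, only headers cross the link:
HTTP/1.1 304 Not Modified
ETag: "abc123"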

Please confirm whether my understanding of your updated requirements is
correct.


Thank you,

Alex.