Re: [squid-users] ICAP protocol error

2013-07-01 Thread guest01
Hi guys,

It seems we solved the problem: Squid was running out of file
descriptors; until recently the limit was 1024.
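
For reference, a sketch of how such a limit is typically raised (values and
paths here are illustrative, not our exact change; max_filedescriptors
assumes a build with setrlimit support):

# /etc/security/limits.conf - raise the OS per-process limit for the squid user
squid soft nofile 16384
squid hard nofile 16384

# squid.conf - let squid actually use the higher limit
max_filedescriptors 16384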

There were also many log entries:
2013/06/13 14:15:11| client_side.cc(3032) okToAccept: WARNING! Your
cache is running out of filedescriptors

Checking with squidclient (mgr:info):
File descriptor usage for squid:
Maximum number of file descriptors:   4096
Largest file desc currently in use:   1644
Number of file desc currently in use: 1263
Files queued for open:   0
Available number of file descriptors: 2833
Reserved number of file descriptors:   100
Store Disk files open:   0

My guess is that there must have been many other connections which did
not work because of the lack of file descriptors. Anyway, it seems to
work now.

Thank you all very much!

regards,
Peter

On Thu, Jun 13, 2013 at 10:30 PM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 Hey,

 Since you are using it only for filtering, it seems to me that you are
 not really using the machine's CPU at all with only 4 instances.
 You can use more of the CPU on the same machine with SMP support (and
 without).
 I won't tell you to experiment on your clients, since that would be rude,
 but since it's only filtering and there is no cache involved, you can
 easily and smoothly run just another instance of squid 3.3.5 to test the
 problems.
 My advice would be to try 3.3, just to let these monsters make good use
 of their CPU.
 If I'm not wrong, each machine can handle more CPU, more connections etc.
 than it is handling right now.
 Again, you will need to think about it and plan the migration.
 SMP sometimes is not just out-of-the-box, specifically in your scenario,
 which is a very loaded server.

 Another issue is the network interface, which can slow things down.
 If you can use one interface for only the ICAP service connection, I
 would go for it.
 Also, if you can use more than one interface, as in bonding/teaming or
 fibre channel, I believe some network issues will simply not apply in
 your case.

 If you can probe the ICAP service with a simple script, it can give you a
 better indication of whether the fault is a squid 3.1 problem or the ICAP
 service being too loaded.

 You can use tcpdump to capture one of the many ICAP REQMOD requests and
 write a small nagios-like script that reports OK/ERR, feeding the results
 into MRTG or any other tool.
 This way you can pinpoint the right direction (squid or the ICAP
 service).
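
 A minimal sketch of such a probe in Python (nagios-style exit codes; host,
 port and service name are the ones quoted in this thread, and the script
 is an illustration, not a tested tool):

 #!/usr/bin/env python
 # Probe an ICAP service with an RFC 3507 OPTIONS request and report
 # OK/ERR nagios-style. Adjust HOST/PORT/SERVICE as needed.
 import socket
 import sys

 HOST, PORT, SERVICE = "10.122.125.48", 1344, "wwreqmod"

 def probe(timeout=5):
     req = ("OPTIONS icap://%s:%d/%s ICAP/1.0\r\n"
            "Host: %s:%d\r\n\r\n") % (HOST, PORT, SERVICE, HOST, PORT)
     try:
         s = socket.create_connection((HOST, PORT), timeout)
         s.sendall(req.encode("ascii"))
         # the first response line is enough to judge service health
         status = s.recv(4096).decode("ascii", "replace").split("\r\n")[0]
         s.close()
     except (socket.error, socket.timeout) as e:
         print("ICAP CRITICAL - %s" % e)
         return 2
     if status.startswith("ICAP/1.0 200"):
         print("ICAP OK - %s" % status)
         return 0
     print("ICAP CRITICAL - %s" % status)
     return 2

 if __name__ == "__main__":
     sys.exit(probe())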

 This can also be used if you use nagios:
 http://exchange.nagios.org/directory/Plugins/Anti-2DVirus/check_icap-2Epl/details

 What monitoring system are you using? Nagios? Zabbix? Munin? Icinga? PRTG?

 Thanks,
 Eliezer


 On 6/13/2013 5:22 PM, guest01 wrote:

 Hi,

 Thanks for your answers.

 At the moment, we have 4 monster-servers, no indication of any
 performance issues. (there is an extensive munin monitoring)

 TCP-states: http://prntscr.com/19qle2
 CPU: http://prntscr.com/19qltm
 Load: http://prntscr.com/19qlwe
 Vmstat: http://prntscr.com/19qm3v
 Bandwidth: http://prntscr.com/19qmc4

 We have 4 squid instances per server and 4 servers, handling
 altogether approx 2000 RPS without hard-disk caching. Half of them are
 doing kerberos authentication and the other half are doing LDAP
 authentication. Content scanning is done by a couple (6 at the moment)
 of webwasher appliances. These are my cache settings per instance:
 # cache specific settings
 cache_replacement_policy heap LFUDA
 cache_mem 1600 MB
 memory_replacement_policy heap LFUDA
 maximum_object_size_in_memory 2048 KB
 memory_pools off
 cache_swap_low 85
 cache_swap_high 90

 My plan is to adjust a couple of ICAP timers and increase ICAP
 debugging (to 93,4 or 93,5). I found these messages:
 2013/06/13 03:49:42| essential ICAP service is down after an options
 fetch failure: icap://10.122.125.48:1344/wwreqmod [down,!opt]
 2013/06/13 11:09:33.530| essential ICAP service is suspended:
 icap://10.122.125.48:1344/wwreqmod [down,susp,fail11]

 What does down,!opt or down,susp,fail11 mean?

 thanks!
 Peter



 On Thu, Jun 13, 2013 at 2:41 AM, Eliezer Croitoru elie...@ngtech.co.il
 wrote:

 Hey,

 There was a bug that is related to LOAD on a server.
 Your server is a monster!!
 Squid 3.1.12 cannot even use the amount of CPU you have on this machine,
 as far as I can tell from my knowledge, unless you have a couple of
 clever ideas up your sleeve (routing, marking etc.).

 To make sure what the problem is, I would also recommend verifying the
 load on the server in terms of open and half-open sessions/connections
 to squid and to the icap service/server.
 Are you using this squid server for filtering only, or also for caching?
 If so, what is the cache size?

 The above questions can help us determine your situation and try to help you
 verify that the culprit is a specific bug that, from my testing on 3.3.5,
 doesn't exist anymore.
 If you are up for the task to verify the loads on the server

Re: [squid-users] ICAP protocol error

2013-06-13 Thread guest01
Hi,

Thanks for your answers.

At the moment, we have 4 monster-servers, no indication of any
performance issues. (there is an extensive munin monitoring)

TCP-states: http://prntscr.com/19qle2
CPU: http://prntscr.com/19qltm
Load: http://prntscr.com/19qlwe
Vmstat: http://prntscr.com/19qm3v
Bandwidth: http://prntscr.com/19qmc4

We have 4 squid instances per server and 4 servers, handling
altogether approx 2000 RPS without hard-disk caching. Half of them are
doing kerberos authentication and the other half are doing LDAP
authentication. Content scanning is done by a couple (6 at the moment)
of webwasher appliances. These are my cache settings per instance:
# cache specific settings
cache_replacement_policy heap LFUDA
cache_mem 1600 MB
memory_replacement_policy heap LFUDA
maximum_object_size_in_memory 2048 KB
memory_pools off
cache_swap_low 85
cache_swap_high 90

My plan is to adjust a couple of ICAP timers and increase ICAP
debugging (to 93,4 or 93,5). I found these messages:
2013/06/13 03:49:42| essential ICAP service is down after an options
fetch failure: icap://10.122.125.48:1344/wwreqmod [down,!opt]
2013/06/13 11:09:33.530| essential ICAP service is suspended:
icap://10.122.125.48:1344/wwreqmod [down,susp,fail11]

What does down,!opt or down,susp,fail11 mean?
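
For reference, these states map onto squid.conf's documented ICAP failure
handling (a sketch with the documented 3.1 defaults, not our production
values):

# a service is suspended after more than this many failures;
# "fail11" above would be one past this default limit of 10
icap_service_failure_limit 10
# seconds a suspended service stays down before squid retries it
icap_service_revival_delay 180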

thanks!
Peter



On Thu, Jun 13, 2013 at 2:41 AM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 Hey,

 There was a bug that is related to LOAD on a server.
 Your server is a monster!!
 Squid 3.1.12 cannot even use the amount of CPU you have on this machine,
 as far as I can tell from my knowledge, unless you have a couple of
 clever ideas up your sleeve (routing, marking etc.).

 To make sure what the problem is, I would also recommend verifying the
 load on the server in terms of open and half-open sessions/connections
 to squid and to the icap service/server.
 Are you using this squid server for filtering only, or also for caching?
 If so, what is the cache size?

 The above questions can help us determine your situation and try to help you
 verify that the culprit is a specific bug that, from my testing on 3.3.5,
 doesn't exist anymore.
 If you are up for the task of verifying the loads on the server, I can tell
 you it's a 90% bet that it is the bug.
 What I had was a problem where, once squid went over 900 RPS, the ICAP
 service would go into a mode in which it stopped responding to requests
 (and showed the mentioned screen).
 This bug was tested on a very slow machine compared to yours.
 On a monster like yours, the effect I tested might not appear with the
 same side effects, like denial of service, but rather as an interruption
 of service which your monster recovers from very quickly.

 I'm here if you need any assistance,
 Eliezer


 On 6/12/2013 4:57 PM, guest01 wrote:

 Hi guys,

 We are currently using Squid 3.1.12 (old, I know) on RHEL 5.8 64bit
 (HP ProLiant DL380 G7 with 16 CPU and 28GB RAM)
 Squid Cache: Version 3.1.12
 configure options:  '--enable-ssl' '--enable-icap-client'
 '--sysconfdir=/etc/squid' '--enable-async-io' '--enable-snmp'
 '--enable-poll' '--with-maxfd=32768' '--enable-storeio=aufs'
 '--enable-removal-policies=heap,lru' '--enable-epoll'
 '--disable-ident-lookups' '--enable-truncate'
 '--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid'
 '--with-default-user=squid' '--prefix=/opt/squid' '--enable-auth=basic
 digest ntlm negotiate'
 '-enable-negotiate-auth-helpers=squid_kerb_auth'
 --with-squid=/home/squid/squid-3.1.12 --enable-ltdl-convenience

 As ICAP server, we are using McAfee Webwasher 6.9 (old too, I know).
 Up until recently we hardly had problems with this environment.
 Squid is doing authentication via Kerberos and passing the username to
 the Webwasher, which does an LDAP lookup to find the user's groups
 and assigns a policy based on group membership.
 We have multiple Squids and multiple Webwashers behind a hardware
 loadbalancer, approx 15k users.

 For a couple of weeks now, we have been getting an ICAP server error
 message almost daily, similar to:
 http://support.kaspersky.com/2723
 Unfortunately, I cannot figure out why. I blame the webwasher, but I
 am not 100% sure.

 This is my ICAP configuration:
 #ICAP
 icap_enable on
 icap_send_client_ip on
 icap_send_client_username on
 icap_preview_enable on
 icap_preview_size 30
 icap_uses_indirect_client off
 icap_persistent_connections on
 icap_client_username_encode on
 icap_client_username_header X-Authenticated-User
 icap_service service_req reqmod_precache bypass=0
 icap://10.122.125.48:1344/wwreqmod
 adaptation_access service_req deny favicon
 adaptation_access service_req deny to_localhost
 adaptation_access service_req deny from_localnet
 adaptation_access service_req deny whitelist
 adaptation_access service_req deny dst_whitelist
 adaptation_access service_req deny icap_bypass_src
 adaptation_access service_req deny icap_bypass_dst
 adaptation_access service_req allow all
 icap_service service_resp respmod_precache bypass=0
 icap://10.122.125.48:1344/wwrespmod
 adaptation_access

[squid-users] ICAP protocol error

2013-06-12 Thread guest01
Hi guys,

We are currently using Squid 3.1.12 (old, I know) on RHEL 5.8 64bit
(HP ProLiant DL380 G7 with 16 CPU and 28GB RAM)
Squid Cache: Version 3.1.12
configure options:  '--enable-ssl' '--enable-icap-client'
'--sysconfdir=/etc/squid' '--enable-async-io' '--enable-snmp'
'--enable-poll' '--with-maxfd=32768' '--enable-storeio=aufs'
'--enable-removal-policies=heap,lru' '--enable-epoll'
'--disable-ident-lookups' '--enable-truncate'
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid'
'--with-default-user=squid' '--prefix=/opt/squid' '--enable-auth=basic
digest ntlm negotiate'
'-enable-negotiate-auth-helpers=squid_kerb_auth'
--with-squid=/home/squid/squid-3.1.12 --enable-ltdl-convenience

As ICAP server, we are using McAfee Webwasher 6.9 (old too, I know).
Up until recently we hardly had problems with this environment.
Squid is doing authentication via Kerberos and passing the username to
the Webwasher, which does an LDAP lookup to find the user's groups
and assigns a policy based on group membership.
We have multiple Squids and multiple Webwashers behind a hardware
loadbalancer, approx 15k users.

For a couple of weeks now, we have been getting an ICAP server error
message almost daily, similar to:
http://support.kaspersky.com/2723
Unfortunately, I cannot figure out why. I blame the webwasher, but I
am not 100% sure.

This is my ICAP configuration:
#ICAP
icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_preview_enable on
icap_preview_size 30
icap_uses_indirect_client off
icap_persistent_connections on
icap_client_username_encode on
icap_client_username_header X-Authenticated-User
icap_service service_req reqmod_precache bypass=0
icap://10.122.125.48:1344/wwreqmod
adaptation_access service_req deny favicon
adaptation_access service_req deny to_localhost
adaptation_access service_req deny from_localnet
adaptation_access service_req deny whitelist
adaptation_access service_req deny dst_whitelist
adaptation_access service_req deny icap_bypass_src
adaptation_access service_req deny icap_bypass_dst
adaptation_access service_req allow all
icap_service service_resp respmod_precache bypass=0
icap://10.122.125.48:1344/wwrespmod
adaptation_access service_resp deny favicon
adaptation_access service_resp deny to_localhost
adaptation_access service_resp deny from_localnet
adaptation_access service_resp deny whitelist
adaptation_access service_resp deny dst_whitelist
adaptation_access service_resp deny icap_bypass_src
adaptation_access service_resp deny icap_bypass_dst
adaptation_access service_resp allow all

Could an upgrade (either to 3.2 or to 3.3) solve this problem? (There
are more ICAP options available in recent squid versions.)
Unfortunately, this is a rather complex organisational process, which is
why I have not done it yet.
I do have a test machine, but this ICAP error is not reproducible there,
only in production. Server load and IO throughput are OK; there is
nothing suspicious on the server. I recently activated ICAP debug
option 93 and found the following messages:
2013/06/12 15:32:15| suspending ICAP service for too many failures
2013/06/12 15:32:15| essential ICAP service is suspended:
icap://10.122.125.48:1344/wwrespmod [down,susp,fail11]
2013/06/12 15:35:15| essential ICAP service is up:
icap://10.122.125.48:1344/wwreqmod [up]
2013/06/12 15:35:15| essential ICAP service is up:
icap://10.122.125.48:1344/wwrespmod [up]
I don't know why this check failed, but these messages usually do not
appear at the times when clients are getting the ICAP protocol error page.

Another possibility would be the ICAP bypass, but our ICAP server is
doing anti-malware checking, and that's why I don't want to activate
this feature.

Does anybody have other ideas?

Thanks!
Peter


[squid-users] SSL certificate issue with Squid as Forward-Proxy

2012-10-18 Thread guest01
Hi,

We are using Squid 3.1.12[1] in our environment as a forward proxy with
a PAC file for HTTP and HTTPS. As far as I know, HTTPS works via the
CONNECT method (we are not using any SSL-bump stuff) and should not
touch the SSL certificate at all. Unfortunately, we are currently
experiencing strange behavior with an SSL certificate for only a
couple of users (win7 clients with IE9 and ldap basic authentication):

URL: https://www.brandschutz-online.cc/kastner/

certification path without proxy:
GeoTrust Global CA
  - RapidSSL CA
- www.brandschutz-online.cc

If we are using Squid as proxy, we get the following certification path in IE9:
www.brandschutz-online.cc

IE9 is complaining about a certificate error.

Any idea why this is happening? Usually, everything is working for
HTTPs without any browser complaints.
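
For what it's worth, one way to inspect the chain the server actually sends
during the TLS handshake (a sketch; run from the proxy host, since CONNECT
only tunnels the handshake through unchanged):

openssl s_client -connect www.brandschutz-online.cc:443 -showcerts < /dev/null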

regards,
Peter

[1]Squid Cache: Version 3.1.12
configure options:  '--enable-ssl' '--enable-icap-client'
'--sysconfdir=/etc/squid' '--enable-async-io' '--enable-snmp'
'--enable-poll' '--with-maxfd=32768' '--enable-storeio=aufs'
'--enable-removal-policies=heap,lru' '--enable-epoll'
'--disable-ident-lookups' '--enable-truncate'
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid'
'--with-default-user=squid' '--prefix=/opt/squid' '--enable-auth=basic
digest ntlm negotiate'
'-enable-negotiate-auth-helpers=squid_kerb_auth'
--with-squid=/home/squid/squid-3.1.12 --enable-ltdl-convenience
OS: Red Hat Enterprise Linux Server release 5.5 (Tikanga) 64Bit
ICAP-Server: McAfee Webwasher


Re: [squid-users] Squid transparent proxy issues with redirecting from HTTP to HTTPs

2012-03-23 Thread guest01
 ok, in my setup I am using the same IP with different Ports:

 http_port 10.122.125.2:3129 intercept name=transparentHTTPPort
 https_port 10.122.125.2:3130 intercept cert=/etc/squid/squid.pem
 name=transparentHTTPsPort
 acl redirectbehavior myportname transparentHTTPPort

 And how would I apply the myportname-acl? (Sounds like a noob
 question, but I could not find helpful documentation)

I am still having problems understanding what the myportname-acl is
used for or how to use it.

My Test-Squid-Server is using one IP-address with multiple ports:
3128 - default forward proxy port (used by clients who know they have
to use a proxy)
3129 - HTTP intercept (dnat via Firewall)
3130 - HTTPs intercept (dnat via Firewall)

The problem is an HTTP-to-HTTPS redirect, which does not work. I tried
to google the myportname/myip ACLs, but I could not find anything
useful/working.
Can anybody please explain how to use them, and whether they can solve
this problem? Thanks!
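
For illustration, this is how a myportname ACL would typically be applied
(a sketch: like any ACL, it only takes effect once referenced from an
access rule; here it limits a hypothetical rewrite helper, which would
issue the redirect, to requests that arrived on the intercepted HTTP port):

acl viaTransparentHTTP myportname transparentHTTPPort
url_rewrite_access allow viaTransparentHTTP
url_rewrite_access deny all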

regards
Peter


[squid-users] Squid transparent proxy issues with redirecting from HTTP to HTTPs

2012-03-16 Thread guest01
Hi guys,

We are currently using our Squid (3.1.x) as a transparent HTTP proxy
(with dst nat). We also want to use our Squid as a transparent HTTPS
proxy, which works too, despite our Internet research turning up many
results saying transparent HTTPS proxying is not possible. I admit
that there are some issues, but we only want to use it for our guest
LAN; not every site has to work. Unfortunately, there are many sites
which start as an HTTP site and redirect to HTTPS before receiving login
credentials (e.g. amazon) or just redirect (e.g.
https://www.juniper.net/customers/csc/). In these situations, my
firefox prints the following error message: The page isn't redirecting
properly. It seems Squid can't handle the 302 (in transparent HTTPS mode?)

https://www.juniper.net/customers/csc/

GET /customers/csc/ HTTP/1.1
Host: www.juniper.net
User-Agent: Mozilla/5.0 (Windows NT 5.1; rv:10.0.2) Gecko/20100101
Firefox/10.0.2
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Cookie: WT_FPC=waytoolongstuff

HTTP/1.0 302 Moved Temporarily
Location: https://www.juniper.net/customers/csc/
Content-Length: 222
Content-Type: text/html; charset=iso-8859-1
Server: Concealed by Juniper Networks DX
Vary: Accept-Encoding
Date: Fri, 16 Mar 2012 13:23:35 GMT
Set-Cookie: rl-sticky-key=82546ce42517c9836c5deb8079756e0e; path=/;
expires=Fri, 16 Mar 2012 14:08:34 GMT
X-Cache: MISS from xlsqit01_1
Via: 1.0 xlsqit01_1 (squid/3.1.16)
Connection: keep-alive

Can anybody offer a solution, or how do you allow HTTPS in your guest
(W)LANs? Direct connections, or using proxy scripts (WPAD, ...)?

thanks & best regards,
Peter


Re: [squid-users] Squid transparent proxy issues with redirecting from HTTP to HTTPs

2012-03-16 Thread guest01
Hi,

Thanks for the fast response.

On Fri, Mar 16, 2012 at 3:08 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 17/03/2012 2:27 a.m., guest01 wrote:

 Can anybody offer a solution, or how do you allow HTTPS in your guest
 (W)LANs? Direct connections, or using proxy scripts (WPAD, ...)?


 Add a name=X parameter to your http_port intercept port and use the
 myportname ACL type to limit the redirect only to happen on requests
 arriving via that port.

ok, in my setup I am using the same IP with different Ports:

http_port 10.122.125.2:3129 intercept name=transparentHTTPPort
https_port 10.122.125.2:3130 intercept cert=/etc/squid/squid.pem
name=transparentHTTPsPort
acl redirectbehavior myportname transparentHTTPPort

And how would I apply the myportname-acl? (Sounds like a noob
question, but I could not find helpful documentation)


 That will get the redirects going and then you face the actual blocker
 problem...

  ... when you do HTTPS intercept on a guest, how do you intend to install
 your local CA on the guest browsers to prevent fake-certificate warnings on
 every page load they do?
  SSL interception in Squid only supports environments where the browsers
 are configured to trust the local proxy's CA. DMZ, Captive Portals, and
 residential ISP type networks cannot do it without opening themselves up to
 a range of legal issues.

We don't, because we can't. It is only an internal guest LAN, mainly for
customers and private devices (like smartphones and tablets).
Unfortunately, there are some security regulations which prohibit
direct HTTPS connections; everything has to be proxified, even
non-HTTP traffic like the Android Market/Google Play (that's another,
non-squid-related issue).

thanks!


[squid-users] Re: Squid Ldap Authenticators

2012-03-13 Thread guest01
5 bad SACKs received
Detected reordering 82 times using FACK
Detected reordering 62 times using SACK
Detected reordering 1489 times using reno fast retransmit
Detected reordering 12950 times using time stamp
13026 congestion windows fully recovered
83165 congestion windows partially recovered using Hoe heuristic
TCPDSACKUndo: 5989
18959 congestion windows recovered after partial ack
3362 TCP data loss events
TCPLostRetransmit: 1
57745 timeouts after reno fast retransmit
1198 timeouts after SACK recovery
31172 timeouts in loss state
473117 fast retransmits
1478 forward retransmits
1445453 retransmits in slow start
888214 other TCP timeouts
TCPRenoRecoveryFail: 134832
27 sack retransmits failed
589052 packets collapsed in receive queue due to low socket buffer
4622235 DSACKs sent for old packets
9737 DSACKs sent for out of order packets
15792 DSACKs received
1 DSACKs for out of order packets received
421669 connections reset due to unexpected data
2977 connections reset due to early user close
11177 connections aborted due to timeout
IpExt:
InBcastPkts: 1294824
OutBcastPkts: 648179

root@xlsqip02 ~ # uptime
 15:43:02 up 6 days, 54 min,  2 users,  load average: 0.12, 0.14, 0.16

Usually, when we see high squid authenticator response times, there is
an issue, e.g.:
http://desmond.imageshack.us/Himg840/scaled.php?server=840filename=squid3ldapauthenticator.pngres=medium

sysctl-values: http://pastie.org/3586014

Sometimes restarting the network helps, or rebooting the server, but
during the last couple of days these issues have occurred way too often.

Does anybody have any idea what else we could check, or has anybody
experienced anything similar? Thanks!

regards
Peter


On Tue, Mar 13, 2012 at 3:07 PM, guest01 gues...@gmail.com wrote:
 Hi,

 We are having strange Sq


Re: [squid-users] requests per second

2012-03-12 Thread guest01
Hi,

We are using Squid as a forward proxy for about 10-20k clients with
about 1200 RPS.

In our setup, we are using 4 physical servers (HP ProLiant DL380 G6/G7
with 16 CPUs, 32GB RAM) running RHEL5.8 64Bit, behind a dedicated
hardware loadbalancer. At the moment, the average server load is
approx 0.6.

We are currently using:
Squid Cache: Version 3.1.16
configure options:  '--enable-ssl' '--enable-icap-client'
'--sysconfdir=/etc/squid' '--enable-async-io' '--enable-snmp'
'--enable-poll' '--with-maxfd=32768' '--enable-storeio=aufs'
'--enable-removal-policies=heap,lru' '--enable-epoll'
'--disable-ident-lookups' '--enable-truncate'
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid'
'--with-default-user=squid' '--prefix=/opt/squid' '--enable-auth=basic
digest ntlm negotiate'
'-enable-negotiate-auth-helpers=squid_kerb_auth'
'--with-aufs-threads=32' '--enable-linux-netfilter'
'--enable-external-acl-helpers' --with-squid=/home/squid/squid-3.1.16
--enable-ltdl-convenience

IMHO, it is really important which features you are planning to use.
For example, we are using authentication (kerberos, ntlm, ldap) and
ICAP content adaptation. Without those, our RPS rate would be much
higher. Because of the lacking SMP support in 3.1, we are using 4
instances per server (see the sketch below). At the beginning, the
setup used to be much simpler! ;-)
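
The layout itself is nothing exotic (a sketch; paths are hypothetical):
each instance gets its own config with unique http_port, pid_filename and
access_log settings, and is started separately:

squid -f /etc/squid/instance1.conf
squid -f /etc/squid/instance2.conf
# ... and likewise for instances 3 and 4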

hth,
Peter

On Mon, Mar 12, 2012 at 1:47 PM, David B. haazel...@gmail.com wrote:
 Hi,

 It's only a reverse proxy cache, not a proxy. This is different.
 We use squid only for images.

 Squid : 3.1.x
 OS : debian 64 bits

 On 12/03/2012 12:44, Student University wrote:
 Hi David 

 You achieve 2K with what version of squid ,,,
 do you have any special configuration tweaks ,,,

 also what if i use SSD [200,000 Random Write 4K IOPS]

 Best Regards ,,,
 Liley



Re: [squid-users] load balancing

2011-11-08 Thread guest01
Hi,

Yes, it is even pretty easy to accomplish. We are using a dedicated
loadbalancer (but you can of course use LVS as the loadbalancer) which
balances proxy requests across 8 squid instances on 4 different real
servers with Kerberos authentication. We are not using any cache
hierarchy, just 4 standalone squid servers.
Just create a virtual loadbalancer IP, configure a DNS entry for that
IP and configure this FQDN (don't use the IP address, because Kerberos
won't work) in your client browsers. Create a Kerberos service key
(keytab) for this hostname/FQDN (I assume you already did something
similar for your current setup) and use this keytab file on your squid
servers. That's pretty much it.
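
A sketch of the Kerberos side of that (the FQDN and paths are placeholders;
the point is that the service principal matches the loadbalancer's DNS
name, not the real servers' names):

# create HTTP/proxy.example.com@EXAMPLE.COM once (e.g. with msktutil or
# ktpass), copy the resulting keytab to every real server, and point squid
# at it via its startup environment:
#   export KRB5_KTNAME=/etc/squid/proxy.keytab
# then in squid.conf on each instance:
auth_param negotiate program /opt/squid/libexec/negotiate_kerberos_auth -s HTTP/proxy.example.com
auth_param negotiate children 30
auth_param negotiate keep_alive on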

regards
Peter

On Tue, Nov 8, 2011 at 2:43 PM, Nicola Gentile nikko...@gmail.com wrote:
 Good Morning,
 I have a squid proxy on debian with kerberos authentication, and it works fine.
 I would like to create a load-balancing cluster of 2/3 squid proxies.
 In particular, the clients connect to the load balancer, which
 redirects the request to one of the proxies.
 These proxies must authenticate through kerberos.

 Is it possible to implement something like that?

 What can I use?

 Best regards.

 Nicola



Re: [squid-users] Problem downloading file from catalog.update.microsoft.com and MS BITS(Background Intelligent Transfer Service)

2011-07-28 Thread guest01
Ok, thanks for your answer.

When do you expect a stable version of Squid 3.2? I am considering
upgrading Squid 3.1 to Squid 3.2, but as long as it is marked as beta,
I don't get permission to do it. We have been using Squid 3.2 in our
testing environment for months without trouble, but as long as it is
not marked as stable, I can't do anything.

regards
Peter

On Tue, Jul 26, 2011 at 7:09 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 25/07/11 23:34, guest01 wrote:

 Hi guys,

 I have a problem with the site catalog.update.microsoft.com and MS BITS
 (Background Intelligent Transfer Service) on Squid 3.1.12. Squid 3.2.0.7
 seems to work without problems.
 Most of my clients use Kerberos authentication and WinXP as the client OS.
 Unfortunately, BITS can only use Basic Authentication. Basically, as
 far as I have figured out, BITS is sending a HEAD request:

 Squid 3.1.12:
 HEAD
 http://download.windowsupdate.com/msdownload/update/software/updt/2011/06/rootsupd_f54752ec63369522f37e545325519ee434cdf439.exe
 HTTP/1.1
 Accept: */*
 Accept-Encoding: identity
 User-Agent: Microsoft BITS/6.7
 Host: download.windowsupdate.com
 Proxy-Connection: Keep-Alive

 HTTP/1.0 407 Proxy Authentication Required
 Server: squid/3.1.12
 Mime-Version: 1.0
 Date: Wed, 20 Jul 2011 10:35:24 GMT
 Content-Type: text/html
 Content-Length: 1702
 X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
 Vary: Accept-Language
 Content-Language: en
 Proxy-Authenticate: Negotiate
 Proxy-Authenticate: Basic realm=Proxy
 X-Cache: MISS from xlsqip03_1
 Via: 1.0 xlsqip03_1 (squid/3.1.12)
 Connection: keep-alive

 After that, the client sends a TCP RST and nothing happens anymore.

 snip

 My question now:
 Why is Squid 3.1.12 sending an HTTP/1.0 407 and Squid 3.2.0.7 an
 HTTP/1.1 407?

 Because the squid-3.1 series is only properly HTTP/1.0 protocol compliant.
 The Squid-3.2 series supports the HTTP/1.1 protocol.

 I could not find any configuration option which could
 explain that behavior and I am not even sure if that's the problem.

 It looks like BITS requires HTTP/1.1 support to do auth properly. Though I
 can't see anything in those requests which would create that requirement.

 The big hint seems to be that it is BITS generating the RST despite squid
 saying Connection: keep-alive. Being a RST instead of FIN it could be an
 internal crash or something else going bad inside BITS.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.14
  Beta testers wanted for 3.2.0.10



[squid-users] Problem downloading file from catalog.update.microsoft.com and MS BITS(Background Intelligent Transfer Service)

2011-07-25 Thread guest01
Hi guys,

I have a problem with the site catalog.update.microsoft.com and MS BITS
(Background Intelligent Transfer Service) on Squid 3.1.12. Squid 3.2.0.7
seems to work without problems.
Most of my clients use Kerberos authentication and WinXP as the client OS.
Unfortunately, BITS can only use Basic Authentication. Basically, as
far as I have figured out, BITS is sending a HEAD request:

Squid 3.1.12:
HEAD 
http://download.windowsupdate.com/msdownload/update/software/updt/2011/06/rootsupd_f54752ec63369522f37e545325519ee434cdf439.exe
HTTP/1.1
Accept: */*
Accept-Encoding: identity
User-Agent: Microsoft BITS/6.7
Host: download.windowsupdate.com
Proxy-Connection: Keep-Alive

HTTP/1.0 407 Proxy Authentication Required
Server: squid/3.1.12
Mime-Version: 1.0
Date: Wed, 20 Jul 2011 10:35:24 GMT
Content-Type: text/html
Content-Length: 1702
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
Vary: Accept-Language
Content-Language: en
Proxy-Authenticate: Negotiate
Proxy-Authenticate: Basic realm=Proxy
X-Cache: MISS from xlsqip03_1
Via: 1.0 xlsqip03_1 (squid/3.1.12)
Connection: keep-alive

After that, the client sends a TCP RST and nothing happens anymore.

Squid 3.2.0.7
HEAD 
http://download.windowsupdate.com/msdownload/update/software/updt/2011/06/rootsupd_f54752ec63369522f37e545325519ee434cdf439.exe
HTTP/1.1
Accept: */*
Accept-Encoding: identity
User-Agent: Microsoft BITS/6.7
Host: download.windowsupdate.com
Proxy-Connection: Keep-Alive


HTTP/1.1 407 Proxy Authentication Required
Server: squid/3.2.0.7
Mime-Version: 1.0
Date: Wed, 20 Jul 2011 10:22:49 GMT
Content-Type: text/html
Content-Length: 1701
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
Vary: Accept-Language
Content-Language: en
Proxy-Authenticate: Negotiate
Proxy-Authenticate: Basic realm=Proxy
X-Cache: MISS from xlsqit01
Via: 1.1 xlsqit01 (squid/3.2.0.7)
Connection: keep-alive


HEAD 
http://download.windowsupdate.com/msdownload/update/software/updt/2011/06/rootsupd_f54752ec63369522f37e545325519ee434cdf439.exe
HTTP/1.1
Accept: */*
Accept-Encoding: identity
User-Agent: Microsoft BITS/6.7
Host: download.windowsupdate.com
Proxy-Connection: Keep-Alive


HTTP/1.1 407 Proxy Authentication Required
Server: squid/3.2.0.7
Mime-Version: 1.0
Date: Wed, 20 Jul 2011 10:22:56 GMT
Content-Type: text/html
Content-Length: 1701
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
Vary: Accept-Language
Content-Language: en
Proxy-Authenticate: Negotiate
Proxy-Authenticate: Basic realm=Proxy
X-Cache: MISS from xlsqit01
Via: 1.1 xlsqit01 (squid/3.2.0.7)
Connection: keep-alive


HEAD 
http://download.windowsupdate.com/msdownload/update/software/updt/2011/06/rootsupd_f54752ec63369522f37e545325519ee434cdf439.exe
HTTP/1.1
Accept: */*
Accept-Encoding: identity
User-Agent: Microsoft BITS/6.7
Host: download.windowsupdate.com
Proxy-Connection: Keep-Alive
Proxy-Authorization: Basic BASE64USERNAMEANDPASSWORD


HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Length: 358304
Content-Type: application/cab
Date: Wed, 20 Jul 2011 10:12:15 GMT
ETag: 80cc56dda02ecc1:0
Last-Modified: Sun, 19 Jun 2011 16:49:33 GMT
Server: Microsoft-IIS/7.5
X-Powered-By: ASP.NET
Age: 641
X-Cache: HIT from xlsqit01
Via: 1.1 xlsqit01 (squid/3.2.0.7)
Connection: keep-alive

My question now:
Why is Squid 3.1.12 sending an HTTP/1.0 407 and Squid 3.2.0.7 an
HTTP/1.1 407? I could not find any configuration option which could
explain that behavior and I am not even sure if that's the problem.

thanks and regards
Peter


Re: [squid-users] Squid uses way too much RAM and starts swapping ...

2011-05-30 Thread guest01
Hi,

Any news on this topic? Unfortunately, RAM fills up within days,
and at the moment our workaround is to do a reboot ... We would
appreciate any other solution!

thanks,
peter


On Wed, May 11, 2011 at 3:47 PM, guest01 gues...@gmail.com wrote:
 On Wed, May 11, 2011 at 10:47 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 11/05/11 19:19, guest01 wrote:

 Hi,

 I am currently using squid 3.1.12 as a forward proxy without
 hard-disk caching (only RAM is used for caching). Each server is
 running RHEL5.5 and is pretty strong (16 CPUs, 28GB RAM), but each
 server starts swapping a few days after start. The workaround at the
 moment is to reboot the server once a week, which I don't really like.
 But swapping leads to serious side effects, e.g. performance troubles,
 ...

 way too much swapping:
 http://imageshack.us/m/52/6149/memoryday.png

 I have already read a lot of posts and mails about similar problems, but
 unfortunately I was not able to solve this one. I added the following
 settings to my squid.conf file:
 # cache specific settings
 cache_replacement_policy heap LFUDA
 cache_mem 1600 MB
 memory_replacement_policy heap LFUDA
 maximum_object_size_in_memory 2048 KB
 memory_pools off
 cache_swap_low 85
 cache_swap_high 90

 (There are four squid instances per server, which means that 1600*4 =
 6400MB RAM used for caching, which is not even 1/4 of the total
 available amount of RAM. Plenty enough, don't you think?)

 No: that is for HTTP object caching, emphasis on *caching* and HTTP
 objects. In-transit objects and non-HTTP caches (IP cache, domain name
 cache, persistent connections cache, client database, via/fwd database,
 network performance cache, auth caches, external ACL caches) and the indexes
 for all those caches use other memory.

 Then again, they should all be using no more than a few GB combined. So you
 may have hit a new leak (all the known ones were resolved before 3.1.12).

 Ok, very strange. But at least it is reproducible; it takes about a
 week until squid starts to swap ...
 http://img191.imageshack.us/img191/9615/memorymonth.png


 The negative values (Memory usage for squid via mallinfo():) in the
 output below are very strange. Maybe that is a reason for running
 out of RAM?

 mallinfo() sucks badly when going above 2GB of RAM. It can be ignored.

 The section underneath it, Memory accounted for:, is Squid's own accounting
 and more of a worry. It should not have had negatives since before 3.1.10.


 HTTP/1.0 200 OK
 Server: squid/3.1.12
 Mime-Version: 1.0
 Date: Wed, 11 May 2011 07:06:10 GMT
 Content-Type: text/plain
 Expires: Wed, 11 May 2011 07:06:10 GMT
 Last-Modified: Wed, 11 May 2011 07:06:10 GMT
 X-Cache: MISS from xlsqip03_1
 Via: 1.0 xlsqip03_1 (squid/3.1.12)
 Connection: close

 Squid Object Cache: Version 3.1.12
 Start Time:     Wed, 27 Apr 2011 11:01:13 GMT
 Current Time:   Wed, 11 May 2011 07:06:10 GMT
 Connection information for squid:
         Number of clients accessing cache:      1671
         Number of HTTP requests received:       16144359
         Number of ICP messages received:        0
         Number of ICP messages sent:    0
         Number of queued ICP replies:   0
         Number of HTCP messages received:       0
         Number of HTCP messages sent:   0
         Request failure ratio:   0.00
         Average HTTP requests per minute since start:   810.3
         Average ICP messages per minute since start:    0.0
         Select loop called: 656944758 times, 1.820 ms avg
 Cache information for squid:
         Hits as % of all requests:      5min: 17.4%, 60min: 18.2%
         Hits as % of bytes sent:        5min: 45.6%, 60min: 39.9%
         Memory hits as % of hit requests:       5min: 86.1%, 60min: 88.9%
         Disk hits as % of hit requests: 5min: 0.0%, 60min: 0.0%
         Storage Swap size:      0 KB
         Storage Swap capacity:   0.0% used,  0.0% free
         Storage Mem size:       1622584 KB
         Storage Mem capacity:   100.0% used,  0.0% free

 Okay 1.6 GB of RAM used for caching HTTP objects. Fully used.

         Mean Object Size:       0.00 KB

 Problem #1. It *may* be Squid not accounting for the memory objects in the
 mean.

         Requests given to unlinkd:      0
 Median Service Times (seconds)  5 min    60 min:
         HTTP Requests (All):   0.01648  0.01235
         Cache Misses:          0.05046  0.04277
         Cache Hits:            0.00091  0.00091
         Near Hits:             0.01469  0.01745
         Not-Modified Replies:  0.0  0.00091
         DNS Lookups:           0.00190  0.00190
         ICP Queries:           0.0  0.0
 Resource usage for squid:
         UP Time:        1195497.286 seconds
         CPU Time:       22472.507 seconds
         CPU Usage:      1.88%
         CPU Usage, 5 minute avg:        5.38%
         CPU Usage, 60 minute avg:       5.44%
         Process Data Segment Size via sbrk(): 3145032 KB
         Maximum Resident Size: 0 KB
         Page faults with physical i/o

Re: [squid-users] Squid uses way too much RAM and starts swapping ...

2011-05-30 Thread guest01
ok, I can at least try to start it under valgrind, which I had never
heard of before. Do I just start squid under valgrind and send you the
logfile? Do you need any special options?
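
A typical invocation would look something like this (a sketch; squid's -N
keeps it in the foreground so valgrind can watch the worker process):

valgrind --leak-check=full --show-reachable=yes \
  /opt/squid/sbin/squid -N -f /etc/squid/squid.conf 2> /tmp/squid-valgrind.log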


On Mon, May 30, 2011 at 2:15 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 30/05/11 20:07, guest01 wrote:

 Hi,

 Any news on this topic? Unfortunately, RAM fills up within days,
 and at the moment our workaround is to do a reboot ... We would
 appreciate any other solution!

 thanks,
 peter


 :) I was just looking at this bug again today.

 :( still no seriously good ideas.

 Is there any way at all you can run one of these Squids under valgrind and
 get a report of what its memory is doing?
  Even if valgrind is just built into squid. The info report then gains a lot
 of extra memory stats and a leak report without having to stop squid running.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.12
  Beta testers wanted for 3.2.0.7 and 3.1.12.1



[squid-users] Squid uses way too much RAM and starts swapping ...

2011-05-11 Thread guest01
Hi,

I am currently using squid 3.1.12 as a forward proxy without
hard-disk caching (only RAM is used for caching). Each server is
running RHEL5.5 and is pretty strong (16 CPUs, 28GB RAM), but each
server starts swapping a few days after start. The workaround at the
moment is to reboot the server once a week, which I don't really like.
But swapping leads to serious side effects, e.g. performance troubles,
...

way too much swapping:
http://imageshack.us/m/52/6149/memoryday.png

I have already read a lot of posts and mails about similar problems, but
unfortunately I was not able to solve this one. I added the following
settings to my squid.conf file:
# cache specific settings
cache_replacement_policy heap LFUDA
cache_mem 1600 MB
memory_replacement_policy heap LFUDA
maximum_object_size_in_memory 2048 KB
memory_pools off
cache_swap_low 85
cache_swap_high 90

(There are four squid instances per server, which means that 1600*4 =
6400MB RAM used for caching, which is not even 1/4 of the total
available amount of RAM. Plenty enough, don't you think?)

The negative values (Memory usage for squid via mallinfo():) in the
output below are very strange. Maybe that is a reason for running
out of RAM?

HTTP/1.0 200 OK
Server: squid/3.1.12
Mime-Version: 1.0
Date: Wed, 11 May 2011 07:06:10 GMT
Content-Type: text/plain
Expires: Wed, 11 May 2011 07:06:10 GMT
Last-Modified: Wed, 11 May 2011 07:06:10 GMT
X-Cache: MISS from xlsqip03_1
Via: 1.0 xlsqip03_1 (squid/3.1.12)
Connection: close

Squid Object Cache: Version 3.1.12
Start Time: Wed, 27 Apr 2011 11:01:13 GMT
Current Time:   Wed, 11 May 2011 07:06:10 GMT
Connection information for squid:
Number of clients accessing cache:  1671
Number of HTTP requests received:   16144359
Number of ICP messages received:0
Number of ICP messages sent:0
Number of queued ICP replies:   0
Number of HTCP messages received:   0
Number of HTCP messages sent:   0
Request failure ratio:   0.00
Average HTTP requests per minute since start:   810.3
Average ICP messages per minute since start:0.0
Select loop called: 656944758 times, 1.820 ms avg
Cache information for squid:
Hits as % of all requests:  5min: 17.4%, 60min: 18.2%
Hits as % of bytes sent:5min: 45.6%, 60min: 39.9%
Memory hits as % of hit requests:   5min: 86.1%, 60min: 88.9%
Disk hits as % of hit requests: 5min: 0.0%, 60min: 0.0%
Storage Swap size:  0 KB
Storage Swap capacity:   0.0% used,  0.0% free
Storage Mem size:   1622584 KB
Storage Mem capacity:   100.0% used,  0.0% free
Mean Object Size:   0.00 KB
Requests given to unlinkd:  0
Median Service Times (seconds)  5 min    60 min:
HTTP Requests (All):   0.01648  0.01235
Cache Misses:  0.05046  0.04277
Cache Hits:0.00091  0.00091
Near Hits: 0.01469  0.01745
Not-Modified Replies:  0.0  0.00091
DNS Lookups:   0.00190  0.00190
ICP Queries:   0.0  0.0
Resource usage for squid:
UP Time:1195497.286 seconds
CPU Time:   22472.507 seconds
CPU Usage:  1.88%
CPU Usage, 5 minute avg:5.38%
CPU Usage, 60 minute avg:   5.44%
Process Data Segment Size via sbrk(): 3145032 KB
Maximum Resident Size: 0 KB
Page faults with physical i/o: 8634
Memory usage for squid via mallinfo():
Total space in arena:  -1049140 KB
Ordinary blocks:   -1277813 KB  87831 blks
Small blocks:   0 KB  0 blks
Holding blocks:  2240 KB  5 blks
Free Small blocks:  0 KB
Free Ordinary blocks:  228673 KB
Total in use:  -1275574 KB 122%
Total free:228674 KB -22%
Total size:-1046900 KB
Memory accounted for:
Total accounted:   -1375357 KB 131%
memPool accounted: 2818947 KB -269%
memPool unaccounted:   -3865847 KB 0%
memPoolAlloc calls:   111
memPoolFree calls:  8322084644
File descriptor usage for squid:
Maximum number of file descriptors:   1024
Largest file desc currently in use:563
Number of file desc currently in use:  472
Files queued for open:   0
Available number of file descriptors:  552
Reserved number of file descriptors:   100
Store Disk files open:   0
Internal Data Structures:
 96996 StoreEntries
 96996 StoreEntries with MemObjects
 96980 Hot Object Cache Items
 0 on-disk objects

Has anyone experienced similar things or does even know a solution?



Thank you and best regards!
Peter


Re: [squid-users] Re: kerberos authentication - performance tuning

2011-02-17 Thread guest01
ok, that does not sound good, but I expected something like that, even
though in theory more CPUs should be able to handle more
work/authentication processes.

We don't really care about caching; we are basically only interested
in antivirus and category blocking based on username/group (achieved
with an ICAP server), which only works with some kind of
authentication (IP-based policy assignment cannot be handled
properly).

At the moment, we have 30 kerberos helpers responsible for approx 2000
users (66 users per helper), and not all of them will be used
extensively.

Maybe there is something wrong in our setup. Do any of you have
experience with, or even numbers for, how many kerberos authentications
a recent squid version can handle on today's hardware (let's say a
multi-core CPU and lots of RAM) with average user behavior? How big are
the biggest squid deployments (as forward proxy with authentication)?

btw, I see the following messages in my log files, but in my opinion
they are NTLM-related.
 - samba Begin 
 **Unmatched Entries**
 libads/cldap.c:recv_cldap_netlogon(219)  no reply received to cldap
netlogon : 3771 Time(s)
 libads/ldap_utils.c:ads_do_search_retry_internal(115)  ads reopen
failed after error Referral : 1 Time(s)
 libsmb/clientgen.c:cli_rpc_pipe_close(386)  cli_rpc_pipe_close:
cli_close failed on pipe \NETLOGON, fnum 0x4008 to machine DC1.  Error
was SUCCESS - 0 : 609 Time(s)
 libsmb/clientgen.c:cli_rpc_pipe_close(386)  cli_rpc_pipe_close:
cli_close failed on pipe \NETLOGON, fnum 0x400b to machine dc1.fqdn.
Error was SUCCESS - 0 : 36 Time(s)
 libsmb/clientgen.c:cli_rpc_pipe_close(386)  cli_rpc_pipe_close:
cli_close failed on pipe \lsarpc, fnum 0x4009 to machine DC1.  Error
was SUCCESS - 0 : 609 Time(s)
 libsmb/credentials.c:creds_client_check(324)  creds_client_check:
credentials check failed. : 3923 Time(s)
 nsswitch/winbindd_group.c:winbindd_getgrnam(519)  group prod360 in
domain OUR_DOMAIN_HERE does not exist : 27 Time(s)
 rpc_client/cli_netlogon.c:rpccli_netlogon_sam_network_logon(1030)
rpccli_netlogon_sam_network_logon: credentials chain check failed :
3923 Time(s)
 -- samba End -



On Wed, Feb 16, 2011 at 10:32 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On Wed, 16 Feb 2011 13:28:29 +0100, guest01 wrote:

 Hi,

 We had to bypass the kerberos authentication for now (most of the
 users will be authenticated by IP; there are already more than 1
 unique IPs in my Squid logs). iirc, disabling the replay cache did not
 help much. There is a load avg of 0.4 right now (authenticating about
 9000 users by IP and 1000 with Kerberos) with approx 450 RPS (2
 strong servers), which looks pretty good.

 What do you think? Can the SMP functionality of Squid 3.2 reduce our load
 problem significantly? At the moment, we have multiple independent
 squid processes per server (4 squid instances, 16 cpus), but I don't
 see any way (except adding more hardware) to authenticate 1 with
 Kerberos.

 SMP will help with the management of those 4 instances on each machine,
 dropping it to one config file they all work from, one SNMP contact port,
 one cachemgr contact port, etc.
 But I think total load, helper process count and cache duplication problems
 will remain unchanged with the current SMP capabilities.

 Amos




Re: [squid-users] Re: kerberos authentication - performance tuning

2011-02-16 Thread guest01
Hi,

We had to bypass the kerberos authentication for now (most of the
users will be authenticated by IP; there are already more than 1
unique IPs in my Squid logs). iirc, disabling the replay cache did not
help much. There is a load avg of 0.4 right now (authenticating about
9000 users by IP and 1000 with Kerberos) with approx 450 RPS (2
strong servers), which looks pretty good.

What do you think? Can the SMP functionality of Squid 3.2 reduce our load
problem significantly? At the moment, we have multiple independent
squid processes per server (4 squid instances, 16 cpus), but I don't
see any way (except adding more hardware) to authenticate 1 with
Kerberos.

regards


On Sat, Feb 12, 2011 at 2:09 PM, Markus Moeller hua...@moeller.plus.com wrote:
 Hi Peter

 Nick Cairncross nick.cairncr...@condenast.co.uk wrote in message
 news:c9782338.5940f%nick.cairncr...@condenast.co.uk...
 On 09/02/2011 09:34, guest01 gues...@gmail.com wrote:

 Hi,

 We are currently using Squid 3.1.10 on RHEL5.5 and Kerberos
 authentication for most of our clients (authorization with an icap
 server). At the moment, we are serving approx 8000 users with two
 servers. Unfortunately, we have performance troubles with our Kerberos
 authentication. Load values are way too high ...

 10:19:58 up 16:14,  2 users,  load average: 23.03, 32.37, 25.01
 10:19:59 up 15:37,  2 users,  load average: 58.97, 57.92, 47.73

 Peak values have been 70 for the 5min interval. At the moment, there
 are approx 400 hits/second (200 per server). We already disabled
 caching on harddisk. Avg service time for Kerberos is up to 2500ms
 (which is quite long).

 Our kerberos configuration looks pretty simple:
 #KERBEROS
 auth_param negotiate program
 /opt/squid/libexec/negotiate_kerberos_auth -s HTTP/fqdn -r
 auth_param negotiate children 30
 auth_param negotiate keep_alive on

 Is there any way to do further caching or something like that?

 For testing purposes, we authenticated a certain subnet by IP, and load
 values decreased to 1. (Unfortunately, this is not possible in general,
 because every user gets a policy assigned by username.)

 Any ideas, anyone? Are there any kerberos-related benchmarks available
 (I could not find any)? Maybe this issue is not a problem, just a
 limitation, and we have to add more servers?

 Thanks!

 best regards
 Peter

 Peter,

 I have pretty much the same setup as you - just 3.1.8, though only 700
 users.

 Have you disabled the replay cache:
 http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos
 But beware of a memory leak (depending on your libs of course):

 http://squid-web-proxy-cache.1019090.n4.nabble.com/Intermittent-SquidKerbAu
 th-Cannot-allocate-memory-td3179036.html. I have a call outstanding with
 RH at the moment.


 Could you try disabling the replay cache ? Did it improve the load ?
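
 What that amounts to, per the wiki page linked above, is setting variables
 in the environment squid starts under, e.g. in the init script (a sketch,
 with a placeholder keytab path):

 export KRB5_KTNAME=/etc/squid/HTTP.keytab
 export KRB5RCACHETYPE=none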

 Are your rules repeatedly requesting authentication unnecessarily when it's
 already been done? Amos was very helpful when advising on this (search for
 the post..)

 8000 users.. Only 30 helpers? What does cachemgr say about the negotiate
 helper stats, timings/sec etc.?
 Is your krb5.conf using the nearest kdc in its own site etc?


 The kdc is only important for the client. The server (squid) never talks to
 the kdc.

 Some load testers out there incorporate Kerberos load testing.

 Just my thoughts..

 Nick











[squid-users] squid 3.2.0.5 - keeps reloading itself when using kerberos or ntlm authentication

2011-02-14 Thread guest01
Hi guys,

For testing purposes I tried the squid 3.2.0.5 beta. After a couple of
smaller issues I ran into a bigger one, which I will share with you :-)

I compiled squid 3.2.0.5 beta on RHEL5.5 64Bit with following options:
Squid Cache: Version 3.2.0.5
configure options:  '--enable-ssl' '--enable-icap-client'
'--sysconfdir=/etc/squid' '--enable-async-io' '--enable-snmp'
'--enable-poll' '--with-maxfd=32768' '--enable-storeio=aufs'
'--enable-removal-policies=heap,lru' '--enable-epoll'
'--disable-ident-lookups' '--enable-truncate'
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid'
'--with-default-user=squid' '--prefix=/opt/squid'
'-enable-negotiate-auth-helpers=squid_kerb_auth'
--enable-ltdl-convenience

Everything is working so far except kerberos and ntlm authentication.
Ldap authentication is working without problems. If I configure either
kerberos or ntlm, the following happens:

2011/02/14 14:09:20 kid1| Starting Squid Cache version 3.2.0.5 for
x86_64-unknown-linux-gnu...
2011/02/14 14:09:20 kid1| Process ID 3923
2011/02/14 14:09:20 kid1| With 16384 file descriptors available
2011/02/14 14:09:20 kid1| Initializing IP Cache...
2011/02/14 14:09:20 kid1| DNS Socket created at 0.0.0.0, FD 9
2011/02/14 14:09:20 kid1| Adding domain domain.tld from /etc/resolv.conf
2011/02/14 14:09:20 kid1| Adding domain domain.tld from /etc/resolv.conf
2011/02/14 14:09:20 kid1| Adding nameserver 10.14.32.54 from /etc/resolv.conf
2011/02/14 14:09:20 kid1| Adding nameserver 10.14.32.122 from /etc/resolv.conf
2011/02/14 14:09:20 kid1| helperOpenServers: Starting 0/20
'negotiate_kerberos_auth' processes
2011/02/14 14:09:20 kid1| helperStatefulOpenServers: No
'negotiate_kerberos_auth' processes needed.
2011/02/14 14:09:20 kid1| Logfile: opening log
/var/log/squid/access_xlsqit01_1.log
2011/02/14 14:09:20 kid1| Unlinkd pipe opened on FD 14
2011/02/14 14:09:20 kid1| Store logging disabled
2011/02/14 14:09:20 kid1| Swap maxSize 0 + 786432 KB, estimated 60494 objects
2011/02/14 14:09:20 kid1| Target number of buckets: 3024
2011/02/14 14:09:20 kid1| Using 8192 Store buckets
2011/02/14 14:09:20 kid1| Max Mem  size: 786432 KB
2011/02/14 14:09:20 kid1| Max Swap size: 0 KB
2011/02/14 14:09:20 kid1| Using Least Load store dir selection
2011/02/14 14:09:20 kid1| Set Current Directory to /cache/squid/xlsqit01_1
2011/02/14 14:09:20 kid1| Loaded Icons.
2011/02/14 14:09:20 kid1| HTCP Disabled.
2011/02/14 14:09:20 kid1| Squid plugin modules loaded: 0
2011/02/14 14:09:20 kid1| Adaptation support is on
2011/02/14 14:09:20 kid1| Ready to serve requests.
2011/02/14 14:09:20 kid1| Accepting bumpy HTTP Socket connections at
FD 15 on 10.122.125.2:3128
2011/02/14 14:09:20 kid1| Accepting intercepted HTTP Socket connections
at FD 16 on 10.122.125.2:3129
2011/02/14 14:09:20 kid1| Accepting intercepted HTTPS Socket
connections at FD 17 on 10.122.125.2:3130
2011/02/14 14:09:20 kid1| Accepting SNMP messages on 10.122.125.2:161, FD 18.
2011/02/14 14:09:20 kid1| Outgoing SNMP messages on 10.122.125.2:161, FD 19.
2011/02/14 14:09:21 kid1| storeLateRelease: released 0 objects
2011/02/14 14:09:35 kid1| Starting new negotiate authenticator helpers...
2011/02/14 14:09:35 kid1| helperOpenServers: Starting 1/20
'negotiate_kerberos_auth' processes
2011/02/14 14:09:35| negotiate_kerberos_auth: INFO: User THATSME authenticated
2011/02/14 14:09:35 kid1| Starting new negotiate authenticator helpers...
2011/02/14 14:09:35 kid1| helperOpenServers: Starting 1/20
'negotiate_kerberos_auth' processes
2011/02/14 14:09:35 kid1| Starting new negotiate authenticator helpers...
2011/02/14 14:09:35 kid1| helperOpenServers: Starting 1/20
'negotiate_kerberos_auth' processes
2011/02/14 14:09:35| negotiate_kerberos_auth: INFO: User THATSME authenticated
2011/02/14 14:09:35 kid1| assertion failed: User.cc:103:
"from->RefCountCount() == 2"
2011/02/14 14:09:35| negotiate_kerberos_auth: INFO: User THATSME authenticated
2011/02/14 14:09:35| negotiate_kerberos_auth: INFO: User THATSME authenticated
2011/02/14 14:09:38 kid1| Starting Squid Cache version 3.2.0.5 for
x86_64-unknown-linux-gnu...
2011/02/14 14:09:38 kid1| Process ID 3969
2011/02/14 14:09:38 kid1| With 16384 file descriptors available
2011/02/14 14:09:38 kid1| Initializing IP Cache...
2011/02/14 14:09:38 kid1| DNS Socket created at 0.0.0.0, FD 9
2011/02/14 14:09:38 kid1| Adding domain domain.tld from /etc/resolv.conf
2011/02/14 14:09:38 kid1| Adding domain domain.tld from /etc/resolv.conf
2011/02/14 14:09:38 kid1| Adding nameserver 10.14.32.54 from /etc/resolv.conf
2011/02/14 14:09:38 kid1| Adding nameserver 10.14.32.122 from /etc/resolv.conf
2011/02/14 14:09:38 kid1| helperOpenServers: Starting 0/20
'negotiate_kerberos_auth' processes
2011/02/14 14:09:38 kid1| helperStatefulOpenServers: No
'negotiate_kerberos_auth' processes needed.
2011/02/14 14:09:38 kid1| Logfile: opening log
/var/log/squid/access_xlsqit01_1.log
2011/02/14 14:09:39 kid1| Unlinkd pipe opened on FD 14
2011/02/14 14:09:39 kid1| Store logging disabled
2011/02/14 

[squid-users] kerberos authentication - performance tuning

2011-02-09 Thread guest01
Hi,

We are currently using Squid 3.1.10 on RHEL5.5 and Kerberos
authentication for most of our clients (authorization with an icap
server). At the moment, we are serving approx 8000 users with two
servers. Unfortunately, we have performance troubles with our Kerberos
authentication. Load values are way too high ...

10:19:58 up 16:14,  2 users,  load average: 23.03, 32.37, 25.01
10:19:59 up 15:37,  2 users,  load average: 58.97, 57.92, 47.73

Peak values have been 70 for the 5min interval. At the moment, there
are approx 400 hits/second (200 per server). We already disabled
caching on harddisk. Avg service time for Kerberos is up to 2500ms
(which is quite long).

Our kerberos configuration looks pretty simple:
#KERBEROS
auth_param negotiate program
/opt/squid/libexec/negotiate_kerberos_auth -s HTTP/fqdn -r
auth_param negotiate children 30
auth_param negotiate keep_alive on

Is there any way to do further caching or something like that?

For testing purposes, we authenticated a certain subnet by IP, and load
values decreased to 1. (Unfortunately, this is not possible in general,
because every user gets a policy assigned by username.)

Any ideas, anyone? Are there any kerberos-related benchmarks available
(I could not find any)? Maybe this issue is not a problem, just a
limitation, and we have to add more servers?

Thanks!

best regards
Peter


[squid-users] multiple squid 3.1.10 instances - independent vs frontend/backend

2011-01-07 Thread guest01
Hi guys,

I am using a couple of squid instances per server (Squid 3.1.10, RHEL
5.5, lots of RAM) and was wondering which would be the better
configuration. Better in this case means more performance.

Either
1) - 4 separate squid instances, each with its own completely
independent cache? (no cache hierarchy or something like that)
or
2) - a configuration similar to [1] with frontend and backend instances?

Does anybody have experience with either of these two configurations?
We are currently using configuration #1 with 20 RPS per instance and
load values of up to 8. That is not much, but we are authenticating with
Kerberos/NTLM/LDAP and even content-filtering via ICAP. Load is pretty
high, but since we are using 16 CPUs per server, it is OK up to
16. CPU usage stays pretty low (20%), and disk IO is not an issue
either (high IO idle values, very low IO wait values), so I am wondering
why the load is that high.
What do you think: could configuration #2 improve performance (testing
it would be best, but that is not easy with a live system)?
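
For reference, the rough shape of option #2 from [1] (a sketch, not the
full wiki config; ports are placeholders): a non-caching frontend fans
requests out to caching backends on the loopback interface:

# frontend.conf - talks to the clients, caches nothing itself
http_port 3128
cache deny all
never_direct allow all
cache_peer 127.0.0.1 parent 4001 0 carp no-query login=PASS name=backend1
cache_peer 127.0.0.1 parent 4002 0 carp no-query login=PASS name=backend2

# backend1.conf (backend2.conf analogous, on port 4002)
http_port 127.0.0.1:4001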

regards
Peter

[1] http://wiki.squid-cache.org/ConfigExamples/MultiCpuSystem
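
A minimal sketch of what I understand configuration #2 to be, loosely
in the spirit of [1] (the ports, names and CARP peering are my own
illustrative assumptions, not a tested setup):

# frontend instance: terminates client connections, no cache of its own
http_port 3128
cache deny all
cache_peer 127.0.0.1 parent 4001 0 carp no-query no-digest name=backend1
cache_peer 127.0.0.1 parent 4002 0 carp no-query no-digest name=backend2
never_direct allow all

# backend instance (one of several, each on its own port): does the caching
http_port 127.0.0.1:4001
cache_mem 1600 MB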


[squid-users] tcp_miss/502 when surfing with IE6 to http://www.raiffeiseninformatik.at

2010-12-14 Thread guest01
Hi,

We are currently having a strange problem with our Squids (3.1.9, on
RHEL 5.5 64Bit).

When we surf to the site http://www.raiffeiseninformatik.at with IE6,
we get an ERR_ZERO_SIZE_OBJECT error message. With every other browser
(tested FF 3.6.12 and Chrome 8) it works without issues.

access.log-entries:
IE:
1292324614.977   1115 14/Dec/2010:11:03:34 10.14.16.71 TCP_MISS/502
1676 GET http://www.raiffeiseninformatik.at/ USERNAME
DIRECT/217.13.189.96 text/html ICAP-X-Attribute:it
FF:
1292327617.614   1275 14/Dec/2010:11:53:37 10.14.16.71 TCP_MISS/200
16761 GET http://www.raiffeiseninformatik.at/ USERNAME
DIRECT/217.13.189.96 text/html ICAP-X-Attribute:it

Does anyone have any ideas why it does not work? I compared both HTTP
requests (you can find them below); there are some differences, but are
they causing the problem? Maybe it is a strange IE6 client policy
option?
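
A generic workaround I have seen mentioned for zero-size-object errors
that hit only one browser (just a guess on my part, not verified
against this site) is to stop reusing server-side persistent
connections:

detect_broken_pconn on
# or, more drastically:
server_persistent_connections off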

best regards




IE:
GET http://www.raiffeiseninformatik.at/footer-navigation/kontakt/ HTTP/1.1
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg,
application/x-shockwave-flash, application/x-ms-application,
application/x-ms-xbap, application/vnd.ms-xpsdocument,
application/xaml+xml, application/vnd.ms-excel,
application/vnd.ms-powerpoint, application/msword, */*
Accept-Language: de
UA-CPU: x86
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET
CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR
3.5.21022; InfoPath.1)
Host: www.raiffeiseninformatik.at
Proxy-Connection: Keep-Alive
Cookie: fe_typo_user=6997755f5d8b2210950b456cf5bc1d6b
Proxy-Authorization: Negotiate

Re: [squid-users] https to http translation

2010-12-12 Thread guest01
Maybe not exactly what you are looking for, but have you thought of
using IPsec? You could deploy IPsec and encrypt every connection from
your clients to the proxy.
I don't know exactly what you are trying to achieve, but if your
objective is to encrypt connections from the clients to the proxy,
IPsec would be perfectly transparent and scalable.
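
For completeness, Squid itself can also terminate SSL from the browser
on an https_port; a minimal sketch, assuming you supply your own
certificate (the paths are illustrative, and browser support for
SSL-to-proxy is the usual sticking point):

https_port 3129 cert=/etc/squid/proxy-cert.pem key=/etc/squid/proxy-key.pem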

On Sunday, December 12, 2010, purgat purga...@gmail.com wrote:
 Hi
 I have seen similar discussions in the list in the past but none exactly
 answers my question.
 This is the setup I am looking for:
 a server somewhere out there runs one or more instances of squid.
 user at home sets up the browser to use the proxy.
 whenever the user puts an address in their browser address bar, the request
 is encrypted with ssl and sent to squid. Instances (if more than one is
 necessary) of squid then request the page through normal http from the
 Internet and send the response through ssl back to the client.
 Unfortunately the answers I have seen to this question in the past seem
 to ignore the fact that the user may want to use different websites. I
 don't want just a couple of addresses to be accelerated by squid and
 sent through ssl. What I am looking for is not a normal reverse proxy
 glorified with ssl. Unfortunately there is no example of such a setup in
 the wiki, though I know a lot of people would want this setup for
 securing data in their insecure local network. The explanations on the
 web about how to set this up fall short of explaining a lot of things
 about an already complex matter.
 Is Squid able to help me with this?
 By the way... ssh tunnelling is not an option for me.

 Regards
 purgat




[squid-users] bug 3106 - no username in respmod request when icap_log is enabled

2010-11-18 Thread guest01
Hi guys,

This is basically just an FYI mail; I just filed the following bug report:
http://bugs.squid-cache.org/show_bug.cgi?id=3106

A very brief summary: if I enable icap_log (see the snippet below for
the directive in question), the X-Authenticated-User attribute in the
ICAP header is missing from RESPMOD requests.
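
For reference, enabling it looks like this (the log path is
illustrative):

icap_log /var/log/squid/icap.log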

I am curious: has anybody ever run into this kind of problem?

best regards
Peter


[squid-users] strip domain/realm from icap header username

2010-11-12 Thread guest01
Hi,

We are using squid 3.1.8 (on RHEL5.5 64Bit) as authentication/caching
forward proxy and an ICAP server for authorization and content
filtering.

At the moment, most of the users are authenticated by NTLM (we are
planning for Kerberos) and the username is sent to our ICAP server
which does an LDAP lookup. This setup works pretty well for our
default domain. If a user from a different, trusted domain is
authenticated by NTLM, the username sent to the ICAP server
looks like:
DOMAIN+USERNAME

The ICAP server cannot handle that during the LDAP lookup; the domain
part has to be removed. I know that I can do that with Kerberos (there
is an -r option in the negotiate_kerberos_auth helper, at least in the
3.2.x branch), but at the moment I don't have that option for NTLM.
Does anyone have any ideas how to solve that easily? (I know that
FreeRADIUS strips off the domain itself, which is why I am guessing
that ntlm_auth cannot do it.)

Our plan is to upgrade to Kerberos and get rid of the problem, but if
trouble occurs, we have to find a way to solve it while still using
NTLM. The easiest way I have found is to modify ModXact.cc and rewrite
the ICAP header username there, i.e. remove the domain part if there is
one. But that would cause maintenance trouble after upgrades (we must
not forget to patch this file every time). A rough sketch of a less
invasive wrapper idea follows below.
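
The wrapper idea, as a rough and untested sketch (it assumes '+' is
the separator and that GNU sed's -u flag is available for unbuffered
output):

#!/bin/sh
# pass squid's traffic through ntlm_auth, but rewrite successful
# replies of the form "AF DOMAIN+USER" to "AF USER"
/usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp | \
  sed -u 's/^AF [^+ ]*+/AF /'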

I don't think this is a common problem (NTLM with multiple domains and
ICAP); if I am wrong, it may be a possible feature request, e.g. a new
squid.conf option which removes the domain part when enabled, plus an
option for specifying the separator (most likely a '+').

best regards
Peter


[squid-users] got NTLMSSP command 3, expected 1

2010-10-04 Thread guest01
Hi guys,

At first I have to apologize for starting a new thread concerning this message:

[2010/10/01 12:29:45, 1] libsmb/ntlmssp.c:ntlmssp_update(334)
got NTLMSSP command 3, expected 1

I know that it has been discussed previously and I read almost all of
the answers, but I did not find any solution. Maybe I missed an
acceptable answer, or maybe there is new information on this topic?

Anyway, I am using Squid 3.1.8 on RHEL5.5 with NTLM authentication
(the server is joined to an AD2003 domain) and this message appears in
my cache.log file multiple times, at arbitrary times. I don't really
know why, or how to prevent it; a few posts said that it is a client
issue or that we could use authenticate_ip_shortcircuit_ttl on Squid
(3.x). My browsers are IE, FF and Chrome on WinXP SP3. Unfortunately, I
don't know which client causes the problem, and neither do I know any
way to prevent it from occurring. Does anybody have any ideas?
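
One knob I have seen suggested for confused NTLM handshakes (untested
on our side, so treat it as a guess):

auth_param ntlm keep_alive off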

(I could switch to Kerberos, which might solve the problem. Even
though it is a more secure and generally better solution, I would
prefer a different fix for now.)


thanks
best regards
Peter


Re: [squid-users] Re: Native Kerberos (squid_kerb_auth) with LDAP-Fallback (squid_ldap_auth)

2010-09-17 Thread guest01
Hi,

I am stuck with a similar problem; was there ever a solution for this?
(Btw, I am running Squid 3.1.8 on RHEL5.5.)

We are trying to achieve the following:
CompanyA (us): own Active Directory domain; we are hosting the
Squid server (central forward proxy for internet access with ICAP
capabilities)
CompanyB: completely independent Active Directory domain
(CompanyC: might use our squid soon)
(CompanyD: might use our squid soon)

We have one shared squid server which should authenticate CompanyA
with NTLM (or Kerberos) and CompanyB with LDAP (they insist on LDAP; I
don't know why, but I suppose that without a domain trust I could
authenticate only one company with NTLM or Kerberos anyway, right?).
NTLM is the preferred authentication method, so when a client of
CompanyA wants to look up something on the Internet, it is
authenticated with NTLM.
When CompanyB wants to look up something, the browser submits NTLM
data that are valid for their domain but not for ours. In theory, the
browser should then fall back to Basic authentication (e.g. LDAP), but
that does not happen: it keeps trying NTLM (Firefox as well as IE8 on
Windows 7). For further info, see [1] and [2].

Unfortunately, I don't have many options:
- disable NTLM authentication in IE8 for CompanyB; then IE only
tries LDAP, which works
- authenticate CompanyA by IP and disable NTLM authentication (= our
current setup)

Of course it would be possible to authenticate everybody by LDAP (we
are using an OpenLDAP metadirectory which talks to the ADs), but that
is only Basic auth and a very bad idea.

Does anybody have any additional ideas? How do you guys handle
authentication for multiple independent customers?

In my opinion, this is a client problem; unfortunately IE and even FF
are too dumb here. From a functional perspective, it should be
standard to try the weaker (LDAP) authentication if the stronger
(NTLM) one does not work (from a security perspective, I am glad that
this does not seem to happen ;-)). Is there any option for squid to
track authentication and only offer Basic authentication if NTLM
failed [3]? Or anything similar? (A crude per-customer workaround is
sketched below.)
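
The crudest workaround I can think of, sketched here with made-up
ports and otherwise untested: run two squid instances with separate
configs, so each customer only ever sees one authentication scheme:

# squid-a.conf, for CompanyA (NTLM only)
http_port 3128
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp

# squid-b.conf, for CompanyB (Basic/LDAP only)
http_port 3129
# auth_param basic program ... (same LDAP helper options as in [2] below)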

I would appreciate any response!
best regards
Peter

additional infos:
[1] http://img830.imageshack.us/img830/3920/squidntlmnotworking.png

[2] squid config:
#NTLM
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 5
auth_param ntlm keep_alive on

# LDAP authentication
auth_param basic children 5
auth_param basic realm Proxy
auth_param basic credentialsttl 120 minute
auth_param basic program /opt/squid/libexec/squid_ldap_auth -b dc=squid-proxy -D uid=user -w passwd -h server -f (uid=%s)

[3] Tcpdump shows me the headers with the following info (squid
offers NTLM and Basic):
GET http://fxfeeds.mozilla.com/en-US/firefox/headlines.xml HTTP/1.1
Host: fxfeeds.mozilla.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1)
Gecko/20090624 Firefox/3.5
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Proxy-Connection: keep-alive
X-Moz: livebookmarks

HTTP/1.0 407 Proxy Authentication Required
Server: squid/3.1.8
Mime-Version: 1.0
Date: Fri, 17 Sep 2010 10:09:12 GMT
Content-Type: text/html
Content-Length: 1482
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
Vary: Accept-Language
Content-Language: en-us
Proxy-Authenticate: NTLM
Proxy-Authenticate: Basic realm="Proxy"
X-Cache: MISS from xlsqip02_1
Via: 1.0 xlsqip02_1 (squid/3.1.8)
Connection: keep-alive


On Fri, Aug 13, 2010 at 4:01 PM, Tom Tux tomtu...@gmail.com wrote:
 Hi

 I ran squid with the named debug options. The cache.log output seems
 a little bit complicated. So the only way I see is to keep a
 commented-out native ldap-authentication configuration, which I can
 enable if the kerberos mechanism fails.

 Or does somebody have such a config (kerberos with squid_kerb_ldap to
 get AD groups AND squid_ldap_auth with a memberOf filter) running?

 Thanks a lot.
 Regards,
 Tom

 2010/8/11 Amos Jeffries squ...@treenet.co.nz:
 Tom Tux wrote:

 Hi Amos

 Thanks a lot for this explanation. Both configurations seperately -
 native kerberos and native ldap - are working fine. But in
 combination, there is still one problem.

 Here is my actual configuration (combined two mechanism) again:

 auth_param negotiate program /usr/local/squid/libexec/squid_kerb_auth -i
 auth_param negotiate children 50
 auth_param negotiate keep_alive on
 external_acl_type SQUID_KERB_LDAP ttl=3600 negative_ttl=3600 %LOGIN
 /usr/local/squid_kerb_ldap/bin/squid_kerb_ldap -d -g InternetUsers
 acl INTERNET_ACCESS external SQUID_KERB_LDAP

 external_acl_type SQUID_DENY_KERB_LDAP ttl=3600 negative_ttl=3600
 %LOGIN /usr/local/squid_kerb_ldap/bin/squid_kerb_ldap -d -g
 DenyInternetUsers
 acl DENY_INTERNET_ACCESS external SQUID_DENY_KERB_LDAP

 # LDAP-Fallback
 auth_param basic 

[squid-users] icap loadbalancing

2010-06-25 Thread guest01
Hi guys,

I am currently implementing a proxy solution with squid (3.1.4) as
caching/authentication proxy and a McAfee Webwasher as content filter.
Besides some minor issues, it seems to work. Unfortunately, we have
far too much traffic for one squid and one Webwasher to handle (we
assume; we don't have real performance values yet). That is why we
have to use a loadbalancer which will distribute the traffic (round
robin or whatever; we could even use DNS) to the squids. That is not
an issue either; the bigger problem is that I can specify only ONE
icap server per squid, right?
So I only see two possibilities:
1) use the same count of squids and webwasher with a 1:1 mapping from
ONE squid to exactly ONE webwasher
2) use a loadbalancer in front of the webwashers and configure a VIP
address to which the squid will talk to

Number 1 is not really an option, because we currently have more
Webwashers than squids.
Number 2 is the preferred solution, but I am not sure whether ICAP is
really loadbalanceable. There are a couple of problems we might run
into. The most important question: is it necessary that both requests
(REQMOD and RESPMOD) are sent to the same ICAP server? If not, then
everything should be fine with the loadbalancing. I am also not sure
about downloads which will be checked by the icap server.

Anyway, my final question: is ICAP loadbalanceable? Will there be
problems (with downloading files, etc.) if requests are distributed
across different Webwashers?

Thanks, best regards
John


[squid-users] Squid 3.1.1 ICAP Issue

2010-04-13 Thread guest01
Hi guys,

I may have found a bug related to the ICAP capabilities of Squid 3.1.1
(on RHEL5.4). We are currently evaluating the squid deployment
referenced by this URL [1].

We want to use Squid as a caching/authentication proxy and ICAP
client, which talks to the Webwasher server (a content-filtering
proxy) via ICAP. Our Squid has the following ICAP configuration:

#icap
icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_preview_enable on
icap_preview_size 30
icap_client_username_encode on
icap_client_username_header X-Authenticated-User
icap_service service_req reqmod_precache bypass=0 icap://yyy.yyy.yyy.21:1344/wwreqmod
adaptation_access service_req allow all
icap_service service_resp respmod_precache bypass=0 icap://yyy.yyy.yyy.21:1344/wwrespmod
adaptation_access service_resp allow all

(Unfortunately, we can only specify one ICAP server per Squid, but
that's another issue/limitation.)
This deployment is supported by McAfee (Webwasher); there is even an
example configuration [2] for squid, and McAfee provides documents for
configuring the Webwasher.

The ICAP REQMOD exchange looks good, everything is as expected:
Host: yyy.yyy.yyy.21:1344
Date: Mon, 12 Apr 2010 10:54:27 GMT
Proxy-Authorization: NTLM BASE64 STRING
Encapsulated: req-hdr=0, null-body=184
Preview: 0
Allow: 204
X-Client-IP: bbb.bbb.bbb.71
X-Authenticated-User: BASE64 encoded USERNAME

GET / HTTP/1.1
Host: www.playboy.com
Accept: text/html, text/plain

ICAP/1.0 200 OK
Encapsulated: res-hdr=0, res-body=170
ISTAG: 001-000-03
X-Attribute: sx
X-ICAP-Profile: PoC_Policy_TEST
X-WWBlockResult: 10
X-WWRepScore: 11

HTTP/1.1 403 Forbidden
Content-Length: 1480
Content-Type: text/html; charset=ISO-8859-1
Pragma: no-cache
Proxy-Connection: close
X-Error-Name: requestdynablocked

In that case we were assigned to PoC_Policy_TEST and the request to
www.playboy.com was blocked. (It seems that we are not supposed to see
nice girls at work ;-))

If we browse to a page which is not blocked (e.g. google.com), we get
the following response:
ICAP/1.0 200 OK
Encapsulated: res-hdr=0, res-body=166
ISTAG: 001-000-03
X-ICAP-Profile: default
X-WWBlockResult: 81
X-WWRepScore: 0

HTTP/1.1 403 Forbidden
Content-Length: 1279
Content-Type: text/html; charset=ISO-8859-1
Pragma: no-cache
Proxy-Connection: close
X-Error-Name: authorizedonly

And there is the problem: the ICAP RESPMOD request (under 3.1.1) does
NOT contain the X-Client-IP: bbb.bbb.bbb.71 and X-Authenticated-User:
BASE64-encoded USERNAME headers, and that is why the Webwasher cannot
assign the right policy!

If we use squid 3.0, it works. So in my opinion this sounds like a
bug, right? Does anybody have experience with ICAP, or with ICAP
issues in Squid 3.1.1?

Anyway, besides that problem, which I worked around by using squid
3.0, there are a couple of other limitations which I don't really want
to code around myself, but I don't see any other choice, do you? ;-)
At least it does not sound very complicated to implement in C++ ...
- the possibility to specify more than one ICAP server in a Squid
configuration (for example with round-robin or any other kind of
loadbalancing; a rough sketch follows this list)
- the much bigger issue: Squid as ICAP client does NOT send any group
information to the ICAP server, only the X-Client-IP and
X-Authenticated-User values and no X-Authenticated-Groups attribute.
Unfortunately, a policy is assigned by group membership; that is why
the ICAP server needs this information, and it is not a good idea to
have the ICAP server look up the user again (a huge performance
issue).
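
On the first point, newer Squid releases are supposed to support
failover between equivalent ICAP services via adaptation_service_set;
a sketch under that assumption (the second server address is
invented):

icap_service ww_req1 reqmod_precache bypass=0 icap://yyy.yyy.yyy.21:1344/wwreqmod
icap_service ww_req2 reqmod_precache bypass=0 icap://yyy.yyy.yyy.22:1344/wwreqmod
adaptation_service_set ww_req ww_req1 ww_req2
adaptation_access ww_req allow all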

So, this is quite a long mail, I would appreciate any feedback.

best regards
Peter

[1] http://img714.imageshack.us/img714/7457/topology.png
[2] http://wiki.squid-cache.org/ConfigExamples/Webwasher (they are
using a proxy chain instead of a icap solution)


[squid-users] squid deployment

2010-03-29 Thread guest01
Hi guys,

We want to replace our current proxy solution (a crappy commercial
product which is way too expensive) and are thinking about Squid,
which is a great product. I have already found a couple of example
configurations, mostly for reverse proxying. What we are looking for
is a caching- and authentication-only (LDAP and NTLM) solution with
content filtering via ICAP. We have the following configuration in
mind (firewalls omitted):

Clients
 |
 |
 v
Loadbalancer
 |
 |
 v
Squid-Proxies     ICAP-Server
 |
 |
 v
INTERNET

We are expecting approx. 4500 requests per second on average (peaking
at 6000 RPS) and 150Mbit/s, so I suppose we need a couple of Squids.
Our preferred solution would be big servers with a lot of memory and
Squid 3.0 on 64-bit RHEL5.
Does anybody know any similar scenarios? Any suggestions? What are
your experiences?

The ICAP servers are commercial ones (at least in the beginning), but
I have the following problem: I want to use multiple ICAP servers in
each Squid configuration with loadbalancing; unfortunately that is not
supported and does not work in Squid 3.

best regards


[squid-users] Re: squid deployment

2010-03-29 Thread guest01
Damn, I shouldn't have pressed the send button yet ... Anyway, I
found a similar scenario, at least for small environments, at
http://wiki.squid-cache.org/ConfigExamples/Webwasher

But I don't see any way to support multiple icap servers. Is there a
solution for this problem? (I don't want to use the loadbalancer for
balancing my icap servers.) I don't want to build the following setup
if I can avoid it somehow:

 Clients
     |
     |
     v
 Loadbalancer
     |
     |
     v
 Squid-Proxies --- Loadbalancer --- ICAP-Server
     |                         \--- ICAP-Server
     |
     v
 INTERNET

I would appreciate any input,

thanks, best regards





[squid-users] squid performance - requests per second

2010-03-26 Thread guest01
Hi guys,

I am sorry if this is a question which has been asked many times, but
I did not find any recent discussion concerning the performance of
current versions of squid.

We are trying to replace a commercial product with squid servers on
64-bit linux servers (most likely Red Hat 5). At the moment, we have a
peak of about 6000 requests per second, which is really a lot. How
many requests can one single squid server handle? I am mainly asking
about caching; we also have icap servers and different forms of
authentication. What are your experiences? How many requests can you
handle, and with which hardware? A rough guess would be OK.

thanks, best regards


[squid-users] access denied, WHY????????

2005-03-02 Thread guest01
Hi!

I have a very strange problem with my squid. I am using it on a live
CD as an http proxy.
It has always worked, but now it does not, and I don't know why!

I am using Debian Woody stable with squid/2.5.STABLE4 and a
previously working config file; all iptables rules are disabled.
Authentication is disabled too.
I tried increasing the debug_level, but there are no concrete errors
in the logfile!
The request (GET ...) is ALLOWED.
There is nothing like a permission denied; I just get an error page,
and I have absolutely no idea why!

Testing procedure in the bash (local):
 - export http_proxy=http://localhost:3128
 - wget www.google.at
response:
ERROR 403: Forbidden

Testing procedure via web:
 - correct IP configured in my firefox
 - trying to surf a little, e.g. www.google.at
response:
Access Denied
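
For what it's worth, a quick way to reproduce and watch the denial
(assuming Debian's default log location):

squidclient -h localhost -p 3128 http://www.google.at/
tail -f /var/log/squid/access.log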

Can anyone please help me? Where could the problem be?
thanks

PS: logfiles get very huge with debug_level ALL,10! ;-)