Re: [squid-users] stopping sslbump to domains with invalid or unsigned certs

2011-12-20 Thread Sean Boran
According to the docs, sslproxy_flags has only one other value,
NO_DEFAULT_CA.
That doesn't seem of much use... it does recognise and refuse the
expired cert though:

2011/12/21 07:30:01.269| Self signed certificate:
/C=--/ST=SomeState/L=SomeCity/O=SomeOrganization/OU=SomeOrganizationalUnit/CN=localhost.localdomain/emailAddress=root@localhost.localdomain
2011/12/21 07:30:01.269| confirming SSL error 18
2011/12/21 07:30:01.269| fwdNegotiateSSL: Error negotiating SSL
connection on FD 29: error:14090086:SSL
routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
(1/-1/0)

But it also refuses a well-known bank:
Self signed certificate in certificate chain:
/1.3.6.1.4.1.311.60.2.1.3=CH/2.5.4.15=Private
Organization/serialNumber=CH-020.3.906.075-9/C=CH/postalCode=8001/ST=Zuerich/L=Zuerich/streetAddress=Paradeplatz
8/O=Credit Suisse Group AG/CN=www.credit-suisse.com
2011/12/21 07:32:47.859| confirming SSL error 19

And amazon:
Unable to get local issuer certificate:
/C=US/ST=Washington/L=Seattle/O=Amazon.com Inc./CN=www.amazon.com

I had expected DONT_VERIFY_PEER to mean "don't verify the peer unless
it matches the exception ACL".
Hmm.
Digging in the sources, in ssl/support.cc, there are more than two
constants defined (I had just looked at the docs so far..).  There is
no actual VERIFY_PEER though.

Looking at the sources, it seems SSL_FLAG_DONT_VERIFY_PEER must not be
set if this is to be called:
SSL_CTX_set_verify(sslContext, SSL_VERIFY_PEER ...);
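For comparison, the two modes can be sketched with Python's ssl wrapper around OpenSSL (a stand-in illustration, not Squid code):

```python
import ssl

# CERT_REQUIRED corresponds to SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, ...):
# the handshake aborts on an expired, self-signed, or otherwise
# untrusted server certificate.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED   # peer verification enabled

# The rough equivalent of Squid's DONT_VERIFY_PEER flag: certificate
# errors are silently ignored, so no error is ever surfaced.
ctx.check_hostname = False                    # must be off before CERT_NONE
ctx.verify_mode = ssl.CERT_NONE
print(ctx.verify_mode)
```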

So I compiled the latest HEAD and tried both VERIFY_CRL and
VERIFY_CRL_ALL, which would presumably do some additional CRL
checking, but the example sites above fail on that too:

Unable to get certificate CRL:
/C=US/ST=Washington/L=Seattle/O=Amazon.com Inc./CN=www.amazon.com

This looks like it requires the existence of a CRL for each destination?
I tried setting capath to an empty directory, but it probably requires
some standard CRLs.

Squid pulls its standard CA list from OpenSSL (/etc/ssl/certs?), but
shouldn't it just accept empty CRL lists if there are none?  Setting
capath=/etc/ssl/certs and crlfile=/emptyfile does not help.

I must still be missing something...


As regards The Measurement Factory, their website looks interesting,
but I don't see any relevant references. Is there a discussion or
ticket on what they are planning, and how do I contact them? Should I
ask on squid-dev?

Thanks,

Sean


On 21 December 2011 01:02, Amos Jeffries  wrote:
> On 21/12/2011 3:34 a.m., Sean Boran wrote:
>>
>> Hi,
>>
>> sslbump allows me to intercept SSL connections and run an AV check on
>> them.
>> It generates a cert for the target domain (via sslcrtd), so that the
>> user's browser sees a server cert signed by the proxy.
>>
>> If the target domain has a certificate that is expired, or is not
>> signed by a recognised CA, it's important that the lack of trust is
>> communicated to the end user.
>>
>> Example: on connecting directly (not via a proxy) to
>> https://wiki.squid-cache.org the certificate presented expired 2
>> years ago and is not signed by a known CA.
>> Next, on connecting via an sslbump proxy (v3.2.0.14), the proxy creates
>> a valid cert for wiki.squid-cache.org and in the user's browser it
>> looks like wiki.squid-cache.org has a valid cert signed by the proxy.
>>
>> So my question is:
>> What ssl_bump settings would allow the proxy to handle such
>> destinations with expired or untrusted certs by, for example:
>> a) Not bumping the connection but piping it through to the user
>> unchanged, so the user's browser notices the invalid certs?
>> b) Refusing the connection with a message to the user, if the
>> destination is not on an allowed ACL of exceptions?
>
>
> Pretty much. The Measurement Factory has a project underway to fix this
> limitation.
> Please contact Alex about sponsoring their work to make it happen faster, or
> get access to the experimental code.
>
>
>>
>> Looking at squid.conf, there is sslproxy_flags, sslproxy_cert_error
>> #  TAG: sslproxy_flags
>> #           DONT_VERIFY_PEER    Accept certificates that fail verification.
>> #           NO_DEFAULT_CA       Don't use the default CA list built in to OpenSSL.
>> #  TAG: sslproxy_cert_error
>> #       Use this ACL to bypass server certificate validation errors.
>>
>> So, the following config would then implement scenario b) above?
>>
>> # Verify destinations: yes, but allow exceptions
>> sslproxy_flags DONT_VERIFY_PEER
>> #sslproxy_flags none
>> # ignore cert errors for certain sites
>> acl TrustedName url_regex ^https://badcerts.example.com/
>> sslproxy_cert_error allow TrustedName
>> sslproxy_cert_error deny all
>>
>> ==>  But then, why does it not throw an error when connecting to
>> https://wiki.squid-cache.org ?
>
>
> You configured not to verify, therefore the error is not noticed and cannot
> trigger any action.
>
> Why no output is displayed you will have to ask the OpenSSL people. There
> are a few places in their API like this where errors are silently dropped
> and seemingly no way is provided to check for them externally (ie from Squid).

Re: [squid-users] Fwd: Need help regarding an issue!

2011-12-20 Thread Amos Jeffries

On 21/12/2011 7:17 p.m., Girish Dudhwal wrote:

Hi Squid Team,

Greetings for today. We are currently on track to update our
Squid version to 3.1. Could you give us a solution for our problem
if we update our Squid version? Tackling it with the QoS features of
our OS and informing users is not a targeted solution. Kindly reply.


QoS remains the simpler solution.

The ssl bump feature is documented here:
   http://wiki.squid-cache.org/Features/SslBump


Amos


Re: [squid-users] Fwd: Need help regarding an issue!

2011-12-20 Thread Amos Jeffries

On 21/12/2011 6:30 p.m., Girish Dudhwal wrote:

Hi

Greetings Squid team. I am stuck on a Squid situation. While using
Gmail over SSL, its browsing data is too much for our server, as we
have limited bandwidth. We are using Squid 2.7 STABLE. Can you
suggest any way to force Gmail to load the basic HTML view so that
browsing data is reduced?  Kindly reply ASAP, as this has become a
serious matter for our network bandwidth.


Switching traffic from secured to non-secured is not the right solution 
for this. And 2.7 is not capable of the ssl-bump feature for caching 
encrypted content.


With your current Squid version you are best off using the operating 
system's QoS functionality to limit port-443 traffic speed, and informing 
your users that they should use http://gmail.com for faster access. Squid 
can set tcp_outgoing_tos values on Gmail requests for the QoS policy to 
work with.
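A minimal sketch of that hook in squid.conf (the domain list and the TOS value 0x20 are illustrative assumptions; the actual shaping happens in the OS QoS policy):

```
# Mark outgoing Gmail traffic so the OS QoS policy can shape it.
acl gmail dstdomain .gmail.com .mail.google.com
tcp_outgoing_tos 0x20 gmail
```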


Amos



[squid-users] Fwd: Need help regarding an issue!

2011-12-20 Thread Girish Dudhwal
Hi

Greetings Squid team. I am stuck on a Squid situation. While using
Gmail over SSL, its browsing data is too much for our server, as we
have limited bandwidth. We are using Squid 2.7 STABLE. Can you
suggest any way to force Gmail to load the basic HTML view so that
browsing data is reduced?  Kindly reply ASAP, as this has become a
serious matter for our network bandwidth.

Thanks & Regards,

--
Girish Dudhwal | System Administrator
+918447224336 , +911120222093


Re: [squid-users] Squid 3.2.0.14 didn't work in interception mode

2011-12-20 Thread Amos Jeffries

On 21/12/2011 2:02 p.m., Nguyen Hai Nam wrote:


Squid Cache: Version 3.2.0.14
configure options:  '--prefix=/usr/squid' '--enable-ipf-transparent' 
--enable-ltdl-convenience


I forgot to attach the debug errors; by the way, it failed at the ioctl() 
lookup:


2011/12/20 04:06:03 kid1| BUG: Orphan Comm::Connection: 
local=10.2.176.31:3129 remote=10.2.178.178:13216 FD 14 flags=33

2011/12/20 04:06:03 kid1| NOTE: 7 Orphans since last started.
2011/12/20 04:06:03 kid1| Intercept.cc(253) IpfInterception: NAT 
lookup failed: ioctl(SIOCGNATL) 


I have opened bug http://bugs.squid-cache.org/show_bug.cgi?id=3455 to 
track this and added a patch there to improve the error output. Please 
apply it, re-test and post the new error message to the bug report. Thanks.


Amos



Re: [squid-users] Squid 3.2.0.14 didn't work in interception mode

2011-12-20 Thread Nguyen Hai Nam

On 12/20/2011 7:06 PM, Amos Jeffries wrote:

On 21/12/2011 12:33 a.m., Nguyen Hai Nam wrote:

Hi there,

I'm building a new Squid box, 3.2.0.14 on OpenIndiana 151a. The 
configuration is as usual, but when Squid started up, intercept mode 
didn't work.


IP NAT table already works:

# ipnat -l
List of active MAP/Redirect filters:
rdr rtls0 0.0.0.0/0 port 80 -> 10.2.176.31 port 3129 tcp

List of active sessions:
RDR 10.2.176.31 3129 <- -> 66.220.149.48   80[10.10.225.253 
57093]
RDR 10.2.176.31 3129 <- -> 66.220.149.48   80[10.10.225.253 
57092]




What NAT system is this?
 a PF or IPFilter?
 if PF, which OpenBSD version is it based on?

How exactly is it not working?
 ioctl() lookup failures?
 or 409 (Conflict) HTTP responses?
 or something else?

Amos


Squid starts up normally:

# tail -n 25 /usr/squid/var/logs/cache.log
2011/12/20 02:24:07 kid1| Using Least Load store dir selection
2011/12/20 02:24:07 kid1| Set Current Directory to 
/usr/squid/var/cache/squid

2011/12/20 02:24:07 kid1| Loaded Icons.
2011/12/20 02:24:07 kid1| HTCP Disabled.
2011/12/20 02:24:07 kid1| Squid plugin modules loaded: 0
2011/12/20 02:24:07 kid1| Ready to serve requests.
2011/12/20 02:24:07 kid1| Accepting HTTP Socket connections at 
local=[::]:3128 remote=[::] FD 19 flags=9
2011/12/20 02:24:07 kid1| Accepting NAT intercepted HTTP Socket 
connections at local=0.0.0.0:3129 remote=[::] FD 20 flags=41
2011/12/20 02:24:07 kid1| Done reading /usr/squid/var/cache/squid 
swaplog (0 entries)

2011/12/20 02:24:07 kid1| Finished rebuilding storage from disk.
2011/12/20 02:24:07 kid1| 0 Entries scanned
2011/12/20 02:24:07 kid1| 0 Invalid entries.
2011/12/20 02:24:07 kid1| 0 With invalid flags.
2011/12/20 02:24:07 kid1| 0 Objects loaded.
2011/12/20 02:24:07 kid1| 0 Objects expired.
2011/12/20 02:24:07 kid1| 0 Objects cancelled.
2011/12/20 02:24:07 kid1| 0 Duplicate URLs purged.
2011/12/20 02:24:07 kid1| 0 Swapfile clashes avoided.
2011/12/20 02:24:07 kid1|   Took 0.05 seconds (  0.00 objects/sec).
2011/12/20 02:24:07 kid1| Beginning Validation Procedure
2011/12/20 02:24:07 kid1|   Completed Validation Procedure
2011/12/20 02:24:07 kid1|   Validated 0 Entries
2011/12/20 02:24:07 kid1|   store_swap_size = 0.00 KB
2011/12/20 02:24:08 kid1| storeLateRelease: released 0 objects
2011/12/20 02:24:27| Squid is already running!  Process ID 2413

Squid still works fine with a configured proxy setting in the browser.

Hope to receive your kind assistance.

Best regards,
~Neddie



Hi,

It's IPfilter:

Squid Cache: Version 3.2.0.14
configure options:  '--prefix=/usr/squid' '--enable-ipf-transparent' 
--enable-ltdl-convenience


I forgot to attach the debug errors; by the way, it failed at the ioctl() 
lookup:


2011/12/20 04:06:03 kid1| BUG: Orphan Comm::Connection: 
local=10.2.176.31:3129 remote=10.2.178.178:13216 FD 14 flags=33

2011/12/20 04:06:03 kid1| NOTE: 7 Orphans since last started.
2011/12/20 04:06:03 kid1| Intercept.cc(253) IpfInterception: NAT lookup 
failed: ioctl(SIOCGNATL)


Thanks,



Re: RES: [squid-users] Squid3 don't run any external acl

2011-12-20 Thread Amos Jeffries

On 21/12/2011 10:28 a.m., Igor NM wrote:

Hi Andy,

The permissions and path are OK.

I found the problem... On my server I had disabled IPv6, but Squid tried to 
use it to connect to the external ACL helper... I put the 'ipv4' parameter 
on the acl line and the problem is gone!
I didn't find any solution saying 'turn ipv4 on', but I found this: 
http://wiki.squid-cache.org/Features/IPv6#How_do_I_make_squid_use_IPv6_to_its_helpers.3F

"With squid external ACL helpers there are two new options ipv4 and ipv6. Squid 
prefers to use unix pipes to helpers and these are ignored. But on some networks TCP 
sockets are required. To work with older setups, helpers are still connected over IPv4 by 
default. You can add ipv6 option to use IPv6."

But why Squid used IPv6, I don't know...


Because it was not part of the IPv6 feature, but a bug in the TCP stack 
handling.

Considering the lack of problems it caused (you are only the second 
report in the 6 months since the bug was introduced), I am thinking of 
switching that background channel over to default to IPv6 in the next 
release.

As an aside: you have identified a helper which needs to be 
IPv6-enabled, if only to make it listen on all of the localhost IPs. It 
would be a good idea to get that fixed soonish.


Amos



Re: [squid-users] After reloading squid3, takes about 2 minutes to serve pages?

2011-12-20 Thread Amos Jeffries

On 21/12/2011 4:48 a.m., Terry Dobbs wrote:

Thanks.

After looking into it more, it appears squidGuard is taking a
while to initialize the blacklists. The only reason I have to reload
squid3 is for squidGuard to recognize the new blacklist entries.

I am using Berkeley DB for the first time; perhaps that's why it takes
longer? Although, I don't really see what Berkeley DB is doing for me, as
I am still using flat files for my domains/urls. Guess I should take
this to the squidGuard list!


If you are using 3.2.0.14 or later, could you try loading the blacklists 
straight into Squid ACLs?

Marcus Kool, the author of ufdbGuard, has contributed back to Squid 
several optimizations which apparently cut down the regex overheads by a 
few tens of CPU percentage points. But we have not exactly tested with 
the large blacklists people are using squidGuard to optimize, so some 
feedback on how that goes would be very useful.


Amos



Re: [squid-users] stopping sslbump to domains with invalid or unsigned certs

2011-12-20 Thread Amos Jeffries

On 21/12/2011 3:34 a.m., Sean Boran wrote:

Hi,

sslbump allows me to intercept SSL connections and run an AV check on them.
It generates a cert for the target domain (via sslcrtd), so that the
user's browser sees a server cert signed by the proxy.

If the target domain has a certificate that is expired, or is not
signed by a recognised CA, it's important that the lack of trust is
communicated to the end user.

Example: on connecting directly (not via a proxy) to
https://wiki.squid-cache.org the certificate presented expired 2
years ago and is not signed by a known CA.
Next, on connecting via an sslbump proxy (v3.2.0.14), the proxy creates
a valid cert for wiki.squid-cache.org and in the user's browser it
looks like wiki.squid-cache.org has a valid cert signed by the proxy.

So my question is:
What ssl_bump settings would allow the proxy to handle such
destinations with expired or untrusted certs by, for example:
a) Not bumping the connection but piping it through to the user
unchanged, so the user's browser notices the invalid certs?
b) Refusing the connection with a message to the user, if the
destination is not on an allowed ACL of exceptions?


Pretty much. The Measurement Factory has a project underway to fix this 
limitation.
Please contact Alex about sponsoring their work to make it happen 
faster, or get access to the experimental code.




Looking at squid.conf, there is sslproxy_flags, sslproxy_cert_error
#  TAG: sslproxy_flags
#   DONT_VERIFY_PEER    Accept certificates that fail verification.
#   NO_DEFAULT_CA       Don't use the default CA list built in to OpenSSL.
#  TAG: sslproxy_cert_error
#   Use this ACL to bypass server certificate validation errors.

So, the following config would then implement scenario b) above?

# Verify destinations: yes, but allow exceptions
sslproxy_flags DONT_VERIFY_PEER
#sslproxy_flags none
# ignore cert errors for certain sites
acl TrustedName url_regex ^https://badcerts.example.com/
sslproxy_cert_error allow TrustedName
sslproxy_cert_error deny all

==>  But then, why does it not throw an error when connecting to
https://wiki.squid-cache.org ?


You configured not to verify, therefore the error is not noticed and 
cannot trigger any action.
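In other words, for scenario (b) the verification has to stay enabled. A sketch of what that config would look like, reusing the directives and the poster's illustrative ACL name and URL:

```
# Leave sslproxy_flags unset (or 'none'); DONT_VERIFY_PEER would disable
# all checking, so sslproxy_cert_error would never see an error to act on.
# sslproxy_flags none

# Bypass certificate errors only for listed exceptions:
acl TrustedName url_regex ^https://badcerts.example.com/
sslproxy_cert_error allow TrustedName
sslproxy_cert_error deny all
```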


Why no output is displayed you will have to ask the OpenSSL people. 
There are a few places in their API like this where errors are silently 
dropped and seemingly no way is provided to check for them externally 
(ie from Squid).


Amos


Re: [squid-users] Squid with Kerberos auth

2011-12-20 Thread Amos Jeffries

On 21/12/2011 3:03 a.m., Wladner Klimach wrote:

But the problem is that I'm not running IPv6 in my network. That's why

"Welcome to your IPv6 enabled transit network. Whether you like it, or not."
- Rob Issac, 2008. 
(http://www.ausnog.net/files/ausnog-03/presentations/ausnog03-ward-IPv6_enabled_network.pdf)


Try with -n parameter to lsof. You might get a surprise.

The TCP "hybrid" stack can use IPv6 sockets for IPv4 traffic; this may 
also be what you are seeing. Squid 3.1+ will detect stack types and use 
this optimization for receiving ports if it can.
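The hybrid-stack behaviour can be demonstrated outside Squid with a short standalone sketch (assumes the OS allows IPV6_V6ONLY=0, as Linux does by default):

```python
import socket

# One IPv6 listening socket that also accepts IPv4 clients: the kernel
# presents the IPv4 peer as an IPv4-mapped IPv6 address (::ffff:a.b.c.d).
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 0))                 # wildcard address, ephemeral port
srv.listen(1)
port = srv.getsockname()[1]

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # plain IPv4 client
cli.connect(("127.0.0.1", port))
conn, addr = srv.accept()
print(addr[0])                      # an IPv4-mapped address such as ::ffff:127.0.0.1
cli.close(); conn.close(); srv.close()
```

This is why tools like lsof can show an IPv6 socket even though all the traffic on it is IPv4.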



I've asked if this could be a problem. And the CPU usage hitting 99%
with only one user? Does it look like a hardware limitation? When I'm
not using authentication, the CPU usage doesn't hit 50%.


Unlikely with one user.

All Squid does for auth is take the tokens out of HTTP headers and relay 
them to the auth backend, then add the backend's reply token to the HTTP 
response for the client. Very minimal CPU work in Squid, an unknown 
amount in the backend. Maybe (at most) 32KB of token copied each way, plus 
the HTTP bits.


Amos


RES: [squid-users] Squid3 don't run any external acl

2011-12-20 Thread Igor NM
Hi Andy,

The permissions and path are OK.

I found the problem... On my server I had disabled IPv6, but Squid tried to 
use it to connect to the external ACL helper... I put the 'ipv4' parameter 
on the acl line and the problem is gone!
I didn't find any solution saying 'turn ipv4 on', but I found this: 
http://wiki.squid-cache.org/Features/IPv6#How_do_I_make_squid_use_IPv6_to_its_helpers.3F

"With squid external ACL helpers there are two new options ipv4 and ipv6. Squid 
prefers to use unix pipes to helpers and these are ignored. But on some 
networks TCP sockets are required. To work with older setups, helpers are still 
connected over IPv4 by default. You can add ipv6 option to use IPv6."

But why Squid used IPv6, I don't know...

The working line:
external_acl_type ADGroup ipv4 ttl=60 children=5 %LOGIN 
/usr/lib/squid3/wbinfo_group.pl

It's OK now! :)


-----Original Message-----
From: Andrew Beverley [mailto:a...@andybev.com] 
Sent: Tuesday, 20 December 2011 18:01
To: Igor NM
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid3 don't run any external acl

On Tue, 2011-12-20 at 15:49 -0200, Igor NM wrote:
> Hi all!
> 
> My squid cannot run any "external acl" script or software...
> I want to restrict web access by Windows AD group.
> 
> I tested with other helpers and scripts in this location and other
> locations (e.g. /tmp, /, /etc/squid3) and I got the same error in cache.log.
> 
> I use Ubuntu 64 11.10 and Squid 3.1.14
> 
> PS: The Linux box is integrated with Win AD 2008 R2
> 
> 2011/12/20 15:22:49| Starting Squid Cache version 3.1.14 for
> x86_64-pc-linux-gnu...
> 2011/12/20 15:22:49| Process ID 2503
> 2011/12/20 15:22:49| With 65535 file descriptors available
> 2011/12/20 15:22:49| Initializing IP Cache...
> 2011/12/20 15:22:49| DNS Socket created at [::], FD 7
> 2011/12/20 15:22:49| DNS Socket created at 0.0.0.0, FD 8
> 2011/12/20 15:22:49| Adding domain 4Talk.com.br from /etc/resolv.conf
> 2011/12/20 15:22:49| Adding domain 4Talk.com.br from /etc/resolv.conf
> 2011/12/20 15:22:49| Adding nameserver 192.168.1.6 from /etc/resolv.conf
> 2011/12/20 15:22:49| helperOpenServers: Starting 5/5 'wbinfo_group.pl'
> processes
> 2011/12/20 15:22:49| commBind: Cannot bind socket FD 9 to [::1]: (99) Cannot
> assign requested address
> 2011/12/20 15:22:49| commBind: Cannot bind socket FD 10 to [::1]: (99)
> Cannot assign requested address
> 2011/12/20 15:22:49| ipcCreate: Failed to create child FD.
> 2011/12/20 15:22:49| WARNING: Cannot run '/usr/lib/squid3/wbinfo_group.pl'
> process.

What are the permissions on /usr/lib/squid3/wbinfo_group.pl? Is it
executable by the squid user? Does it even exist?

Andy




Re: [squid-users] After reloading squid3, takes about 2 minutes to serve pages?

2011-12-20 Thread Sean Boran
How do you "reload" — with a restart, or with "-k reconfigure" (much faster)?

Sean

On 20 December 2011 16:48, Terry Dobbs  wrote:
> Thanks.
>
> After looking into it more, it appears squidGuard seems to be taking a
> while to initialize the blacklists. The only reason I have to reload
> squid3 is for squidGuard to recognize the new blacklist entries.
>
> I am using Berkeley DB for the first time; perhaps that's why it takes
> longer? Although, I don't really see what Berkeley DB is doing for me as
> I am still using flat files for my domains/urls? Guess I should take
> this to the squidGuard list!
>
> -Original Message-
> From: Eliezer Croitoru [mailto:elie...@ec.hadorhabaac.com]
> Sent: Monday, December 19, 2011 1:04 PM
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] After reloading squid3, takes about 2 minutes
> to serve pages?
>
> On 19/12/2011 19:12, Terry Dobbs wrote:
> It's an old issue; from squid 3.1 to 3.2 there is nothing yet, as far as I
> know, that solves this issue.
>
> Regards
> Eliezer
>> Hi All.
>>
>> I just installed squid3 after running squid2.5 for a number of years. I
>> find after reloading squid3 and trying to access the internet on a proxy
>> client it takes about 2 minutes until pages load. For example, if I
>> reload squid3 and try to access a page, such as www.tsn.ca, it will try
>> to load for a minute or 2 until it finally displays. I understand I
>> shouldn't need to reload squid3 too much, but is there something I am
>> missing to make this happen? I am not using it for caching, just for
>> monitoring/website control. Here is the log from when I was trying to
>> access the mentioned site:
>>
>> 1324310991.377      2 192.168.70.97 TCP_DENIED/407 2868 GET
>> http://www.tsn.ca/ - NONE/- text/html [Accept: image/gif, image/jpeg,
>> image/pjpeg, image/pjpeg, application/x-shockwave-flash,
>> application/xaml+xml, application/vnd.ms-xpsdocument,
>> application/x-ms-xbap, application/x-ms-application,
>> application/vnd.ms-excel, application/vnd.ms-powerpoint,
>> application/msword, */*\r\nAccept-Language: en-us\r\nUser-Agent:
>> Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET
> CLR
>> 2.0.50727; InfoPath.1)\r\nAccept-Encoding: gzip,
>> deflate\r\nProxy-Connection: Keep-Alive\r\nHost: www.tsn.ca\r\nCookie:
>> TSN=NameKey={ffc1186b-54bb-47ef-b072-097f5fafc5f2};
>> __utma=54771374.1383136889.1323806167.1324305925.1324309890.7;
>>
> __utmz=54771374.1323806167.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(n
>> one); __utmb=54771374.1.10.1324309890\r\n] [HTTP/1.0 407 Proxy
>> Authentication Required\r\nServer: squid/3.0.STABLE19\r\nMime-Version:
>> 1.0\r\nDate: Mon, 19 Dec 2011 16:09:51 GMT\r\nContent-Type:
>> text/html\r\nContent-Length: 2485\r\nX-Squid-Error:
>> ERR_CACHE_ACCESS_DENIED 0\r\nProxy-Authenticate: NTLM\r\n\r]
>> 1324310991.447      5 192.168.70.97 TCP_DENIED/407 3244 GET
>> http://www.tsn.ca/ - NONE/- text/html [Accept: image/gif, image/jpeg,
>> image/pjpeg, image/pjpeg, application/x-shockwave-flash,
>> application/xaml+xml, application/vnd.ms-xpsdocument,
>> application/x-ms-xbap, application/x-ms-application,
>> application/vnd.ms-excel, application/vnd.ms-powerpoint,
>> application/msword, */*\r\nAccept-Language: en-us\r\nUser-Agent:
>> Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET
> CLR
>> 2.0.50727; InfoPath.1)\r\nAccept-Encoding: gzip,
>> deflate\r\nProxy-Connection: Keep-Alive\r\nCookie:
>> TSN=NameKey={ffc1186b-54bb-47ef-b072-097f5fafc5f2};
>> __utma=54771374.1383136889.1323806167.1324305925.1324309890.7;
>>
> __utmz=54771374.1323806167.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(n
>> one); __utmb=54771374.1.10.1324309890\r\nProxy-Authorization: NTLM
>> TlRMTVNTUAABB4IIogAFASgKDw==\r\nHost:
>> www.tsn.ca\r\n] [HTTP/1.0 407 Proxy Authentication Required\r\nServer:
>> squid/3.0.STABLE19\r\nMime-Version: 1.0\r\nDate: Mon, 19 Dec 2011
>> 16:09:51 GMT\r\nContent-Type: text/html\r\nContent-Length:
>> 2583\r\nX-Squid-Error: ERR_CACHE_ACCESS_DENIED
> 0\r\nProxy-Authenticate:
>> NTLM
>>
> TlRMTVNTUAACEgASADAFgomid3FHZLqI7WsAAIoAigBCQwBPAE4A
>>
> VgBFAEMAVABPAFIAAgASAEMATwBOAFYARQBDAFQATwBSAAEACgBTAFEAVQBJAEQABAAmAGEA
>>
> cwBzAG8AYwBpAGEAdABlAGQAYgByAGEAbgBkAHMALgBjAGEAAwA0AHUAYgB1AG4AdAB1AC4A
>> YQBzAHMAbwBjAGkAYQB0AGUAZABiAHIAYQBuAGQAcwAuAGMAYQAA\r\n\r]
>


Re: [squid-users] Squid3 don't run any external acl

2011-12-20 Thread Andrew Beverley
On Tue, 2011-12-20 at 15:49 -0200, Igor NM wrote:
> Hi all!
> 
> My squid cannot run any "external acl" script or software...
> I want to restrict web access by Windows AD group.
> 
> I tested with other helpers and scripts in this location and other
> locations (e.g. /tmp, /, /etc/squid3) and I got the same error in cache.log.
> 
> I use Ubuntu 64 11.10 and Squid 3.1.14
> 
> PS: The Linux box is integrated with Win AD 2008 R2
> 
> 2011/12/20 15:22:49| Starting Squid Cache version 3.1.14 for
> x86_64-pc-linux-gnu...
> 2011/12/20 15:22:49| Process ID 2503
> 2011/12/20 15:22:49| With 65535 file descriptors available
> 2011/12/20 15:22:49| Initializing IP Cache...
> 2011/12/20 15:22:49| DNS Socket created at [::], FD 7
> 2011/12/20 15:22:49| DNS Socket created at 0.0.0.0, FD 8
> 2011/12/20 15:22:49| Adding domain 4Talk.com.br from /etc/resolv.conf
> 2011/12/20 15:22:49| Adding domain 4Talk.com.br from /etc/resolv.conf
> 2011/12/20 15:22:49| Adding nameserver 192.168.1.6 from /etc/resolv.conf
> 2011/12/20 15:22:49| helperOpenServers: Starting 5/5 'wbinfo_group.pl'
> processes
> 2011/12/20 15:22:49| commBind: Cannot bind socket FD 9 to [::1]: (99) Cannot
> assign requested address
> 2011/12/20 15:22:49| commBind: Cannot bind socket FD 10 to [::1]: (99)
> Cannot assign requested address
> 2011/12/20 15:22:49| ipcCreate: Failed to create child FD.
> 2011/12/20 15:22:49| WARNING: Cannot run '/usr/lib/squid3/wbinfo_group.pl'
> process.

What are the permissions on /usr/lib/squid3/wbinfo_group.pl? Is it
executable by the squid user? Does it even exist?

Andy




Re: [squid-users] squid 3.2 helpers/external_acl/session compile problem

2011-12-20 Thread Andrew Beverley
On Tue, 2011-12-20 at 20:18 +0200, yusuf özbilgin wrote:
> Hi,
>  
> I am getting an error when compiling helpers/external_acl/session on FreeBSD 7.4.
> Error details are below.
>  
> What can be the problem?
>  
> Thanks,
> Yusuf
>  
> 
> squid version is squid-3.2.0.14-20111219-r11470
> berkeley db version is 4.8
>  
> 
> $make
>  
> /usr/local/bin/bash ../../../libtool --tag=CXX --mode=link c++ -Wall 
> -Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -pipe 
> -I/usr/local/include -g -I/usr/local/include -rpath=/usr/local/lib 
> -L/usr/local/lib -L/usr/local/lib -Wl,-R/usr/local/lib -o ext_session_acl 
> ext_session_acl.o -L../../../compat
> libtool: link: c++ -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror 
> -pipe -pipe -I/usr/local/include -g -I/usr/local/include 
> -rpath=/usr/local/lib -Wl,-R/usr/local/lib -o ext_session_acl 
> ext_session_acl.o -L/usr/local/lib 
> -L/home/user1/squid/squid-3.2.0.14-20111219-r11470/compat
> ext_session_acl.o(.text+0x3ff): In function `init_db':
> /home/user1/squid/squid-3.2.0.14-20111219-r11470/helpers/external_acl/session/ext_session_acl.cc:68:
>  undefined reference to `db_env_create'
> ext_session_acl.o(.text+0x4a6):/home/user1/squid/squid-3.2.0.14-20111219-r11470/helpers/external_acl/session/ext_session_acl.cc:74:
>  undefined reference to `db_create'
> ext_session_acl.o(.text+0x57c):/home/user1/squid/squid-3.2.0.14-20111219-r11470/helpers/external_acl/session/ext_session_acl.cc:87:
>  undefined reference to `db_create'
> *** Error code 1
> Stop in 
> /home/user1/squid/squid-3.2.0.14-20111219-r11470/helpers/external_acl/session.

Looks like it hasn't found your db.h. Try a "grep HAVE_DB_H
config.log" in the source tree. You should see something like "HAVE_DB_H
1". If not, then configure has not found db.h, hence the link errors
above.

Andy

  




[squid-users] squid 3.2 helpers/external_acl/session compile problem

2011-12-20 Thread yusuf özbilgin

Hi,
 
I am getting an error when compiling helpers/external_acl/session on FreeBSD 7.4.
Error details are below.
 
What can be the problem?
 
Thanks,
Yusuf
 

squid version is squid-3.2.0.14-20111219-r11470
berkeley db version is 4.8
 

$make
 
/usr/local/bin/bash ../../../libtool --tag=CXX --mode=link c++ -Wall 
-Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -pipe 
-I/usr/local/include -g -I/usr/local/include -rpath=/usr/local/lib 
-L/usr/local/lib -L/usr/local/lib -Wl,-R/usr/local/lib -o ext_session_acl 
ext_session_acl.o -L../../../compat
libtool: link: c++ -Wall -Wpointer-arith -Wwrite-strings -Wcomments -Werror 
-pipe -pipe -I/usr/local/include -g -I/usr/local/include -rpath=/usr/local/lib 
-Wl,-R/usr/local/lib -o ext_session_acl ext_session_acl.o -L/usr/local/lib 
-L/home/user1/squid/squid-3.2.0.14-20111219-r11470/compat
ext_session_acl.o(.text+0x3ff): In function `init_db':
/home/user1/squid/squid-3.2.0.14-20111219-r11470/helpers/external_acl/session/ext_session_acl.cc:68:
 undefined reference to `db_env_create'
ext_session_acl.o(.text+0x4a6):/home/user1/squid/squid-3.2.0.14-20111219-r11470/helpers/external_acl/session/ext_session_acl.cc:74:
 undefined reference to `db_create'
ext_session_acl.o(.text+0x57c):/home/user1/squid/squid-3.2.0.14-20111219-r11470/helpers/external_acl/session/ext_session_acl.cc:87:
 undefined reference to `db_create'
*** Error code 1
Stop in 
/home/user1/squid/squid-3.2.0.14-20111219-r11470/helpers/external_acl/session.


  


[squid-users] Squid3 don't run any external acl

2011-12-20 Thread Igor NM
Hi all!

My squid cannot run any "external acl" script or software...
I want to restrict web access by Windows AD group.

I tested with other helpers and scripts in this location and other
locations (e.g. /tmp, /, /etc/squid3) and I got the same error in cache.log.

I use Ubuntu 64 11.10 and Squid 3.1.14

PS: The Linux box is integrated with Win AD 2008 R2

2011/12/20 15:22:49| Starting Squid Cache version 3.1.14 for
x86_64-pc-linux-gnu...
2011/12/20 15:22:49| Process ID 2503
2011/12/20 15:22:49| With 65535 file descriptors available
2011/12/20 15:22:49| Initializing IP Cache...
2011/12/20 15:22:49| DNS Socket created at [::], FD 7
2011/12/20 15:22:49| DNS Socket created at 0.0.0.0, FD 8
2011/12/20 15:22:49| Adding domain 4Talk.com.br from /etc/resolv.conf
2011/12/20 15:22:49| Adding domain 4Talk.com.br from /etc/resolv.conf
2011/12/20 15:22:49| Adding nameserver 192.168.1.6 from /etc/resolv.conf
2011/12/20 15:22:49| helperOpenServers: Starting 5/5 'wbinfo_group.pl'
processes
2011/12/20 15:22:49| commBind: Cannot bind socket FD 9 to [::1]: (99) Cannot
assign requested address
2011/12/20 15:22:49| commBind: Cannot bind socket FD 10 to [::1]: (99)
Cannot assign requested address
2011/12/20 15:22:49| ipcCreate: Failed to create child FD.
2011/12/20 15:22:49| WARNING: Cannot run '/usr/lib/squid3/wbinfo_group.pl'
process.
2011/12/20 15:22:49| commBind: Cannot bind socket FD 11 to [::1]: (99)
Cannot assign requested address
2011/12/20 15:22:49| commBind: Cannot bind socket FD 12 to [::1]: (99)
Cannot assign requested address
2011/12/20 15:22:49| ipcCreate: Failed to create child FD.
2011/12/20 15:22:49| WARNING: Cannot run '/usr/lib/squid3/wbinfo_group.pl'
process.
2011/12/20 15:22:49| commBind: Cannot bind socket FD 13 to [::1]: (99)
Cannot assign requested address
2011/12/20 15:22:49| commBind: Cannot bind socket FD 14 to [::1]: (99)
Cannot assign requested address
2011/12/20 15:22:49| ipcCreate: Failed to create child FD.
2011/12/20 15:22:49| WARNING: Cannot run '/usr/lib/squid3/wbinfo_group.pl'
process.
2011/12/20 15:22:49| commBind: Cannot bind socket FD 15 to [::1]: (99)
Cannot assign requested address
2011/12/20 15:22:49| commBind: Cannot bind socket FD 16 to [::1]: (99)
Cannot assign requested address
2011/12/20 15:22:49| ipcCreate: Failed to create child FD.
2011/12/20 15:22:49| WARNING: Cannot run '/usr/lib/squid3/wbinfo_group.pl'
process.
2011/12/20 15:22:49| commBind: Cannot bind socket FD 17 to [::1]: (99)
Cannot assign requested address
2011/12/20 15:22:49| commBind: Cannot bind socket FD 18 to [::1]: (99)
Cannot assign requested address
2011/12/20 15:22:49| ipcCreate: Failed to create child FD.
2011/12/20 15:22:49| WARNING: Cannot run '/usr/lib/squid3/wbinfo_group.pl'
process.
2011/12/20 15:22:49| Unlinkd pipe opened on FD 23
2011/12/20 15:22:49| Local cache digest enabled; rebuild/rewrite every
3600/3600 sec
2011/12/20 15:22:49| Store logging disabled
2011/12/20 15:22:49| Swap maxSize 0 + 262144 KB, estimated 20164 objects
2011/12/20 15:22:49| Target number of buckets: 1008
2011/12/20 15:22:49| Using 8192 Store buckets
2011/12/20 15:22:49| Max Mem  size: 262144 KB
2011/12/20 15:22:49| Max Swap size: 0 KB
2011/12/20 15:22:49| Using Least Load store dir selection
2011/12/20 15:22:49| Current Directory is /
2011/12/20 15:22:49| Loaded Icons.
2011/12/20 15:22:49| Accepting  HTTP connections at 192.168.1.1:3128, FD 24.
2011/12/20 15:22:49| HTCP Disabled.
2011/12/20 15:22:49| Squid plugin modules loaded: 0
2011/12/20 15:22:49| Adaptation support is off.
2011/12/20 15:22:49| Ready to serve requests.
2011/12/20 15:22:50| storeLateRelease: released 0 objects
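Every one of the bind failures above is errno 99 (EADDRNOTAVAIL) on [::1], which usually means the IPv6 loopback address is missing or disabled on the box; Squid's helper IPC needs to bind it. A small triage sketch (hypothetical, standalone — not a Squid tool) that scans a cache.log excerpt and confirms all the failed binds share one address:

```python
import re

def failed_bind_addrs(log_text):
    """Collect (address, errno) pairs from commBind failures in cache.log text."""
    pat = re.compile(r"commBind: Cannot bind socket FD \d+ to (\S+): \((\d+)\)")
    return [(m.group(1), int(m.group(2))) for m in pat.finditer(log_text)]

sample = (
    "2011/12/20 15:22:49| commBind: Cannot bind socket FD 9 to [::1]: (99) Cannot\n"
    "2011/12/20 15:22:49| commBind: Cannot bind socket FD 10 to [::1]: (99)\n"
)
addrs = failed_bind_addrs(sample)
# every failure targets the IPv6 loopback with EADDRNOTAVAIL
assert all(a == "[::1]" and e == 99 for a, e in addrs)
```

If the diagnosis holds, the usual remedies are making sure `::1 localhost` is present in /etc/hosts and that IPv6 is enabled on the loopback interface (or using a Squid build with IPv6 disabled).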



root@srv-router:/etc/squid3# squid3 -v
Squid Cache: Version 3.1.14
configure options:  '--build=x86_64-linux-gnu' '--prefix=/usr'
'--includedir=${prefix}/include' '--mandir=${prefix}/share/man'
'--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var'
'--libexecdir=${prefix}/lib/squid3' '--srcdir=.' '--disable-maintainer-mode'
'--disable-dependency-tracking' '--disable-silent-rules'
'--datadir=/usr/share/squid3' '--sysconfdir=/etc/squid3'
'--mandir=/usr/share/man' '--with-cppunit-basedir=/usr' '--enable-inline'
'--enable-async-io=8' '--enable-storeio=ufs,aufs,diskd'
'--enable-removal-policies=lru,heap' '--enable-delay-pools'
'--enable-cache-digests' '--enable-underscores' '--enable-icap-client'
'--enable-follow-x-forwarded-for'
'--enable-auth=basic,digest,ntlm,negotiate'
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SASL,SMB,YP,DB,POP3,getpwnam
,squid_radius_auth,multi-domain-NTLM' '--enable-ntlm-auth-helpers=smb_lm,'
'--enable-digest-auth-helpers=ldap,password'
'--enable-negotiate-auth-helpers=squid_kerb_auth'
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_
group' '--enable-arp-acl' '--enable-esi' '--enable-zph-qos'
'--disable-translation' '--with-logdir=/var/log/squid3'
'--with-pidfile=/var/run/squid3.pid' '--with-filedescriptors=65536'
'--with-large-files' '--with-default-user=proxy'

RE: [squid-users] After reloading squid3, takes about 2 minutes to serve pages?

2011-12-20 Thread Terry Dobbs
Thanks.

After looking into it more, it appears squidGuard is taking a
while to initialize the blacklists. The only reason I have to reload
squid3 is for squidGuard to recognize the new blacklist entries.

I am using Berkeley DB for the first time; perhaps that's why it takes
longer? Although I don't really see what Berkeley DB is doing for me, as
I am still using flat files for my domains/URLs. Guess I should take
this to the squidGuard list!

-Original Message-
From: Eliezer Croitoru [mailto:elie...@ec.hadorhabaac.com] 
Sent: Monday, December 19, 2011 1:04 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] After reloading squid3, takes about 2 minutes
to serve pages?

On 19/12/2011 19:12, Terry Dobbs wrote:
It's an old issue; from Squid 3.1 to 3.2 there is nothing yet, as far as
I know, that solves it.

Regards
Eliezer
> Hi All.
>
> I just installed squid3 after running squid2.5 for a number of years. I
> find after reloading squid3 and trying to access the internet on a
> proxy client it takes about 2 minutes until pages load. For example, if I
> reload squid3 and try to access a page, such as www.tsn.ca, it will try
> to load for a minute or two until it finally displays. I understand I
> shouldn't need to reload squid3 too much, but is there something I am
> missing to make this happen? I am not using it for caching, just for
> monitoring/website control. Here is the log from when I was trying to
> access the mentioned site:
>
> 1324310991.377  2 192.168.70.97 TCP_DENIED/407 2868 GET
> http://www.tsn.ca/ - NONE/- text/html [Accept: image/gif, image/jpeg,
> image/pjpeg, image/pjpeg, application/x-shockwave-flash,
> application/xaml+xml, application/vnd.ms-xpsdocument,
> application/x-ms-xbap, application/x-ms-application,
> application/vnd.ms-excel, application/vnd.ms-powerpoint,
> application/msword, */*\r\nAccept-Language: en-us\r\nUser-Agent:
> Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET
CLR
> 2.0.50727; InfoPath.1)\r\nAccept-Encoding: gzip,
> deflate\r\nProxy-Connection: Keep-Alive\r\nHost: www.tsn.ca\r\nCookie:
> TSN=NameKey={ffc1186b-54bb-47ef-b072-097f5fafc5f2};
> __utma=54771374.1383136889.1323806167.1324305925.1324309890.7;
>
__utmz=54771374.1323806167.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(n
> one); __utmb=54771374.1.10.1324309890\r\n] [HTTP/1.0 407 Proxy
> Authentication Required\r\nServer: squid/3.0.STABLE19\r\nMime-Version:
> 1.0\r\nDate: Mon, 19 Dec 2011 16:09:51 GMT\r\nContent-Type:
> text/html\r\nContent-Length: 2485\r\nX-Squid-Error:
> ERR_CACHE_ACCESS_DENIED 0\r\nProxy-Authenticate: NTLM\r\n\r]
> 1324310991.447  5 192.168.70.97 TCP_DENIED/407 3244 GET
> http://www.tsn.ca/ - NONE/- text/html [Accept: image/gif, image/jpeg,
> image/pjpeg, image/pjpeg, application/x-shockwave-flash,
> application/xaml+xml, application/vnd.ms-xpsdocument,
> application/x-ms-xbap, application/x-ms-application,
> application/vnd.ms-excel, application/vnd.ms-powerpoint,
> application/msword, */*\r\nAccept-Language: en-us\r\nUser-Agent:
> Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET
CLR
> 2.0.50727; InfoPath.1)\r\nAccept-Encoding: gzip,
> deflate\r\nProxy-Connection: Keep-Alive\r\nCookie:
> TSN=NameKey={ffc1186b-54bb-47ef-b072-097f5fafc5f2};
> __utma=54771374.1383136889.1323806167.1324305925.1324309890.7;
>
__utmz=54771374.1323806167.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(n
> one); __utmb=54771374.1.10.1324309890\r\nProxy-Authorization: NTLM
> TlRMTVNTUAABB4IIogAFASgKDw==\r\nHost:
> www.tsn.ca\r\n] [HTTP/1.0 407 Proxy Authentication Required\r\nServer:
> squid/3.0.STABLE19\r\nMime-Version: 1.0\r\nDate: Mon, 19 Dec 2011
> 16:09:51 GMT\r\nContent-Type: text/html\r\nContent-Length:
> 2583\r\nX-Squid-Error: ERR_CACHE_ACCESS_DENIED
0\r\nProxy-Authenticate:
> NTLM
>
TlRMTVNTUAACEgASADAFgomid3FHZLqI7WsAAIoAigBCQwBPAE4A
>
VgBFAEMAVABPAFIAAgASAEMATwBOAFYARQBDAFQATwBSAAEACgBTAFEAVQBJAEQABAAmAGEA
>
cwBzAG8AYwBpAGEAdABlAGQAYgByAGEAbgBkAHMALgBjAGEAAwA0AHUAYgB1AG4AdAB1AC4A
> YQBzAHMAbwBjAGkAYQB0AGUAZABiAHIAYQBuAGQAcwAuAGMAYQAA\r\n\r]



Re: [squid-users] integrating with wlc

2011-12-20 Thread Henrik Nordström
On Tue, 2011-12-20 at 15:37 +0100, Sean Boran wrote:
> It might be possible to send the WLC logs to a syslog server, where
> one could pipe them into a parser to extract the pairs needed, and
> from there create an ACL for squid?

As soon as you can somehow query, from the Squid server, "who is the user
at IP X?", then you can plug this into Squid via external_acl_type,
providing the username to Squid for use in logs and access controls.

Squid does not care how you do this. All Squid cares about in this context
is being able to ask "I have IP X; who is the user?"
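That lookup plugs in through the external ACL helper protocol: Squid writes one formatted query per line (here just the client IP) and the helper answers "OK user=..." or "ERR". A minimal sketch, with a hypothetical in-memory table standing in for the real WLC/RADIUS query:

```python
import sys

# Hypothetical IP -> username table; a real helper would query the
# WLC or the RADIUS accounting server here instead.
SESSIONS = {"10.0.0.5": "alice", "10.0.0.6": "bob"}

def answer(line):
    """Answer one helper query (the client IP) per the external_acl_type
    protocol: 'OK user=...' on a hit, 'ERR' otherwise."""
    ip = line.strip()
    user = SESSIONS.get(ip)
    return "OK user=%s" % user if user else "ERR"

def main():
    # Squid keeps the helper running and writes one query per line.
    for line in sys.stdin:
        sys.stdout.write(answer(line) + "\n")
        sys.stdout.flush()
```

On the Squid side this would be wired up with something like `external_acl_type wlc_user ttl=60 %SRC /path/to/helper` plus `acl wlc_users external wlc_user` (helper path hypothetical).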

Regards
Henrik



Re: [squid-users] integrating with wlc

2011-12-20 Thread Sean Boran
It might be possible to send the WLC logs to a syslog server, where
one could pipe them into a parser to extract the pairs needed, and
from there create an ACL for squid?

Sean

2011/12/20 Henrik Nordström :
> On Tue, 2011-12-20 at 14:09 +0200, E.S. Rosenberg wrote:
>
>> About the wlc I don't know for sure yet, I can probably create a
>> script/program that when presented with an IP can convert it to a
>> username on the Radius server...
>> But I don't know how that would then interact with squid...
>> Thanks,
>
> You can then plug that into Squid via the external ACL interface. See
> external_acl_type.
>
>  http://www.squid-cache.org/Doc/config/external_acl_type/
>
> Regards
> Henrik
>


[squid-users] stopping sslbump to domains with invalid or unsigned certs

2011-12-20 Thread Sean Boran
Hi,

sslbump allows me to intercept SSL connections and run an AV check on them.
It generates a cert for the target domain (via sslcrtd), so that the
user's browser sees a server cert signed by the proxy.

If the target domain has a certificate that is expired, or is not
signed by a recognised CA, it's important that the lack of trust is
communicated to the end user.

For example, on connecting directly (not via a proxy) to
https://wiki.squid-cache.org, the certificate presented expired two
years ago and is not signed by a known CA.
Next, on connecting via an sslbump proxy (v3.2.0.14), the proxy creates
a valid cert for wiki.squid-cache.org, and in the user's browser it
looks like wiki.squid-cache.org has a valid cert signed by the proxy.

So my question is:
What ssl_bump settings would allow the proxy to handle such
destinations with expired or untrusted certificates by, for example:
a) not bumping the connection but piping it through to the user
unchanged, so the user's browser notices the invalid cert?
b) refusing the connection with a message to the user, if the
destination is not on an allowed ACL of exceptions?

Looking at squid.conf, there is sslproxy_flags, sslproxy_cert_error
#  TAG: sslproxy_flags
#   DONT_VERIFY_PEERAccept certificates that fail verification.
#   NO_DEFAULT_CA   Don't use the default CA list built in
 to OpenSSL.
#  TAG: sslproxy_cert_error
#   Use this ACL to bypass server certificate validation errors.

So, the following config would then implement scenario b) above?

# Verify destinations: yes, but allow exceptions
sslproxy_flags DONT_VERIFY_PEER
#sslproxy_flags none
# ignore cert errors for certain sites
acl TrustedName url_regex ^https://badcerts.example.com/
sslproxy_cert_error allow TrustedName
sslproxy_cert_error deny all

==> But then, why does it not throw an error when connecting to
https://wiki.squid-cache.org?

Next I thought it might be an idea to delete any cached certs and try again.
Looking in /var/lib/squid_ssl_db/index.txt, there is an entry for the
destination:
V   121107103058Z   0757348Eunknown /CN=www.squid-cache.org
So, then I deleted 0757348E.pem to force a new cert to be generated,
and restarted squid.

Connecting to https://wiki.squid-cache.org/ resulted in a new cert
being silently generated, stored in 075734AD.pem and the https
connection signed.

What am I doing wrong?

Finally had a look at the sources:
sslproxy_flags led to Config.ssl_client.flags in cf_parser.cci, which
led to ssl_client.sslContext in cache_cf.cc, to initiateSSL() in
forward.cc, and finally to ssl_verify_cb in ssl/support.cc.

There one finds nice debug messages in section 83, so I enabled high
debugging for it:
   debug_options ALL,1 83,20 23,2 26,10 33,4 84,3
Restarted squid, and watched with
   tail -f cache.log|egrep -i "SSL|certificate"
but I don't see any certificate errors.

Any suggestions?


Thanks,
Sean


Re: [squid-users] Squid 3.2.0.14 beta is available

2011-12-20 Thread Helmut Hullen
Hallo, Amos,

Du meintest am 13.12.11:

> The Squid HTTP Proxy team is very pleased to announce the
> availability of the Squid-3.2.0.14 beta release!

Slackware binary:

  


Viele Gruesse!
Helmut


Re: [squid-users] integrating with wlc

2011-12-20 Thread Henrik Nordström
On Tue, 2011-12-20 at 14:09 +0200, E.S. Rosenberg wrote:

> About the wlc I don't know for sure yet, I can probably create a
> script/program that when presented with an IP can convert it to a
> username on the Radius server...
> But I don't know how that would then interact with squid...
> Thanks,

You can then plug that into Squid via the external ACL interface. See
external_acl_type.

  http://www.squid-cache.org/Doc/config/external_acl_type/

Regards
Henrik



Re: [squid-users] integrating with wlc

2011-12-20 Thread E.S. Rosenberg
2011/12/20 Henrik Nordström 
>
> On Mon, 2011-12-19 at 18:35 +0200, E.S. Rosenberg wrote:
> > Hi all,
> > We have a Cisco WLC controlling our local wireless network, I would
> > like it for squid to know which user is associated with the IP of the
> > wireless client, so that I can implement user based
> > restrictions/freedoms for our wireless network as well.
> > So far my searches haven't turned up anything useful so I was
> > wondering if anyone here had made that link in the past.
>
> Is it possible to somehow query the WLC or perhaps your radius
> accounting server which user is logged on to which IP?
About the wlc I don't know for sure yet, I can probably create a
script/program that when presented with an IP can convert it to a
username on the Radius server...
But I don't know how that would then interact with squid...
Thanks,
Eli
>
> Regards
> Henrik
>


Re: [squid-users] Squid 3.2.0.14 didn't work in interception mode

2011-12-20 Thread Amos Jeffries

On 21/12/2011 12:33 a.m., Nguyen Hai Nam wrote:

Hi there,

I'm building a new Squid box (3.2.0.14) on OpenIndiana 151a. The
configuration is as usual, but when Squid started up, intercept mode
didn't work.


IP NAT table already works:

# ipnat -l
List of active MAP/Redirect filters:
rdr rtls0 0.0.0.0/0 port 80 -> 10.2.176.31 port 3129 tcp

List of active sessions:
RDR 10.2.176.31 3129 <- -> 66.220.149.48   80[10.10.225.253 57093]
RDR 10.2.176.31 3129 <- -> 66.220.149.48   80[10.10.225.253 57092]




What NAT system is this?
 a PF or IPFilter?
 if PF, which OpenBSD version is it based on?

How exactly is it not working?
 ioctl() lookup failures?
 or 409 (Conflict) HTTP responses?
 or something else?

Amos


Squid starts up normally:

# tail -n 25 /usr/squid/var/logs/cache.log
2011/12/20 02:24:07 kid1| Using Least Load store dir selection
2011/12/20 02:24:07 kid1| Set Current Directory to 
/usr/squid/var/cache/squid

2011/12/20 02:24:07 kid1| Loaded Icons.
2011/12/20 02:24:07 kid1| HTCP Disabled.
2011/12/20 02:24:07 kid1| Squid plugin modules loaded: 0
2011/12/20 02:24:07 kid1| Ready to serve requests.
2011/12/20 02:24:07 kid1| Accepting HTTP Socket connections at 
local=[::]:3128 remote=[::] FD 19 flags=9
2011/12/20 02:24:07 kid1| Accepting NAT intercepted HTTP Socket 
connections at local=0.0.0.0:3129 remote=[::] FD 20 flags=41
2011/12/20 02:24:07 kid1| Done reading /usr/squid/var/cache/squid 
swaplog (0 entries)

2011/12/20 02:24:07 kid1| Finished rebuilding storage from disk.
2011/12/20 02:24:07 kid1| 0 Entries scanned
2011/12/20 02:24:07 kid1| 0 Invalid entries.
2011/12/20 02:24:07 kid1| 0 With invalid flags.
2011/12/20 02:24:07 kid1| 0 Objects loaded.
2011/12/20 02:24:07 kid1| 0 Objects expired.
2011/12/20 02:24:07 kid1| 0 Objects cancelled.
2011/12/20 02:24:07 kid1| 0 Duplicate URLs purged.
2011/12/20 02:24:07 kid1| 0 Swapfile clashes avoided.
2011/12/20 02:24:07 kid1|   Took 0.05 seconds (  0.00 objects/sec).
2011/12/20 02:24:07 kid1| Beginning Validation Procedure
2011/12/20 02:24:07 kid1|   Completed Validation Procedure
2011/12/20 02:24:07 kid1|   Validated 0 Entries
2011/12/20 02:24:07 kid1|   store_swap_size = 0.00 KB
2011/12/20 02:24:08 kid1| storeLateRelease: released 0 objects
2011/12/20 02:24:27| Squid is already running!  Process ID 2413

Squid still works fine with the proxy setting configured in the browser.

Hope to receive your kind assistance.

Best regards,
~Neddie




[squid-users] Squid 3.2.0.14 didn't work in interception mode

2011-12-20 Thread Nguyen Hai Nam

Hi there,

I'm building a new Squid box (3.2.0.14) on OpenIndiana 151a. The
configuration is as usual, but when Squid started up, intercept mode
didn't work.


IP NAT table already works:

# ipnat -l
List of active MAP/Redirect filters:
rdr rtls0 0.0.0.0/0 port 80 -> 10.2.176.31 port 3129 tcp

List of active sessions:
RDR 10.2.176.31 3129 <- -> 66.220.149.48   80[10.10.225.253 57093]
RDR 10.2.176.31 3129 <- -> 66.220.149.48   80[10.10.225.253 57092]

Squid starts up normally:

# tail -n 25 /usr/squid/var/logs/cache.log
2011/12/20 02:24:07 kid1| Using Least Load store dir selection
2011/12/20 02:24:07 kid1| Set Current Directory to 
/usr/squid/var/cache/squid

2011/12/20 02:24:07 kid1| Loaded Icons.
2011/12/20 02:24:07 kid1| HTCP Disabled.
2011/12/20 02:24:07 kid1| Squid plugin modules loaded: 0
2011/12/20 02:24:07 kid1| Ready to serve requests.
2011/12/20 02:24:07 kid1| Accepting HTTP Socket connections at 
local=[::]:3128 remote=[::] FD 19 flags=9
2011/12/20 02:24:07 kid1| Accepting NAT intercepted HTTP Socket 
connections at local=0.0.0.0:3129 remote=[::] FD 20 flags=41
2011/12/20 02:24:07 kid1| Done reading /usr/squid/var/cache/squid 
swaplog (0 entries)

2011/12/20 02:24:07 kid1| Finished rebuilding storage from disk.
2011/12/20 02:24:07 kid1| 0 Entries scanned
2011/12/20 02:24:07 kid1| 0 Invalid entries.
2011/12/20 02:24:07 kid1| 0 With invalid flags.
2011/12/20 02:24:07 kid1| 0 Objects loaded.
2011/12/20 02:24:07 kid1| 0 Objects expired.
2011/12/20 02:24:07 kid1| 0 Objects cancelled.
2011/12/20 02:24:07 kid1| 0 Duplicate URLs purged.
2011/12/20 02:24:07 kid1| 0 Swapfile clashes avoided.
2011/12/20 02:24:07 kid1|   Took 0.05 seconds (  0.00 objects/sec).
2011/12/20 02:24:07 kid1| Beginning Validation Procedure
2011/12/20 02:24:07 kid1|   Completed Validation Procedure
2011/12/20 02:24:07 kid1|   Validated 0 Entries
2011/12/20 02:24:07 kid1|   store_swap_size = 0.00 KB
2011/12/20 02:24:08 kid1| storeLateRelease: released 0 objects
2011/12/20 02:24:27| Squid is already running!  Process ID 2413

Squid still works fine with the proxy setting configured in the browser.

Hope to receive your kind assistance.

Best regards,
~Neddie


Re: [squid-users] Make Dansguardian working with squid 3.2 + NTLM: Cannot initialise conversion from UTF-16LE to UTF-8

2011-12-20 Thread Amos Jeffries

On 20/12/2011 11:50 p.m., David Touzeau wrote:

Dear all

I'm writing this topic here because it seems that the dansguardian
mailing list is very silent.

I have set up Squid 3.2 with NTLM as follows:

  #- NTLM AUTH settings
auth_param ntlm program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
auth_param basic program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-basic
auth_param ntlm children 15
auth_param basic children 15
auth_param basic credentialsttl 5 hours
auth_param basic casesensitive off
auth_param basic realm Squid/NTLM proxy-caching web server
authenticate_cache_garbage_interval 10 seconds
auth_param basic credentialsttl 2 hour

authenticate_ttl 1 hour

authenticate_ip_ttl 60 seconds

** The NTLM authentication with "only squid" works fine ***

But when adding dansguardian with the proxy-ntlm plugin, there is this error:


Dec 20 11:48:18 squid32-64 squid[22693]: storeLateRelease: released 0
objects
Dec 20 11:48:21 squid32-64 squid[22693]: Starting new ntlmauthenticator
helpers...
Dec 20 11:48:21 squid32-64 squid[22693]: helperOpenServers: Starting
1/15 'ntlm_auth' processes
Dec 20 11:48:21 squid32-64 dansguardian[22681]: NTLM - Cannot initialise
conversion from UTF-16LE to UTF-8: Invalid argument
Dec 20 11:48:21 squid32-64 dansguardian[22681]: Auth plugin returned
error code: -2
Dec 20 11:48:21 squid32-64 dansguardian[22681]: NTLM - Cannot initialise
conversion from UTF-16LE to UTF-8: Invalid argument
Dec 20 11:48:21 squid32-64 dansguardian[22681]: Auth plugin returned
error code: -2

Has anybody met this issue?
What does "Cannot initialise conversion from UTF-16LE to UTF-8" mean?


You are the first to mention it. Although, as you know, this mailing list 
is not a DG support list. No matter how quiet those lists appear to be, 
they are the right place to ask. The DG software is old and simple ==> not 
many bugs or problems left to discuss in their lists ==> "quiet".


Amos


Re: [squid-users] Re : [squid-users] Re : [squid-users] Anonymous FTP and login pass url based

2011-12-20 Thread Amos Jeffries

On 20/12/2011 9:35 p.m., Henrik Nordström wrote:

On Mon, 2011-12-19 at 23:53 +1300, Amos Jeffries wrote:


Do you have a trace from this server when requesting something from the
login-required area of the site?

If the requested URL contains login credentials then anonymous FTP login
SHOULD NOT be attempted.

Regards
Henrik



Sorry. My brain seems to have died :(   see the src/ftp.cc checkAuth() 
function for reality.


Default is username "anonymous" with password from config file (default 
"Squid@"). Which gets overridden by HTTP Basic auth headers (if any). 
Which then gets overridden by URL details (if any).


The final result of all that merging is what gets sent to the server in 
a single USER command. (I was thinking of it incorrectly as the order of 
several USER commands)
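The precedence Amos describes (config default, overridden by the HTTP Basic header, overridden by URL userinfo) can be sketched as follows. This is illustrative only, not the actual src/ftp.cc checkAuth() code:

```python
def ftp_login(url_user=None, url_pass=None, basic_user=None, basic_pass=None,
              cfg_user="anonymous", cfg_pass="Squid@"):
    """Merge FTP credential sources in rising precedence:
    config default < HTTP Basic auth header < URL userinfo."""
    user, pw = cfg_user, cfg_pass          # start from the config default
    if basic_user is not None:             # Basic auth overrides the default
        user, pw = basic_user, basic_pass
    if url_user is not None:               # URL details override everything
        user = url_user
        if url_pass is not None:
            pw = url_pass
    return user, pw
```

The merged result is what would go out in the single USER command.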


Amos


[squid-users] Make Dansguardian working with squid 3.2 + NTLM: Cannot initialise conversion from UTF-16LE to UTF-8

2011-12-20 Thread David Touzeau
Dear all

I'm writing this topic here because it seems that the dansguardian
mailing list is very silent.

I have set up Squid 3.2 with NTLM as follows:

 #- NTLM AUTH settings
auth_param ntlm program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
auth_param basic program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-basic
auth_param ntlm children 15
auth_param basic children 15
auth_param basic credentialsttl 5 hours
auth_param basic casesensitive off
auth_param basic realm Squid/NTLM proxy-caching web server
authenticate_cache_garbage_interval 10 seconds
auth_param basic credentialsttl 2 hour

authenticate_ttl 1 hour

authenticate_ip_ttl 60 seconds

** The NTLM authentication with "only squid" works fine ***

But when adding dansguardian with the proxy-ntlm plugin, there is this error:


Dec 20 11:48:18 squid32-64 squid[22693]: storeLateRelease: released 0
objects
Dec 20 11:48:21 squid32-64 squid[22693]: Starting new ntlmauthenticator
helpers...
Dec 20 11:48:21 squid32-64 squid[22693]: helperOpenServers: Starting
1/15 'ntlm_auth' processes
Dec 20 11:48:21 squid32-64 dansguardian[22681]: NTLM - Cannot initialise
conversion from UTF-16LE to UTF-8: Invalid argument
Dec 20 11:48:21 squid32-64 dansguardian[22681]: Auth plugin returned
error code: -2
Dec 20 11:48:21 squid32-64 dansguardian[22681]: NTLM - Cannot initialise
conversion from UTF-16LE to UTF-8: Invalid argument
Dec 20 11:48:21 squid32-64 dansguardian[22681]: Auth plugin returned
error code: -2

Has anybody met this issue?
What does "Cannot initialise conversion from UTF-16LE to UTF-8" mean?


Best regards





Re: [squid-users] Read timeout Error

2011-12-20 Thread Amos Jeffries

On 20/12/2011 9:56 p.m., Sekar Duraisamy wrote:

Hi ,

I am getting many read timeouts while using the Squid proxy with
persistent connections turned off.


A strong sign that there is something broken at the TCP level of the 
networks your traffic travels over (yours, your suppliers or peers). 
Usually due to ICMP blocking preventing TCP control messages moving 
around the network. Possibly also ECN or window-scaling issues if you 
have old hardware/software sitting around the network.





What is the maximum values for read_timeout and connect_timeout.


Just over 136 years.  Operating system limits on TCP packets and 
connection state timeouts will occur much sooner.


Read timeouts are not solved by extending the read wait, but by 
decreasing it, causing squid to identify the problem faster and perform 
whatever recovery it can. The default is 15 minutes, way more than long 
enough for most of the Internet (only spacecraft control connections 
need longer timeouts).


It depends on your squid version, but connect_timeout may be for one TCP 
connection (Squid-3.2.0.9 and later), or for a whole set of IP addresses 
including the DNS lookup time (Squid-3.2.0.8 and older). That 
difference affects how to tune for best results: on 3.2.0.9+ you should 
tune connect_timeout *down* to make Squid fail over quickly and avoid 
long TCP waits, reaching a working IP faster; on older Squid you tune it 
_up_ to allow more time for the TCP wait to happen (maybe several times) 
and a good IP to be found somewhere in the set.
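On a 3.2.0.9+ Squid, that advice might translate into a squid.conf fragment like this (values illustrative, not recommendations):

```
# Squid >= 3.2.0.9: connect_timeout covers a single TCP attempt,
# so a low value gives fast failover to the next server IP
connect_timeout 15 seconds
# read_timeout: well under the 15-minute default, to surface dead
# connections sooner
read_timeout 5 minutes
```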



NP: I recommend enabling persistent connections, since that reduces the 
number of problems you face. If you need constant turnover of ports (i.e. 
for a reverse-proxy or thousands of clients) you can set the persistence 
timeout low to cause that and still gain most of the benefits of HTTP 
persistence.


Amos



Re: [squid-users] Squid with Kerberos auth

2011-12-20 Thread Amos Jeffries

On 20/12/2011 7:40 a.m., Wladner Klimach wrote:

Look at this:

Every 2.0s: lsof -i :3128
Mon Dec
19 16:38:22 2011

COMMAND   PID  USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
squid   20367 squid   12u  IPv6 2474452  0t0  TCP
trotsky.redecamara.camara.gov.br:squid->cainf-269642.redecamara.camara.gov.br:4225
(ESTABLISHED)
squid   20367 squid   18u  IPv6 2473286  0t0  TCP
trotsky.redecamara.camara.gov.br:squid->cainf-269642.redecamara.camara.gov.br:4202
(ESTABLISHED)
squid   20367 squid   22u  IPv6 2474474  0t0  TCP
trotsky.redecamara.camara.gov.br:squid->cainf-269642.redecamara.camara.gov.br:4229
(ESTABLISHED)
squid   20367 squid   24u  IPv6 2473304  0t0  TCP
trotsky.redecamara.camara.gov.br:squid->cainf-269642.redecamara.camara.gov.br:4204
(ESTABLISHED)
squid   20367 squid   28u  IPv6 2473756  0t0  TCP
trotsky.redecamara.camara.gov.br:squid->cainf-269642.redecamara.camara.gov.br:4210
(ESTABLISHED)
squid   20367 squid   34u  IPv6 2474462  0t0  TCP
trotsky.redecamara.camara.gov.br:squid->cainf-269642.redecamara.camara.gov.br:4227
(ESTABLISHED)
squid   20367 squid   38u  IPv6 2474457  0t0  TCP
trotsky.redecamara.camara.gov.br:squid->cainf-269642.redecamara.camara.gov.br:4226
(ESTABLISHED)
squid   20367 squid   42u  IPv6 2474467  0t0  TCP
trotsky.redecamara.camara.gov.br:squid->cainf-269642.redecamara.camara.gov.br:4228
(ESTABLISHED)
squid   20367 squid   44u  IPv6 2474477  0t0  TCP
trotsky.redecamara.camara.gov.br:squid->cainf-269642.redecamara.camara.gov.br:4230
(ESTABLISHED)
squid   20367 squid  156u  IPv6 2472223  0t0  TCP *:squid (LISTEN)


It only has IPv6 connection types. Is this a problem, or does it point
to a possible bottleneck?


Problem? no.

Possible bottleneck? It depends on whether there is slow IPv6 connectivity 
between Squid and that remote machine (i.e. a tunnel with wrapping 
overheads). ~75% of networks have faster IPv6 connectivity than IPv4 
connectivity.


Amos


Re: [squid-users] squid occupying 100% cpu at free time also

2011-12-20 Thread Benjamin

On 12/20/2011 02:06 PM, Ralf Hildebrandt wrote:

* Benjamin:

Hi,

When I have heavy traffic, squid always consumes 100% CPU.
Is there any way to tune squid or the OS to reduce CPU
utilization?

Usually restarting squid fixes things.
It does for me


Hi,

When I restart the squid service it seems OK; I can see the CPU is 
free. But while restarting the squid service I lose mem objects, so that 
is not a convenient method.

I will try the latest version.

Regards,
Benjamin


Re: [squid-users] Tool for calculating the object-freshness

2011-12-20 Thread Amos Jeffries

On 20/12/2011 7:40 p.m., Tom Tux wrote:

Hi

I have found the following web-based tool to calculate object freshness:

http://web.forret.com/tools/squid.asp

If it's useful for others too, can a site-admin publish this url on
"squid-cache.org" (perhaps 'Related Software')?

Thanks and regards,
Tom


Nice tool. Although if you use it, I recommend also using the redbot.org 
tool to spot problems with other details which affect the caching service.


 * it displays what it calls "header contents" dates in a format which 
is invalid for HTTP.
 * it omits several of the age limit directives Squid considers for the 
algorithm (max_stale, and refresh_pattern max-stale=N)
 * it omits about half of the cache-control directives which Squid uses 
to adjust the algorithm (max-stale, stale-if-error, 
stale-while-revalidate, must-revalidate, no-cache)
 * it does not permit HTTP/1.1 server cache-control header contents to 
be entered for consideration, this may drastically alter the algorithm 
output
 * it crashes the script if you put some very common invalid header 
values into the input fields.
 * it assumes ETag: and Vary: are identical on the client request, but 
does not state that anywhere for the script user to be aware of the 
relationship.


It may have worked well for Squid-2.5 and early 2.6/3.0 responses, but 
newer Squids, as you can see from the above, have quite a few more 
protocol details and config settings involved in the calculation.
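The core refresh_pattern heuristic those directives feed into can be sketched as follows. This is a simplified illustration: it ignores Cache-Control overrides, max_stale, and the other directives Amos lists, and all values are in seconds:

```python
def is_fresh(age, resp_min, percent, resp_max, lm_age=None):
    """Simplified refresh_pattern freshness check.
    age      -- object age in seconds
    resp_min/resp_max -- refresh_pattern min/max bounds (seconds)
    percent  -- refresh_pattern percent value
    lm_age   -- seconds between Last-Modified and caching time, if known
    """
    if age <= resp_min:
        return True                 # younger than min: fresh
    if age > resp_max:
        return False                # older than max: stale
    if lm_age is not None:          # lm-factor heuristic
        return age < (percent / 100.0) * lm_age
    return False                    # no heuristic applies: stale

assert is_fresh(30, 60, 20, 4320) is True      # under min
assert is_fresh(5000, 60, 20, 4320) is False   # over max
```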



Side Note:
  we are working upstream on JavaScript extensions to the cache manager 
reports. If anyone feels interest towards creating a JS script which can 
take cached object headers and config details and produce a little 
service-state report like that tools output please contact kinkie via 
the squid-dev mailing list about it.


Amos



Re: [squid-users] Squid logs not showing original client IP

2011-12-20 Thread Sekar Duraisamy
Thank you all. Yes. My LB is not sending the original IP to squid.

Regards,
Sekar

2011/12/19 Henrik Nordström :
> On Sat, 2011-12-17 at 19:15 +0530, Sekar Duraisamy wrote:
>
>> I have configured the log format with %{X-Forwarded-For}>h . But in
>> this field shows "-" . Not showing original client IP.
>
> Is the load balancer adding a X-Forwarded-For header?
>
>> How to configure the squid to find the original client IP in squid logs ?
>
> How do the load balancer indicate the original client IP in the request
> sent to Squid?
>
> Regards
> Henrik
>
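If the load balancer can be made to add the header, the Squid side (built with --enable-follow-x-forwarded-for) would look roughly like this sketch (addresses hypothetical):

```
# trust X-Forwarded-For only from the load balancer
acl lb src 192.0.2.10
follow_x_forwarded_for allow lb
follow_x_forwarded_for deny all
# log the header alongside the direct client address
logformat xff %ts.%03tu %>a %{X-Forwarded-For}>h %rm %ru
```

With the header followed from a trusted source, Squid can then treat the indirect client address as the client in logs and ACLs.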


[squid-users] Read timeout Error

2011-12-20 Thread Sekar Duraisamy
Hi ,

I am getting many read timeouts while using the Squid proxy with
persistent connections turned off.

What is the maximum values for read_timeout and connect_timeout.

Thanks in Advance,
Sekar


Re: [squid-users] squid occupying 100% cpu at free time also

2011-12-20 Thread Ralf Hildebrandt
* Henrik Nordström :
> On Tue, 2011-12-20 at 14:02 +0530, Benjamin wrote:
> 
> > When I remove traffic from the router to squid, so there is no
> > traffic on the squid box, I still see the same 100% CPU
> > utilization in top.
> 
> Sounds like a bug.
> 
> First step, upgrade to a current release. 3.1.10 is pretty dated by now
> (a year to be exact). Current release is 3.1.17.

3.1.18!
(need to update as well)

-- 
Ralf Hildebrandt
  Geschäftsbereich IT | Abteilung Netzwerk
  Charité - Universitätsmedizin Berlin
  Campus Benjamin Franklin
  Hindenburgdamm 30 | D-12203 Berlin
  Tel. +49 30 450 570 155 | Fax: +49 30 450 570 962
  ralf.hildebra...@charite.de | http://www.charite.de



Re: [squid-users] squid occupying 100% cpu at free time also

2011-12-20 Thread Henrik Nordström
On Tue, 2011-12-20 at 14:02 +0530, Benjamin wrote:

> When I remove traffic from the router to squid, so there is no
> traffic on the squid box, I still see the same 100% CPU
> utilization in top.

Sounds like a bug.

First step, upgrade to a current release. 3.1.10 is pretty dated by now
(a year to be exact). Current release is 3.1.17.

Then if you still see this, please run

   /path/to/sbin/squid -k debug ; sleep 5; /path/to/sbin/squid -k debug

then file a bug report at bugs.squid-cache.org describing the problem
and attach your cache.log.

Regards
Henrik



Re: [squid-users] integrating with wlc

2011-12-20 Thread Henrik Nordström
On Mon, 2011-12-19 at 18:35 +0200, E.S. Rosenberg wrote:
> Hi all,
> We have a Cisco WLC controlling our local wireless network, I would
> like it for squid to know which user is associated with the IP of the
> wireless client, so that I can implement user based
> restrictions/freedoms for our wireless network as well.
> So far my searches haven't turned up anything useful so I was
> wondering if anyone here had made that link in the past.

Is it possible to somehow query the WLC, or perhaps your RADIUS
accounting server, for which user is logged on to which IP?
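As one concrete shape this could take (an assumed setup, not something from the thread): if the WLC sends RADIUS accounting to a FreeRADIUS server with SQL logging enabled, the user-to-IP mapping can be read from the radacct table. The table and column names below follow the stock FreeRADIUS schema; adjust them to your server. An in-memory SQLite database stands in for the real accounting store:

```python
# Sketch: look up the user behind an IP in a FreeRADIUS-style radacct table.
# Schema (radacct, username, framedipaddress, acctstoptime) is assumed from
# the stock FreeRADIUS SQL module; verify against your deployment.
import sqlite3

def user_for_ip(conn, ip):
    """Return the username of the open accounting session for this IP, if any."""
    row = conn.execute(
        "SELECT username FROM radacct "
        "WHERE framedipaddress = ? AND acctstoptime IS NULL "
        "ORDER BY acctstarttime DESC LIMIT 1",
        (ip,),
    ).fetchone()
    return row[0] if row else None

# In-memory stand-in for the real accounting database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE radacct (username TEXT, framedipaddress TEXT,"
             " acctstarttime TEXT, acctstoptime TEXT)")
conn.execute("INSERT INTO radacct VALUES"
             " ('alice', '10.0.0.42', '2011-12-20 09:00:00', NULL)")
print(user_for_ip(conn, '10.0.0.42'))  # alice
```

A lookup like this could then back a squid external_acl_type helper keyed on the client IP.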

Regards
Henrik



Re: [squid-users] squid occupying 100% cpu at free time also

2011-12-20 Thread Ralf Hildebrandt
* Benjamin :
> Hi,
> 
> When i have heavy traffic that time squid always consume 100% cpu
> utilization. Is there anyway to tune squid or OS to reduce cpu
> utilization?

Usually restarting squid fixes things.
It does for me.

-- 
Ralf Hildebrandt



Re: [squid-users] Re : [squid-users] Re : [squid-users] Anonymous FTP and login pass url based

2011-12-20 Thread Henrik Nordström
mån 2011-12-19 klockan 23:53 +1300 skrev Amos Jeffries:

> Do you have a trace from this server when requesting something from the 
> login-required area of the site?

If the requested URL contains login credentials then anonymous FTP login
SHOULD NOT be attempted.
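To illustrate the rule (an editor's sketch, not squid's actual code): an FTP client should fall back to anonymous login only when the URL carries no credentials of its own.

```python
# Sketch: choose FTP login credentials from the request URL.
# Falls back to conventional anonymous credentials only when the URL
# contains none; the 'guest@' password is the traditional convention.
from urllib.parse import urlsplit

def ftp_login_for(url):
    """Return the (user, password) an FTP client should log in with."""
    parts = urlsplit(url)
    if parts.username:
        # URL carries credentials: anonymous login must not be attempted.
        return parts.username, parts.password or ''
    return 'anonymous', 'guest@'

print(ftp_login_for('ftp://user:secret@ftp.example.com/pub/file'))
print(ftp_login_for('ftp://ftp.example.com/pub/file'))
```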

Regards
Henrik



[squid-users] squid occupying 100% cpu at free time also

2011-12-20 Thread Benjamin

Hi,

When I have heavy traffic, squid always consumes 100% CPU. Is there any
way to tune squid or the OS to reduce CPU utilization?


When I remove the traffic from the router to squid, so that there is no
traffic on the squid box, I still see the same 100% CPU utilization in
top. What could be the reason for that? Does squid occupy the CPU even
when there is no traffic?


OS: CentOS, 64-bit
squid -v
Squid Cache: Version 3.1.10
configure options:  '--build=x86_64-unknown-linux-gnu' 
'--host=x86_64-unknown-linux-gnu' '--target=x86_64-redhat-linux-gnu' 
'--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' 
'--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' 
'--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' 
'--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' 
'--mandir=/usr/share/man' '--infodir=/usr/share/info' 
'--exec_prefix=/usr' '--libexecdir=/usr/lib64/squid' 
'--localstatedir=/var' '--datadir=/usr/share/squid' 
'--sysconfdir=/etc/squid' '--with-logdir=$(localstatedir)/log/squid' 
'--with-pidfile=$(localstatedir)/run/squid.pid' 
'--disable-dependency-tracking' '--enable-arp-acl' 
'--enable-follow-x-forwarded-for' 
'--enable-auth=basic,digest,ntlm,negotiate' 
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL,DB,POP3,squid_radius_auth' 
'--enable-ntlm-auth-helpers=smb_lm,no_check,fakeauth' 
'--enable-digest-auth-helpers=password,ldap,eDirectory' 
'--enable-negotiate-auth-helpers=squid_kerb_auth' 
'--enable-external-acl-helpers=ip_user,ldap_group,session,unix_group,wbinfo_group' 
'--enable-cache-digests' '--enable-cachemgr-hostname=localhost' 
'--enable-delay-pools' '--enable-epoll' '--enable-icap-client' 
'--enable-ident-lookups' '--enable-linux-netfilter' 
'--enable-referer-log' '--enable-removal-policies=heap,lru' 
'--enable-snmp' '--enable-ssl' '--enable-storeio=aufs,diskd,ufs' 
'--with-aufs-threads=128' '--enable-useragent-log' '--enable-wccpv2' 
'--enable-esi' '--with-aio' '--with-default-user=squid' 
'--with-filedescriptors=16384' '--with-dl' '--with-openssl' 
'--with-pthreads' '--enable-zph-qos' '--enable-err-languages=English' 
'--enable-default-err-language=English' 
'build_alias=x86_64-unknown-linux-gnu' 
'host_alias=x86_64-unknown-linux-gnu' 
'target_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall 
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m64 -mtune=generic -fpie' 'LDFLAGS=-pie' 
'CXXFLAGS=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions 
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fpie' 
--with-squid=/root/rpmbuild/BUILD/squid-3.1.10


We have Intel(R) Xeon(R) CPU E5504  @ 2.00GHz ( 4 core ).

Concurrent users are 2500-3000 and bandwidth usage is 250-300 Mbps.

We are using squid only for caching. Are my squid configure options OK
for my requirements, or do I need to change them?


Also, please advise whether my current processor is adequate for my
network traffic, or suggest a better one.



Regards,
Benjamin