Re: [squid-users] About access.log hourly?

2012-03-13 Thread Amos Jeffries

On 14.03.2012 14:54, Ibrahim Lubis wrote:

I use cron...


Then the answer is quite simply to set it to run its command every 15
minutes and bump up your logfile_rotate limit so logs are not lost too
early.
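
For example (a sketch; the squid binary path and retention count are
assumptions, adjust for your install):

  # crontab entry: rotate Squid's logs every 15 minutes
  */15 * * * * /usr/local/squid/sbin/squid -k rotate

  # squid.conf: keep 96 rotated files = 24 hours worth at 4 per hour
  logfile_rotate 96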


Amos




-Original Message-

From: Amos Jeffries

On 13.03.2012 23:09, Ibrahim Lubis wrote:

How can I configure squid to create a new access.log file every 15
minutes, so in 1 hour I have 4 different log files...


What are you using to manage the Squid logs? cron? logrotate?
something else?

Amos




Re: [squid-users] Re: Squid Ldap Authenticators

2012-03-13 Thread Amos Jeffries

On 14.03.2012 03:54, guest01 wrote:

Hi,

Sorry, I pressed the send button by mistake ...

We are having strange Squid troubles. First, let me describe our setup:


- 4 HP G6/G7 DL380 servers with 16 CPUs and 28GB RAM with RHEL 5.4-5.8
64bit and Squid 3.1.12 (custom compiled)
Squid Cache: Version 3.1.12
configure options:  '--enable-ssl' '--enable-icap-client'
'--sysconfdir=/etc/squid' '--enable-async-io' '--enable-snmp'
'--enable-poll' '--with-maxfd=32768' '--enable-storeio=aufs'
'--enable-removal-policies=heap,lru' '--enable-epoll'
'--disable-ident-lookups' '--enable-truncate'
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid'
'--with-default-user=squid' '--prefix=/opt/squid' '--enable-auth=basic
digest ntlm negotiate'
'-enable-negotiate-auth-helpers=squid_kerb_auth'
--with-squid=/home/squid/squid-3.1.12 --enable-ltdl-convenience

- Each server has two instances for kerberos/ntlm authentication and
two instances for LDAP authentication (different customers)
- we have a hardware loadbalancer which is balancing requests for our
kerberos-customers (4x2 instances) and ldap-customers (4x2 instances),
each has a different IP address.
- average load values are approx 0.5 (5min values)
- approx 60 RPS per instance (equally distributed -> 16 * 60 => 960 RPS)

- up to 150Mbit/s traffic per server
- ICAP servers for content adaptation (multiple servers with a hardware
loadbalancer in front of them)

From time to time we are having troubles with our Squid servers, which
may not be a problem related to Squid; I suspect an OS issue.
Nevertheless, sometimes the servers don't respond to requests (even
SSH requests) or logging in takes forever (reverse lookup failure?) or,
even worse, sometimes the server interface is just down (there is no
indication of any problem at the switch port level). If we check the
squidclient output, we can see some hanging ldap authenticators:

squid@xlsqit01 /opt/squid/bin $ ./squidclient -h 10.122.125.23
cache_object://10.122.125.23/basicauthenticator
HTTP/1.0 200 OK
Server: squid/3.1.12
Mime-Version: 1.0
Date: Tue, 13 Mar 2012 13:34:07 GMT
Content-Type: text/plain
Expires: Tue, 13 Mar 2012 13:34:07 GMT
Last-Modified: Tue, 13 Mar 2012 13:34:07 GMT
X-Cache: MISS from xlsqip02_3
Via: 1.0 xlsqip02_3 (squid/3.1.12)
Connection: close

Basic Authenticator Statistics:
program: /opt/squid/libexec/squid_ldap_auth
number active: 20 of 20 (0 shutting down)
requests sent: 13316
replies received: 13312
queue length: 0
avg service time: 4741 msec

  #   FD    PID  # Requests  Flags     Time  Offset  Request
  1   12  16038        2150      B  125.885       0  user1 pw1\n
  2   24  16043          85      B  119.562       0  user2 pw2\n
  3   32  16049          63      B   13.639       0  user3 pw3\n
  4   43  16055          21      B  116.143       0  user4 pw4\n
  5   46  16059          12         189.002       0  (none)
  6   50  16064           1         189.003       0  (none)
  7   56  16069           2           0.079       0  (none)
  8   60  16074           0           0.000       0  (none)
  9   65  16079           0           0.000       0  (none)
 10   86  16084           0           0.000       0  (none)
 11   88  16095           0           0.000       0  (none)
 12   90  16101           0           0.000       0  (none)
 13   92  16117           0           0.000       0  (none)
 14   95  16122           0           0.000       0  (none)
 15   97  16130           0           0.000       0  (none)
 16   99  16138           0           0.000       0  (none)
 17  101  16144           0           0.000       0  (none)
 18  104  16150           0           0.000       0  (none)
 19  107  16162           0           0.000       0  (none)
 20  109  16173           0           0.000       0  (none)


Looks like you can save some resources by dropping that down to 10 
helpers. But re-evaluate that after they are fixed in case the loading 
goes up after that.
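
In squid.conf that is the helper children setting (a sketch, assuming
the basic scheme shown above):

  auth_param basic children 10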




Flags key:

   B = BUSY
   W = WRITING
   C = CLOSING
   S = SHUTDOWN PENDING

2012/03/13 03:00:04| Ready to serve requests.
squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP server'
squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP server'
squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP server'
squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP server'
squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP server'
squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP server'
squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP server'
squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP server'
squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP server'

Re: [squid-users] About access.log hourly?

2012-03-13 Thread Ibrahim Lubis
I use cron...

-Original Message-

From: Amos Jeffries
Sent: 13 Mar 2012 10:15:54 GMT
To: squid-users@squid-cache.org
Subject: Re: [squid-users] About access.log hourly?

On 13.03.2012 23:09, Ibrahim Lubis wrote:
> How can I configure squid to create a new access.log file every 15
> minutes, so in 1 hour I have 4 different log files...

What are you using to manage the Squid logs? cron? logrotate?
something else?

Amos


Re: [squid-users] Need 413 status code when reply_body_max_size is hit

2012-03-13 Thread Amos Jeffries

On 14.03.2012 04:50, squid-list wrote:
I limit the maximum file size an employee can download to our network
using reply_body_max_size 100 MB proxy_user1. If this limit is hit,
Squid returns a 403.

My problem is that I would like to differentiate between the 403 status
code that comes from a target website that does not allow access at all
to download a specific file and the intentional 403 that is generated
by Squid because the file size exceeds the limit we set.

The reason is that I want to show the user a message like "The target
website does not allow access" or "You have requested a file to
download that is too large. Please contact the IT department". I was
thinking that the 413 is useful for that.


Apart from the minor detail that 413 applies only to *request* size:
client software can get confused if the zero-length *request* body on a
GET is "too big", for example. Yes, I have looked into this.




Is there a way to change the status code/message when a user hits the
reply_body_max_size and differentiate that case? Is there any other
workaround?


If your Squid is new enough (3.2 beta) you can use deny_info to set the
status code delivered.

I recommend picking some high unused number for this, 49x or such, until
somebody writes up an IETF specification for this type of usage and
allocates a code.
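
Something along these lines (a sketch; the 493 code and the custom
error page name ERR_TOO_BIG_CUSTOM are hypothetical, and if I recall
the syntax right the status-code prefix on deny_info needs 3.2):

  # proxy_user1 is the ACL from your existing config
  reply_body_max_size 100 MB proxy_user1
  deny_info 493:ERR_TOO_BIG_CUSTOM proxy_user1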


Amos



[squid-users] Need 413 status code when reply_body_max_size is hit

2012-03-13 Thread squid-list
I limit the maximum file size an employee can download to our network using 
reply_body_max_size 100 MB proxy_user1. If this limit is hit, Squid returns a 
403. 

My problem is that I would like to differentiate between the 403 status code 
that comes from a target website that does not allow access at all to download 
a specific file and the intentional 403 that is generated by Squid 
because the file size exceeds the limit we set. 

The reason is that I want to show the user a message like "The target 
website does not allow access" or "You have requested a file to download that 
is too large. Please contact the IT department". I was thinking that the 413 
is useful for that.

Is there a way to change the status code/message when a user hits the 
reply_body_max_size and differentiate that case? Is there any other 
workaround?

Thanks a lot for your help.


[squid-users] Re: Squid Ldap Authenticators

2012-03-13 Thread guest01
Hi,

Sorry, I pressed the send button by mistake ...

We are having strange Squid troubles. First, let me describe our setup:

- 4 HP G6/G7 DL380 servers with 16 CPUs and 28GB RAM with RHEL 5.4-5.8
64bit and Squid 3.1.12 (custom compiled)
Squid Cache: Version 3.1.12
configure options:  '--enable-ssl' '--enable-icap-client'
'--sysconfdir=/etc/squid' '--enable-async-io' '--enable-snmp'
'--enable-poll' '--with-maxfd=32768' '--enable-storeio=aufs'
'--enable-removal-policies=heap,lru' '--enable-epoll'
'--disable-ident-lookups' '--enable-truncate'
'--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid'
'--with-default-user=squid' '--prefix=/opt/squid' '--enable-auth=basic
digest ntlm negotiate'
'-enable-negotiate-auth-helpers=squid_kerb_auth'
--with-squid=/home/squid/squid-3.1.12 --enable-ltdl-convenience

- Each server has two instances for kerberos/ntlm authentication and
two instances for LDAP authentication (different customers)
- we have a hardware loadbalancer which is balancing requests for our
kerberos-customers (4x2 instances) and ldap-customers (4x2 instances),
each has a different IP address.
- average load values are approx 0.5 (5min values)
- approx 60 RPS per instance (equally distributed -> 16 * 60 => 960 RPS)
- up to 150Mbit/s traffic per server
- ICAP servers for content adaptation (multiple servers with a hardware
loadbalancer in front of them)

From time to time we are having troubles with our Squid servers, which
may not be a problem related to Squid; I suspect an OS issue.
Nevertheless, sometimes the servers don't respond to requests (even
SSH requests) or logging in takes forever (reverse lookup failure?) or,
even worse, sometimes the server interface is just down (there is no
indication of any problem at the switch port level). If we check the
squidclient output, we can see some hanging ldap authenticators:

squid@xlsqit01 /opt/squid/bin $ ./squidclient -h 10.122.125.23
cache_object://10.122.125.23/basicauthenticator
HTTP/1.0 200 OK
Server: squid/3.1.12
Mime-Version: 1.0
Date: Tue, 13 Mar 2012 13:34:07 GMT
Content-Type: text/plain
Expires: Tue, 13 Mar 2012 13:34:07 GMT
Last-Modified: Tue, 13 Mar 2012 13:34:07 GMT
X-Cache: MISS from xlsqip02_3
Via: 1.0 xlsqip02_3 (squid/3.1.12)
Connection: close

Basic Authenticator Statistics:
program: /opt/squid/libexec/squid_ldap_auth
number active: 20 of 20 (0 shutting down)
requests sent: 13316
replies received: 13312
queue length: 0
avg service time: 4741 msec

  #   FD    PID  # Requests  Flags     Time  Offset  Request
  1   12  16038        2150      B  125.885       0  user1 pw1\n
  2   24  16043          85      B  119.562       0  user2 pw2\n
  3   32  16049          63      B   13.639       0  user3 pw3\n
  4   43  16055          21      B  116.143       0  user4 pw4\n
  5   46  16059          12         189.002       0  (none)
  6   50  16064           1         189.003       0  (none)
  7   56  16069           2           0.079       0  (none)
  8   60  16074           0           0.000       0  (none)
  9   65  16079           0           0.000       0  (none)
 10   86  16084           0           0.000       0  (none)
 11   88  16095           0           0.000       0  (none)
 12   90  16101           0           0.000       0  (none)
 13   92  16117           0           0.000       0  (none)
 14   95  16122           0           0.000       0  (none)
 15   97  16130           0           0.000       0  (none)
 16   99  16138           0           0.000       0  (none)
 17  101  16144           0           0.000       0  (none)
 18  104  16150           0           0.000       0  (none)
 19  107  16162           0           0.000       0  (none)
 20  109  16173           0           0.000       0  (none)

Flags key:

   B = BUSY
   W = WRITING
   C = CLOSING
   S = SHUTDOWN PENDING

2012/03/13 03:00:04| Ready to serve requests.
squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP server'
squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP server'
squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP server'
squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP server'
squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP server'
squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP server'
squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP server'
squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP server'
squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP server'
squid_ldap_auth: WARNING, could not bind to binddn 'Can't contact LDAP server'

Testing the ldap authentication at the CLI level, it works without
any problems:

root@xlsqip02 ~ #  /op

[squid-users] Re: squid 3.1 - endless loop IIS webserver

2012-03-13 Thread kadvar
Hi Amos, that worked brilliantly, thanks a lot!

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-3-1-endless-loop-IIS-webserver-tp4465329p4469087.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Help with a tcp_miss/200 issue

2012-03-13 Thread James Ashton
Thanks Amos,

The web servers reply to squid with these headers

=
Cache-Control:    max-age=60
Connection:       Keep-Alive
Content-Encoding: gzip
Content-Length:   15139
Content-Type:     text/html; charset=UTF-8
Date:             Tue, 13 Mar 2012 12:42:26 GMT
Expires:          Tue, 13 Mar 2012 12:43:26 GMT
Keep-Alive:       timeout=15, max=5000
Server:           Apache/2.2.15 (CentOS)
Vary:             Accept-Encoding,User-Agent
X-Pingback:       http://planetphotoshop.com/xmlrpc.php
=
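
A dump like this can be captured straight from the origin, bypassing
Squid, with something along these lines (URL as in the earlier mail;
assumes direct access to the origin box):

  curl -s -D - -o /dev/null http://planetphotoshop.com/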


They look good to me...
Do you see anything missing from this?

Thank You
James

- Original Message -
From: "Amos Jeffries" 
To: squid-users@squid-cache.org
Sent: Monday, March 12, 2012 10:39:13 PM
Subject: Re: [squid-users] Help with a tcp_miss/200 issue

On 13.03.2012 03:13, James Ashton wrote:
> Any thoughts guys?
>
> This has me baffled.  I am digging through list archives, but nothing
> relevant so far.
> I figure it has to be a response header issue.  I just don't see it.
>

Could be. You will need to know the headers being sent into Squid 
"squid1.kelbymediagroup.com" from the origin server though. I suspect it 
may be missing a Date: header or something like that, making the original 
non-cacheable. Squid does a lot of fixing-up of details like that on its 
output, to ensure the output is more friendly to downstream clients.


> Using Squid 3.1.8

Or it could be some bug in that particular version. Tried the more 
current .19 release?


Config seems okay.

> #
> visible_hostname squid2.kelbymediagroup.com
> #
> refresh_pattern
> (phpmyadmin|process|register|login|contact|signup|admin|gateway|ajax|account|cart|checkout|members)
> 0 10% 0
> refresh_pattern (blog|feed) 300 20% 4320
> refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 40320 75% 86400
> refresh_pattern -i \.(iso|avi|wav|mp3|mpeg|swf|flv|x-flv)$ 1440 40% 
> 40320
> refresh_pattern -i \.mp4$   1440   90% 43200
> refresh_pattern -i \.(css|js)$ 300 40% 7200
> refresh_pattern -i \.(html|htm)$ 300 40% 7200
> refresh_pattern (/cgi-bin/|\?) 300 20% 4320
> refresh_pattern . 300 40% 40320
> #



Amos

>
>
> - Original Message -
> From: "James Ashton"
>
> Hello all,
>  I am trying to improve caching/acceleration on a series of wordpress 
> sites.
> Almost all objects are being cached at this point other than the page
> HTML itself.
> All I am getting there is TCP_MISS/200 log lines.
>
> The request is a GET for the URL  http://planetphotoshop.com
>
> At the moment my response header is:
>
> Cache-Control:    max-age=60
> Cneonction:       close
> Connection:       keep-alive
> Content-Encoding: gzip
> Content-Length:   15339
> Content-Type:     text/html; charset=UTF-8
> Date:             Fri, 09 Mar 2012 13:58:01 GMT
> Server:           Apache/2.2.15 (CentOS)
> Vary:             Accept-Encoding
> Via:              1.0 squid1.kelbymediagroup.com (squid)
> X-Cache:          MISS from squid1.kelbymediagroup.com
> X-Cache-Lookup:   MISS from squid1.kelbymediagroup.com:80
> X-Pingback:       http://planetphotoshop.com/xmlrpc.php
>
>
> I don't see anything preventing caching.
>
> Any thoughts or ideas?
>
> Thank you in advance for the help.
>
> James



Re: [squid-users] Cache_mem settings

2012-03-13 Thread Amos Jeffries

On 13.03.2012 23:44, Peter Gaughran wrote:

Hi folks,

We're in a large-number-of-users, high bandwidth/usage situation
(average 80 gigs per hour during business hours) and so have opted for
a couple of new proxies (one for fail-over) which we're about to make
'live'. Currently, our cache_mem and cache_dir look like the following:


cache_mem 23424 MB
cache_dir aufs /cache1 183000 32 256
cache_dir aufs /cache2 183000 32 256
cache_dir aufs /cache3 183000 32 256
cache_dir aufs /cache4 183000 32 256

With internal testing (from a team of 5!), of course, everything is
fine. I'm concerned about what will happen when we roll them out, or
if there's something (blindingly) obvious we might have missed or that
could be tweaked.


Don't forget the 10-15 MB of RAM index per GB of total cache space
(~11 GB of RAM index here).

Each of the live transactions making up those ~20MBps will also consume
~64KB or more of RAM.

All of that *plus* cache_mem needs to fit into the available box RAM.
By my rough calc that is over 35GB of RAM needed on each box.
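
Worked through with the numbers above (using the 15 MB/GB high-end
index estimate; all figures rough):

  cache_dir total:   4 x 183000 MB = 732000 MB   (~715 GB)
  index RAM:         ~715 GB x 15 MB/GB          = ~10.7 GB
  cache_mem:         23424 MB                    = ~22.9 GB
  in-flight buffers: ~64 KB per concurrent transaction, plus OS overhead
                                           total = ~34 GB and up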


Amos


[squid-users] Cache_mem settings

2012-03-13 Thread Peter Gaughran

Hi folks,

We're in a large-number-of-users, high bandwidth/usage situation 
(average 80 gigs per hour during business hours) and so have opted for a 
couple of new proxies (one for fail-over) which we're about to make 
'live'. Currently, our cache_mem and cache_dir look like the following


cache_mem 23424 MB
cache_dir aufs /cache1 183000 32 256
cache_dir aufs /cache2 183000 32 256
cache_dir aufs /cache3 183000 32 256
cache_dir aufs /cache4 183000 32 256

With internal testing (from a team of 5!), of course, everything is 
fine. I'm concerned about what will happen when we roll them out, or if 
there's something (blindingly) obvious we might have missed or that 
could be tweaked.


This is very new to us, so we'd appreciate your understanding/help :)


Re: [squid-users] maxconn bug ?

2012-03-13 Thread FredB

> Bit suspicious yes.
> 
> Tried apachebench (ab) with concurrency level 10? or anything like
> that
> which can guarantee multiple simultaneous connections for the test?
> 
> Amos

Yes, a little script that makes many recursive wgets, plus I browse with
Firefox; afterwards I watch access.log and see 20 connections per second.
Also, with just a single Firefox refreshing 30 tabs I am not denied, and
the same behaviour with acl all.

I tried maxconn (or something like that, I can't remember) a long time ago
with squid 2.6, and I had to increase the maxconn value to more than 5 for
comfortable browsing.



Re: [squid-users] About access.log hourly?

2012-03-13 Thread Amos Jeffries

On 13.03.2012 23:09, Ibrahim Lubis wrote:

How can I configure squid to create a new access.log file every 15
minutes, so in 1 hour I have 4 different log files...


What are you using to manage the Squid logs? cron? logrotate?
something else?


Amos


Re: [squid-users] Checking squid Configuration

2012-03-13 Thread Amos Jeffries

On 13.03.2012 22:25, Vishal Agarwal wrote:

Hi,

Please advise how I can check my squid configuration from the shell
prompt.




With the squidclient command line tool, using the "mgr:config" action and
whatever password you configured in squid.conf for management.

http://wiki.squid-cache.org/SquidClientTool
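
For example (a sketch; host, port and password are placeholders for
your own values):

  squidclient -h 127.0.0.1 -p 3128 mgr:config@yourpassword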

Amos


[squid-users] About access.log hourly?

2012-03-13 Thread Ibrahim Lubis
How can I configure squid to create a new access.log file every 15 minutes, so 
in 1 hour I have 4 different log files...

Re: [squid-users] maxconn bug ?

2012-03-13 Thread Amos Jeffries

On 13.03.2012 21:38, FredB wrote:


Duration and overlap of those connections matters. If they were all
serviced in less than 100ms and closed it is possible they all took
place one after another sequentially with no more than 1 open at a
time.

maxconn allows up to 3 *simultaneous* connections. Opening three then
closing one before opening a fourth is permitted. Only opening four at
once is not permitted.


I'm trying with maxconn 1 and 20 requests/sec without any denial; it's
strange, no?


Bit suspicious yes.

Tried apachebench (ab) with concurrency level 10? Or anything like that
which can guarantee multiple simultaneous connections for the test?
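
For example (a sketch; the proxy address and target URL are
placeholders):

  # 200 requests, 10 concurrent, sent through the proxy under test
  ab -n 200 -c 10 -X 127.0.0.1:3128 http://example.com/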


Amos


Re: [squid-users] squid 3.1 and HTTPS (and probably ipv6)

2012-03-13 Thread Amos Jeffries

On 13.03.2012 22:10, Eugene M. Zheganin wrote:

Hi.

I'm using squid 3.1.x on FreeBSD. Squid is built from ports.

Recently I was hit by a weird issue: my users cannot open HTTPS
pages. This is not something constant - if they hit the F5 button in
the browser, the pages load, sometimes showing a message like
'Unable to connect. Firefox can't establish a connection to the server
at access.ripe.net.' (for example; most of them are using FF). At the
same time plain HTTP pages are working fine.

I did some investigation and it appears that squid really thinks it
cannot connect to the HTTPS-enabled web server:




As you guessed this does seem to be a stack issue. Dual-stack systems
can be configured to operate as hybrid stacks or as split stacks (two
distinct socket handling paths). Recently there has been a trend away
from the simpler hybrid stacks towards split stacks.

Squid-3.1 was written for hybrid stacks with v4-mapping ability. When
run on a stack without mapping (split) it cannot reset the FD protocol
to switch stack types. Workaround/support for split stacks has been
added incrementally across the 3.1 series, with some of the deeper
changes only in 3.2.



===Cut===


2012/03/13 14:08:39.661| ipcache_nbgethostbyname: HIT for 'access.ripe.net'


Found the site IPs.

assuming: connect to the first one (IPv6).


2012/03/13 14:08:39.662| ipcacheMarkBadAddr: access.ripe.net
[2001:67c:2e8:22::c100:685]:443


 Didn't work. Mark it bad.


2012/03/13 14:08:39.662| ipcacheCycleAddr: access.ripe.net now at
193.0.6.133 (2 of 2)
2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14->16 : family=28
2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14->16 : family=28


Reset the socket FD to convert for the new IP (v4).

assuming: socket still IPv6 and trying to use for IPv4?
assuming: connect to this IP also failed.

2012/03/13 14:08:39.662| ipcache_nbgethostbyname: Name 'access.ripe.net'.
2012/03/13 14:08:39.662| ipcache_nbgethostbyname: HIT for 'access.ripe.net'
2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14->16 : family=28
2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14->16 : family=28


 Reset the socket FD (again).. Bit strange that this is still "14->16".


2012/03/13 14:08:39.662| ipcacheCycleAddr: access.ripe.net now at
193.0.6.133 (2 of 2)
2012/03/13 14:08:39.662| ipcache_nbgethostbyname: Name 'access.ripe.net'.
2012/03/13 14:08:39.662| ipcache_nbgethostbyname: HIT for 'access.ripe.net'

2012/03/13 14:08:39.662| ipcacheMarkAllGood: Changing ALL
access.ripe.net addrs to OK (1/2 bad)


Failed a second time. tries > number of IPs (huh? 3 or 2 tries?)

Instead of retrying yet again, cycle the IPs ...


2012/03/13 14:08:39.662| errorpage.cc(1051) BuildContent: No existing
error page language negotiated for ERR_CONNECT_FAIL. Using default
error file.


... and respond with error.


===Cut==

But why? I did some telnetting from this server to
access.ripe.net:443, and it succeeded 10 out of 10 times (squid's
error rate is far more frequent). The only thing that bothers me is
that telnet also tries ipv6 first, but then switches to ipv4,
and connects.

Now some suggestions (probably a shot in the dark). This only happens
on ipv6-enabled machines without actual ipv6 connectivity (no
ipv6 default route or no public ipv6 address; for example I have
unique-local addresses for testing purposes). At the same time
this issue can easily be solved by restoring ipv6 connectivity to
the outer world. So, can it be some dual-stack behaviour bug? Or is it
'by design'? Do I need to report it?



Squid opens an IPv6 socket by default, attempts the IPv6 destination
(route down, IPv6 socket). Fails. Then attempts to reset the socket
protocol family and contact the IPv4 destination (route fine, IPv6
socket [oops!]).
You can avoid this in 3.1 by enabling v4-mapping capability in your
kernel or using "tcp_outgoing_address 0.0.0.0" to force the sockets to
be IPv4-only from the start. The 3.2 series has better split-stack
support, so should have this behaviour problem fixed now.
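
In squid.conf the workaround is one line:

  # force outgoing sockets to be opened as IPv4 from the start
  tcp_outgoing_address 0.0.0.0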



Amos


[squid-users] Checking squid Configuration

2012-03-13 Thread Vishal Agarwal
Hi,

Please advise how I can check my squid configuration from the shell
prompt.

Thanks/regards,
Vishal Agarwal




[squid-users] squid 3.1 and HTTPS (and probably ipv6)

2012-03-13 Thread Eugene M. Zheganin

Hi.

I'm using squid 3.1.x on FreeBSD. Squid is built from ports.

Recently I was hit by a weird issue: my users cannot open HTTPS pages. 
This is not something constant - if they hit the F5 button in the browser, 
the pages load, sometimes showing a message like 'Unable to 
connect. Firefox can't establish a connection to the server at 
access.ripe.net.' (for example; most of them are using FF). At the same 
time plain HTTP pages are working fine.


I did some investigation and it appears that squid really thinks it 
cannot connect to the HTTPS-enabled web server:


===Cut===
2012/03/13 14:08:39.661| ACL::ChecklistMatches: result for 'all' is 1
2012/03/13 14:08:39.661| ACLList::matches: result is true
2012/03/13 14:08:39.661| aclmatchAclList: 0x285e4810 returning true (AND 
list satisfied)
2012/03/13 14:08:39.661| ACLChecklist::markFinished: 0x285e4810 
checklist processing finished
2012/03/13 14:08:39.661| ACLChecklist::check: 0x285e4810 match found, 
calling back with 1

2012/03/13 14:08:39.661| ACLChecklist::checkCallback: 0x285e4810 answer=1
2012/03/13 14:08:39.661| peerCheckAlwaysDirectDone: 1
2012/03/13 14:08:39.661| peerSelectFoo: 'CONNECT access.ripe.net'
2012/03/13 14:08:39.661| peerSelectFoo: direct = DIRECT_YES
2012/03/13 14:08:39.661| The AsyncCall SomeCommConnectHandler 
constructed, this=0x286e6740 [call1916]
2012/03/13 14:08:39.661| commConnectStart: FD 14, cb 0x286e6740*1, 
access.ripe.net:443
2012/03/13 14:08:39.661| The AsyncCall SomeCloseHandler constructed, 
this=0x2956c2c0 [call1917]

2012/03/13 14:08:39.661| ipcache_nbgethostbyname: Name 'access.ripe.net'.
2012/03/13 14:08:39.661| ipcache_nbgethostbyname: HIT for 'access.ripe.net'
2012/03/13 14:08:39.662| ipcacheMarkBadAddr: access.ripe.net 
[2001:67c:2e8:22::c100:685]:443
2012/03/13 14:08:39.662| ipcacheCycleAddr: access.ripe.net now at 
193.0.6.133 (2 of 2)

2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14->16 : family=28
2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14->16 : family=28
2012/03/13 14:08:39.662| FilledChecklist.cc(168) ~ACLFilledChecklist: 
ACLFilledChecklist destroyed 0x285e4810

2012/03/13 14:08:39.662| ACLChecklist::~ACLChecklist: destroyed 0x285e4810
2012/03/13 14:08:39.662| FilledChecklist.cc(168) ~ACLFilledChecklist: 
ACLFilledChecklist destroyed 0x285e4910

2012/03/13 14:08:39.662| ACLChecklist::~ACLChecklist: destroyed 0x285e4910
2012/03/13 14:08:39.662| The AsyncCall SomeCommReadHandler constructed, 
this=0x28ce9100 [call1918]
2012/03/13 14:08:39.662| leaving SomeCommReadHandler(FD 150, 
data=0x286b6710, size=4, buf=0x28d1e000)

2012/03/13 14:08:39.662| ipcache_nbgethostbyname: Name 'access.ripe.net'.
2012/03/13 14:08:39.662| ipcache_nbgethostbyname: HIT for 'access.ripe.net'
2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14->16 : family=28
2012/03/13 14:08:39.662| commResetFD: Reset socket FD 14->16 : family=28
2012/03/13 14:08:39.662| ipcacheCycleAddr: access.ripe.net now at 
193.0.6.133 (2 of 2)

2012/03/13 14:08:39.662| ipcache_nbgethostbyname: Name 'access.ripe.net'.
2012/03/13 14:08:39.662| ipcache_nbgethostbyname: HIT for 'access.ripe.net'
2012/03/13 14:08:39.662| ipcacheMarkAllGood: Changing ALL 
access.ripe.net addrs to OK (1/2 bad)

2012/03/13 14:08:39.662| commConnectCallback: FD 14
2012/03/13 14:08:39.662| comm.cc(1195) commSetTimeout: FD 14 timeout -1
2012/03/13 14:08:39.662| comm.cc(1206) commSetTimeout: FD 14 timeout -1
2012/03/13 14:08:39.662| comm.cc(934) will call 
SomeCommConnectHandler(FD 14, errno=22, flag=-8, data=0x28f6bdd0, ) 
[call1916]

2012/03/13 14:08:39.662| commConnectFree: FD 14
2012/03/13 14:08:39.662| entering SomeCommConnectHandler(FD 14, 
errno=22, flag=-8, data=0x28f6bdd0, )
2012/03/13 14:08:39.662| AsyncCall.cc(32) make: make call 
SomeCommConnectHandler [call1916]

2012/03/13 14:08:39.662| errorSend: FD 12, err=0x28f995d0
2012/03/13 14:08:39.662| errorpage.cc(1051) BuildContent: No existing 
error page language negotiated for ERR_CONNECT_FAIL. Using default error 
file.

===Cut==

But why? I did some telnetting from this server to 
access.ripe.net:443, and it succeeded 10 out of 10 times (squid's error 
rate is far more frequent). The only thing that bothers me is that 
telnet also tries ipv6 first, but then switches to ipv4, and 
connects.


Now some suggestions (probably a shot in the dark). This only happens on 
ipv6-enabled machines without actual ipv6 connectivity (no ipv6 
default route or no public ipv6 address; for example I have unique-local 
addresses for testing purposes). At the same time this issue can be 
easily solved by restoring ipv6 connectivity to the outer world. So, 
can it be some dual-stack behaviour bug? Or is it 'by design'? Do I 
need to report it?


Thanks.
Eugene.


Re: [squid-users] Re: squid 3.1 - endless loop IIS webserver

2012-03-13 Thread Amos Jeffries

On 13.03.2012 21:21, kadvar wrote:

Hi,

Thanks for that. I tried your recommendations and now I get this.

2012/03/13 12:11:25| clientNegotiateSSL: Error negotiating SSL connection on
FD 18: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca
(1/0)
2012/03/13 12:11:25| clientNegotiateSSL: Error negotiating SSL connection on
FD 20: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca
(1/0)
2012/03/13 12:11:27| clientNegotiateSSL: Error negotiating SSL connection on
FD 18: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca
(1/0)

Adding ssl sslflags=NO_VERIFY_PEER to the cache_peer line gives:

2012/03/13 12:35:23| TCP connection to 192.168.124.169/443 failed
2012/03/13 12:35:23| SSL unknown certificate error 20 in /CN=savitr69
2012/03/13 12:35:23| SSL unknown certificate error 20 in /CN=savitr69
2012/03/13 12:35:23| fwdNegotiateSSL: Error negotiating SSL connection on FD
20: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate
verify failed (1/-1/0)

I'm a bit confused about the actual interaction between squid and the
backend IIS server. The programmer has generated an ssl cert on the iis
server, and I have generated an ssl cert on squid. When I type
https://www.example.com/Login.aspx, it asks me to add an exception for
the squid box, but the request then hangs at the iis server.


There are two very separate TCP links, each having separate TLS done
on it. The traffic is fully decrypted on arrival into Squid and
re-encrypted for transfer to IIS.

Notice how there are two logged message prefixes, "clientNegotiateSSL:"
and "fwdNegotiateSSL:".

 "clientNegotiateSSL:" is the browser->Squid link having problems.
 "fwdNegotiateSSL:" is the Squid->IIS link having problems.

fwdNegotiateSSL is likely having problems because the flag you set was
NO_VERIFY_PEER instead of sslflags=DONT_VERIFY_PEER. So it tried to
verify and failed.
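
On the cache_peer line that would look something like this (a sketch;
the address and port are taken from the log above, the other options
are assumptions about your existing line):

  cache_peer 192.168.124.169 parent 443 0 no-query originserver ssl sslflags=DONT_VERIFY_PEER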





As per my understanding of the current setup, when a user visits
www.example.com/Login.aspx he is presented with an "Add exception"
window for the squid box; after accepting he should again be presented
with an "add exception" window from the iis server, but this never
happens.


No, that should not happen. The transport security between Squid and
IIS is *only* on the transport between them. The browser has nothing
to do with that.



Amos


Re: [squid-users] problem with installing squid 3.1.11 manually

2012-03-13 Thread Amos Jeffries

On 13.03.2012 20:24, Mustafa Raji wrote:

hi
I have a problem with installing squid manually; it appears when I
want to support transparent mode in squid, using this configure
option:

./configure --prefix=/usr/local/squid3


This: "--enable-linux-netlter --enable-linux-tproxy"

Should be only:  "--enable-linux-netfilter"


... and make sure you have the rest of the build requirements (version 
numbers are *minimum* version):
 
http://wiki.squid-cache.org/Features/Tproxy4#Minimum_Requirements_.28IPv6_and_IPv4.29
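
So the corrected configure line would be something like (a sketch based
on the line quoted above, with the typo fixed and the tproxy flag
dropped):

  ./configure --prefix=/usr/local/squid3 --enable-ssl \
    --enable-storeio=ufs,aufs --enable-removal-policies=lru,heap \
    --enable-linux-netfilter --enable-zph-qos --with-openssl \
    --with-filedescriptors=8000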



Amos


Re: [squid-users] maxconn bug ?

2012-03-13 Thread FredB

> 
> Duration and overlap of those connections matters. If they were all
> serviced in less than 100ms and closed it is possible they all took
> place one after another sequentially with no more than 1 open at a
> time.
> 
> maxconn allows up to 3 *simultaneous* connections. Opening three then
> closing one before opening a fourth is permitted. Only opening four
> at
> once is not permitted.
 
I'm trying with maxconn 1 and 20 requests/sec without any denial; it's strange, no?

> >
> > I have another question about deny pages: when I block by
> > maxconn/port/acldst/etc my users get the same DENY page without
> > distinction. How can I customize the result (one page for dstdomain,
> > one page for maxconn, one page for ldap ident, etc)?
> 
> You use deny_info to attach a custom output to the last ACL on the
> line. This output gets presented every time that ACL is last on a
> deny line.
> http://www.squid-cache.org/Doc/config/deny_info/

Thanks


[squid-users] Re: squid 3.1 - endless loop IIS webserver

2012-03-13 Thread kadvar
Hi,

Thanks for that. I tried your recommendations and now I get this.

2012/03/13 12:11:25| clientNegotiateSSL: Error negotiating SSL connection on
FD 18: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca
(1/0)
2012/03/13 12:11:25| clientNegotiateSSL: Error negotiating SSL connection on
FD 20: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca
(1/0)
2012/03/13 12:11:27| clientNegotiateSSL: Error negotiating SSL connection on
FD 18: error:14094418:SSL routines:SSL3_READ_BYTES:tlsv1 alert unknown ca
(1/0)

Adding ssl sslflags=NO_VERIFY_PEER to the cache_peer line gives:

2012/03/13 12:35:23| TCP connection to 192.168.124.169/443 failed
2012/03/13 12:35:23| SSL unknown certificate error 20 in /CN=savitr69
2012/03/13 12:35:23| SSL unknown certificate error 20 in /CN=savitr69
2012/03/13 12:35:23| fwdNegotiateSSL: Error negotiating SSL connection on FD
20: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate
verify failed (1/-1/0)

I'm a bit confused about the actual interaction between squid and the
backend IIS server. The programmer has generated an ssl cert on the iis
server, and I have generated an ssl cert on squid. When I type
https://www.example.com/Login.aspx, it asks me to add an exception for the
squid box, but the request then hangs at the iis server.

As per my understanding of the current setup, when a user visits
www.example.com/Login.aspx he is presented with an "Add exception" window for
the squid box; after accepting he should again be presented with an "add
exception" window from the iis server, but this never happens. Is there a
better way to do this?

Thanks,
Adi

--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-3-1-endless-loop-IIS-webserver-tp4465329p4468478.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] problem with installing squid 3.1.11 manually

2012-03-13 Thread Mustafa Raji
hi
I have a problem with installing squid manually; it appears when I want to 
support transparent mode in squid, using this configure option:

./configure --prefix=/usr/local/squid3 --enable-ssl --enable-storeio=ufs,aufs 
--enable-removal-policies=lru,heap --enable-linux-netlter --enable-linux-tproxy 
--enable-zph-qos --with-openssl --with-filedescriptors=8000

the config.log file shows these errors and warnings:
checking if Linux (Netfilter) kernel header files are installed... no
configure: WARNING: Cannot find necessary Linux kernel (Netfilter) header files
configure: WARNING: Linux Transparent and Intercepting Proxy support WILL NOT 
be enabled

I downloaded the ip-filter package from the net. When I went to install
the package and opened the INSTALL.Linux file to learn how to install it
on Linux, the file contained this text:
linux is no more supported

when I configure squid in transparent mode and a client requests through
the cache server, the cache.log file shows transparent mode not
supported

what is the solution for this problem?
thanks and best regards