Re: [squid-users] WARNING! Your cache is running out of filedescriptors -------Version 3.0.STABLE13

2009-02-23 Thread Visolve Squid Team
Probably you can change the ulimit value and then try the
--with-filedescriptors option. It may work.

Change the ulimit value:  root# ulimit -HSn 32768
or try
client_persistent_connections off
server_persistent_connections off
in the squid.conf configuration.
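
As a sketch, for a source build (32768 is only a sample value, size it to
your expected load):

  # at build time, raise the compiled-in filedescriptor limit
  ./configure --with-filedescriptors=32768
  make && make install

  # in the shell that starts squid, raise the runtime limit to match
  ulimit -HSn 32768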

Regards,
ViSolve Squid Team.

Shekhar Gupta wrote:

Any thoughts on this ..


On Mon, Feb 23, 2009 at 4:11 PM, Shekhar Gupta  wrote:
  

I think this is some bug, as the same machine with the 2.6 squid version
was not having any of these messages. I still have 3 machines on the
older squid version; I upgraded 2 machines to the 3.0.13 version and I
am finding this problem.

On Mon, Feb 23, 2009 at 3:53 PM, Amos Jeffries  wrote:


Shekhar Gupta wrote:
  

Amos,

I only configured it with delay pools, so you are saying that I have
to recompile squid with that option. Do I have to do anything else
apart from it, like something in the OS?


I would hope nothing in the OS is needed. But I don't know RHEL very well.
The option is equivalent to --with-maxfd from 2.6, with the same usage and
related settings.

Amos

  

On Mon, Feb 23, 2009 at 3:12 PM, Amos Jeffries 
wrote:


Shekhar Gupta wrote:
  

Guys, I tried fixing this, however most of the directives are not
working with this version. Can anyone throw some light on how to
make this fix in Version 3.0.STABLE13 running on RHEL 5.3?


Check you are using the configure option: --with-filedescriptors=N
3.0 uses a different option name than 2.6 did.

Amos
--
Please be using
 Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
 Current Beta Squid 3.1.0.5

  

--
Please be using
 Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
 Current Beta Squid 3.1.0.5

  



  


Re: [squid-users] WARNING! Your cache is running out of filedescriptors -------Version 3.0.STABLE13

2009-02-23 Thread Shekhar Gupta
Any thoughts on this ..


On Mon, Feb 23, 2009 at 4:11 PM, Shekhar Gupta  wrote:
> I think this is some bug, as the same machine with the 2.6 squid version
> was not having any of these messages. I still have 3 machines on the
> older squid version; I upgraded 2 machines to the 3.0.13 version and I
> am finding this problem.
>
> On Mon, Feb 23, 2009 at 3:53 PM, Amos Jeffries  wrote:
>> Shekhar Gupta wrote:
>>>
>>> Amos,
>>>
>>> I only configured it with delay pools, so you are saying that I have
>>> to recompile squid with that option. Do I have to do anything else
>>> apart from it, like something in the OS?
>>
>> I would hope nothing in the OS is needed. But I don't know RHEL very well.
>> The option is equivalent to --with-maxfd from 2.6, with the same usage and
>> related settings.
>>
>> Amos
>>
>>>
>>> On Mon, Feb 23, 2009 at 3:12 PM, Amos Jeffries 
>>> wrote:

 Shekhar Gupta wrote:
>
> Guys, I tried fixing this, however most of the directives are not
> working with this version. Can anyone throw some light on how to
> make this fix in Version 3.0.STABLE13 running on RHEL 5.3?

 Check you are using the configure option: --with-filedescriptors=N
 3.0 uses a different option name than 2.6 did.

 Amos
 --
 Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.5

>>
>>
>> --
>> Please be using
>>  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
>>  Current Beta Squid 3.1.0.5
>>
>


RE: [squid-users] New Setup help

2009-02-23 Thread Jim Lawrence
Why yes it was,
thank you!



-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, February 23, 2009 9:47 PM
To: Jim Lawrence
Cc: Amos Jeffries; squid-users@squid-cache.org
Subject: RE: [squid-users] New Setup help

> cat /etc/squid/allowed_sites.squid
> *.americas-pet-store.com
> *.petfrenzy.com
> *.google.com
> [r...@virt1 ~]#


There is the problem. The '*' is not a proper part of domain names.
Just begin the partial domains with a '.'

Amos

>
> I did a service squid restart
> And for good measure  service squid reload
>
> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]
> Sent: Monday, February 23, 2009 8:45 PM
> To: Jim Lawrence
> Cc: Amos Jeffries; squid-users@squid-cache.org
> Subject: RE: [squid-users] New Setup help
>
>> Current config
>>
>> http_port 192.168.31.3:3128
>> hierarchy_stoplist cgi-bin ?
>> acl QUERY urlpath_regex cgi-bin \?
>> cache deny QUERY
>> acl apache rep_header Server ^Apache
>> broken_vary_encoding allow apache
>>  cache_dir ufs /var/spool/squid 1000 16 256
>> access_log /var/log/squid/access.log squid
>> dns_nameservers 192.168.31.11
>> refresh_pattern ^ftp:           1440    20%     10080
>> refresh_pattern ^gopher:        1440    0%      1440
>> refresh_pattern .               0       20%     4320
>> acl all src 0.0.0.0/0.0.0.0
>> acl manager proto cache_object
>> acl localhost src 127.0.0.1/255.255.255.255
>> acl to_localhost dst 127.0.0.0/8
>> acl SSL_ports port 443
>> acl CONNECT method CONNECT
>> acl good_url dstdomain "/etc/squid/allowed_sites.squid"
>> acl pnc_network src 192.168.31.0/255.255.255.0
>> http_access allow manager localhost
>> http_access deny manager
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>> http_access allow good_url
>> http_access deny all
>> visible_hostname VIRT1
>> coredump_dir /var/spool/squid
>>
>>
>> [r...@virt1 ~]# tail -12 /var/log/squid/access.log
>> 1235431489.584  1 192.168.31.12 TCP_DENIED/403 1420 GET
>> http://mail.google.com/mail/channel/test? - NONE/- text/html
>> 1235431489.599  0 192.168.31.12 TCP_DENIED/403 1434 GET
>> http://mail.google.com/mail/images/cleardot.gif? - NONE/- text/html
>> 1235431513.168  0 192.168.31.12 TCP_DENIED/403 1382 GET
>> http://www.google.com/ - NONE/- text/html
>> 1235431526.782  0 192.168.31.12 TCP_DENIED/403 1406 GET
>> http://www.americas-pet.store.com/ - NONE/- text/html
>> 1235431547.499  0 192.168.31.12 TCP_DENIED/403 1450 GET
>> http://wiki.squid-cache.org/KnowledgeBase/DebugSections? - NONE/-
>> text/html
>> 1235431851.235  0 192.168.31.12 TCP_DENIED/403 1406 GET
>> http://www.americas-pet-store.com/ - NONE/- text/html
>> 1235431851.577  0 192.168.31.12 TCP_DENIED/403 1428 GET
>> http://www.americas-pet-store.com/favicon.ico - NONE/- text/html
>> 1235432020.747  2 192.168.31.12 TCP_DENIED/403 1406 GET
>> http://www.americas-pet-store.com/ - NONE/- text/html
>> 1235432022.176  2 192.168.31.12 TCP_DENIED/403 1406 GET
>> http://www.americas-pet-store.com/ - NONE/- text/html
>> 1235432030.656  4 192.168.31.12 TCP_DENIED/403 1450 GET
>> http://wiki.squid-cache.org/KnowledgeBase/DebugSections? - NONE/-
>> text/html
>> 1235432036.294  2 192.168.31.12 TCP_DENIED/403 1382 GET
>> http://www.google.com/ - NONE/- text/html
>> 1235432087.084  2 192.168.31.12 TCP_DENIED/403 1382 GET
>> http://www.google.com/ - NONE/- text/html
>> [r...@virt1 ~]#
>
>
> Assuming you remembered to -k reconfigure squid.
> That leaves the question:
>  are any of these actually listed in your allowed_sites.squid file?
>
> mail.google.com
> www.google.com
> .google.com
> www.americas-pet-store.com
> .americas-pet-store.com
> .com
> wiki.squid-cache.org
> .squid-cache.org
> .org
>
>
> Amos
>
>> -Original Message-
>> From: Amos Jeffries [mailto:squ...@treenet.co.nz]
>> Sent: Monday, February 23, 2009 5:53 PM
>> To: Jim Lawrence
>> Cc: squid-users@squid-cache.org
>> Subject: Re: [squid-users] New Setup help
>>
>>> Cisco1720 router --> 4 windows based servers 1 centos virtual server
> 1
>>> centos squid server.
>>> Client computers (8)
>>>
>>> Would like to have all web traffic blocked except websites defined
in
>> a
>>> allowed_sites.squid config file.
>>> My squid.conf file
>>>
>>> Should my squid server have 2 network cards or can I leave it with
> the
>> one
>>> ?
>>
>> One or two, it does not matter to the problem you currently have.
>>
>>>
>>> +++
>>> [r...@virt1 ~]# cat /etc/squid/squid.conf | sed '/ *#/d; /^ *$/d'
>>> http_port 192.168.31.3:3128
>>> hierarchy_stoplist cgi-bin ?
>>> acl QUERY urlpath_regex cgi-bin \?
>>> cache deny QUERY
>>> acl apache rep_header Server ^Apache
>>> broken_vary_encoding allow apache
>>>  cache_dir ufs /var/spool/squid 1000 16 256
>>> access_log /var/log/squid/access.log squid
>>> dns_nameservers 192.168.31.11
>>> refresh_pattern ^ftp:           1440    20%     10080
>>> refresh_pattern ^gopher:        1440    0%      1440
>>> refresh_pattern .               0       20%     4320

RE: [squid-users] New Setup help

2009-02-23 Thread Amos Jeffries
> cat /etc/squid/allowed_sites.squid
> *.americas-pet-store.com
> *.petfrenzy.com
> *.google.com
> [r...@virt1 ~]#


There is the problem. The '*' is not a proper part of domain names.
Just begin the partial domains with a '.'
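
So the file would read, as a sketch (run "squid -k reconfigure" afterwards
to reload it):

  .americas-pet-store.com
  .petfrenzy.com
  .google.com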

Amos

>
> I did a service squid restart
> And for good measure  service squid reload
>
> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]
> Sent: Monday, February 23, 2009 8:45 PM
> To: Jim Lawrence
> Cc: Amos Jeffries; squid-users@squid-cache.org
> Subject: RE: [squid-users] New Setup help
>
>> Current config
>>
>> http_port 192.168.31.3:3128
>> hierarchy_stoplist cgi-bin ?
>> acl QUERY urlpath_regex cgi-bin \?
>> cache deny QUERY
>> acl apache rep_header Server ^Apache
>> broken_vary_encoding allow apache
>>  cache_dir ufs /var/spool/squid 1000 16 256
>> access_log /var/log/squid/access.log squid
>> dns_nameservers 192.168.31.11
>> refresh_pattern ^ftp:           1440    20%     10080
>> refresh_pattern ^gopher:        1440    0%      1440
>> refresh_pattern .               0       20%     4320
>> acl all src 0.0.0.0/0.0.0.0
>> acl manager proto cache_object
>> acl localhost src 127.0.0.1/255.255.255.255
>> acl to_localhost dst 127.0.0.0/8
>> acl SSL_ports port 443
>> acl CONNECT method CONNECT
>> acl good_url dstdomain "/etc/squid/allowed_sites.squid"
>> acl pnc_network src 192.168.31.0/255.255.255.0
>> http_access allow manager localhost
>> http_access deny manager
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>> http_access allow good_url
>> http_access deny all
>> visible_hostname VIRT1
>> coredump_dir /var/spool/squid
>>
>>
>> [r...@virt1 ~]# tail -12 /var/log/squid/access.log
>> 1235431489.584  1 192.168.31.12 TCP_DENIED/403 1420 GET
>> http://mail.google.com/mail/channel/test? - NONE/- text/html
>> 1235431489.599  0 192.168.31.12 TCP_DENIED/403 1434 GET
>> http://mail.google.com/mail/images/cleardot.gif? - NONE/- text/html
>> 1235431513.168  0 192.168.31.12 TCP_DENIED/403 1382 GET
>> http://www.google.com/ - NONE/- text/html
>> 1235431526.782  0 192.168.31.12 TCP_DENIED/403 1406 GET
>> http://www.americas-pet.store.com/ - NONE/- text/html
>> 1235431547.499  0 192.168.31.12 TCP_DENIED/403 1450 GET
>> http://wiki.squid-cache.org/KnowledgeBase/DebugSections? - NONE/-
>> text/html
>> 1235431851.235  0 192.168.31.12 TCP_DENIED/403 1406 GET
>> http://www.americas-pet-store.com/ - NONE/- text/html
>> 1235431851.577  0 192.168.31.12 TCP_DENIED/403 1428 GET
>> http://www.americas-pet-store.com/favicon.ico - NONE/- text/html
>> 1235432020.747  2 192.168.31.12 TCP_DENIED/403 1406 GET
>> http://www.americas-pet-store.com/ - NONE/- text/html
>> 1235432022.176  2 192.168.31.12 TCP_DENIED/403 1406 GET
>> http://www.americas-pet-store.com/ - NONE/- text/html
>> 1235432030.656  4 192.168.31.12 TCP_DENIED/403 1450 GET
>> http://wiki.squid-cache.org/KnowledgeBase/DebugSections? - NONE/-
>> text/html
>> 1235432036.294  2 192.168.31.12 TCP_DENIED/403 1382 GET
>> http://www.google.com/ - NONE/- text/html
>> 1235432087.084  2 192.168.31.12 TCP_DENIED/403 1382 GET
>> http://www.google.com/ - NONE/- text/html
>> [r...@virt1 ~]#
>
>
> Assuming you remembered to -k reconfigure squid.
> That leaves the question:
>  are any of these actually listed in your allowed_sites.squid file?
>
> mail.google.com
> www.google.com
> .google.com
> www.americas-pet-store.com
> .americas-pet-store.com
> .com
> wiki.squid-cache.org
> .squid-cache.org
> .org
>
>
> Amos
>
>> -Original Message-
>> From: Amos Jeffries [mailto:squ...@treenet.co.nz]
>> Sent: Monday, February 23, 2009 5:53 PM
>> To: Jim Lawrence
>> Cc: squid-users@squid-cache.org
>> Subject: Re: [squid-users] New Setup help
>>
>>> Cisco1720 router --> 4 windows based servers 1 centos virtual server
> 1
>>> centos squid server.
>>> Client computers (8)
>>>
>>> Would like to have all web traffic blocked except websites defined in
>> a
>>> allowed_sites.squid config file.
>>> My squid.conf file
>>>
>>> Should my squid server have 2 network cards or can I leave it with
> the
>> one
>>> ?
>>
>> One or two, it does not matter to the problem you currently have.
>>
>>>
>>> +++
>>> [r...@virt1 ~]# cat /etc/squid/squid.conf | sed '/ *#/d; /^ *$/d'
>>> http_port 192.168.31.3:3128
>>> hierarchy_stoplist cgi-bin ?
>>> acl QUERY urlpath_regex cgi-bin \?
>>> cache deny QUERY
>>> acl apache rep_header Server ^Apache
>>> broken_vary_encoding allow apache
>>>  cache_dir ufs /var/spool/squid 1000 16 256
>>> access_log /var/log/squid/access.log squid
>>> dns_nameservers 192.168.31.11
>>> refresh_pattern ^ftp:           1440    20%     10080
>>> refresh_pattern ^gopher:        1440    0%      1440
>>> refresh_pattern .               0       20%     4320
>>> acl all src 0.0.0.0/0.0.0.0
>>> acl manager proto cache_object
>>> acl localhost src 127.0.0.1/255.255.255.255
>>> acl to_localhost dst 127.0.0.0/8
>>> acl SSL_ports port 443
>>> acl CONNECT method CONNECT
>>> acl

Re: [squid-users] HTML loggin

2009-02-23 Thread Amos Jeffries
> Hi Squids,
>
> I wonder if it is possible to do this in Squid.  We need to log the
> HTTP/GET responses of what users are surfing.
>
> I mean to have one file per http-session (not per IP, because of a
> NAT-ed firewall) and then inside that file I could see what this user
> is doing.  How could you reach that configuration?
>

Yes, it's possible, provided your clients use proxy authentication, or
they can be identified out-of-band with an external ACL which provides
the username info.

Otherwise the best you can hope for is IP-based analysis.
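
Squid writes a single access.log, but as a minimal sketch you can at least
get the username onto every line (assuming logformat support in 2.6+; the
format name "peruser" is made up):

  logformat peruser %ts.%03tu %un %rm %ru
  access_log /var/log/squid/peruser.log peruser

That log can then be split into one file per user with ordinary text tools.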

Amos



RE: [squid-users] New Setup help

2009-02-23 Thread Jim Lawrence
cat /etc/squid/allowed_sites.squid
*.americas-pet-store.com
*.petfrenzy.com
*.google.com
[r...@virt1 ~]#

I did a service squid restart 
And for good measure  service squid reload

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, February 23, 2009 8:45 PM
To: Jim Lawrence
Cc: Amos Jeffries; squid-users@squid-cache.org
Subject: RE: [squid-users] New Setup help

> Current config
>
> http_port 192.168.31.3:3128
> hierarchy_stoplist cgi-bin ?
> acl QUERY urlpath_regex cgi-bin \?
> cache deny QUERY
> acl apache rep_header Server ^Apache
> broken_vary_encoding allow apache
>  cache_dir ufs /var/spool/squid 1000 16 256
> access_log /var/log/squid/access.log squid
> dns_nameservers 192.168.31.11
> refresh_pattern ^ftp:           1440    20%     10080
> refresh_pattern ^gopher:        1440    0%      1440
> refresh_pattern .               0       20%     4320
> acl all src 0.0.0.0/0.0.0.0
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443
> acl CONNECT method CONNECT
> acl good_url dstdomain "/etc/squid/allowed_sites.squid"
> acl pnc_network src 192.168.31.0/255.255.255.0
> http_access allow manager localhost
> http_access deny manager
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow good_url
> http_access deny all
> visible_hostname VIRT1
> coredump_dir /var/spool/squid
>
>
> [r...@virt1 ~]# tail -12 /var/log/squid/access.log
> 1235431489.584  1 192.168.31.12 TCP_DENIED/403 1420 GET
> http://mail.google.com/mail/channel/test? - NONE/- text/html
> 1235431489.599  0 192.168.31.12 TCP_DENIED/403 1434 GET
> http://mail.google.com/mail/images/cleardot.gif? - NONE/- text/html
> 1235431513.168  0 192.168.31.12 TCP_DENIED/403 1382 GET
> http://www.google.com/ - NONE/- text/html
> 1235431526.782  0 192.168.31.12 TCP_DENIED/403 1406 GET
> http://www.americas-pet.store.com/ - NONE/- text/html
> 1235431547.499  0 192.168.31.12 TCP_DENIED/403 1450 GET
> http://wiki.squid-cache.org/KnowledgeBase/DebugSections? - NONE/-
> text/html
> 1235431851.235  0 192.168.31.12 TCP_DENIED/403 1406 GET
> http://www.americas-pet-store.com/ - NONE/- text/html
> 1235431851.577  0 192.168.31.12 TCP_DENIED/403 1428 GET
> http://www.americas-pet-store.com/favicon.ico - NONE/- text/html
> 1235432020.747  2 192.168.31.12 TCP_DENIED/403 1406 GET
> http://www.americas-pet-store.com/ - NONE/- text/html
> 1235432022.176  2 192.168.31.12 TCP_DENIED/403 1406 GET
> http://www.americas-pet-store.com/ - NONE/- text/html
> 1235432030.656  4 192.168.31.12 TCP_DENIED/403 1450 GET
> http://wiki.squid-cache.org/KnowledgeBase/DebugSections? - NONE/-
> text/html
> 1235432036.294  2 192.168.31.12 TCP_DENIED/403 1382 GET
> http://www.google.com/ - NONE/- text/html
> 1235432087.084  2 192.168.31.12 TCP_DENIED/403 1382 GET
> http://www.google.com/ - NONE/- text/html
> [r...@virt1 ~]#


Assuming you remembered to -k reconfigure squid.
That leaves the question:
 are any of these actually listed in your allowed_sites.squid file?

mail.google.com
www.google.com
.google.com
www.americas-pet-store.com
.americas-pet-store.com
.com
wiki.squid-cache.org
.squid-cache.org
.org


Amos

> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]
> Sent: Monday, February 23, 2009 5:53 PM
> To: Jim Lawrence
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] New Setup help
>
>> Cisco1720 router --> 4 windows based servers 1 centos virtual server
1
>> centos squid server.
>> Client computers (8)
>>
>> Would like to have all web traffic blocked except websites defined in
> a
>> allowed_sites.squid config file.
>> My squid.conf file
>>
>> Should my squid server have 2 network cards or can I leave it with
the
> one
>> ?
>
> One or two, it does not matter to the problem you currently have.
>
>>
>> +++
>> [r...@virt1 ~]# cat /etc/squid/squid.conf | sed '/ *#/d; /^ *$/d'
>> http_port 192.168.31.3:3128
>> hierarchy_stoplist cgi-bin ?
>> acl QUERY urlpath_regex cgi-bin \?
>> cache deny QUERY
>> acl apache rep_header Server ^Apache
>> broken_vary_encoding allow apache
>>  cache_dir ufs /var/spool/squid 1000 16 256
>> access_log /var/log/squid/access.log squid
>> dns_nameservers 192.168.31.11
>> refresh_pattern ^ftp:           1440    20%     10080
>> refresh_pattern ^gopher:        1440    0%      1440
>> refresh_pattern .               0       20%     4320
>> acl all src 0.0.0.0/0.0.0.0
>> acl manager proto cache_object
>> acl localhost src 127.0.0.1/255.255.255.255
>> acl to_localhost dst 127.0.0.0/8
>> acl SSL_ports port 443
>> acl CONNECT method CONNECT
>> acl good_url dstdomain "/etc/squid/allowed_sites.squid"
>> acl pnc_network src 192.168.31.0/255.255.255.0
>> http_access allow manager localhost
>> http_access deny manager
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>
>> http_access allow good_url
>
>  * permits anyone who can contact your squid to connect to any of the
> listed sites. Probably don't want that ...

RE: [squid-users] New Setup help

2009-02-23 Thread Amos Jeffries
> Current config
>
> http_port 192.168.31.3:3128
> hierarchy_stoplist cgi-bin ?
> acl QUERY urlpath_regex cgi-bin \?
> cache deny QUERY
> acl apache rep_header Server ^Apache
> broken_vary_encoding allow apache
>  cache_dir ufs /var/spool/squid 1000 16 256
> access_log /var/log/squid/access.log squid
> dns_nameservers 192.168.31.11
> refresh_pattern ^ftp:           1440    20%     10080
> refresh_pattern ^gopher:        1440    0%      1440
> refresh_pattern .               0       20%     4320
> acl all src 0.0.0.0/0.0.0.0
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443
> acl CONNECT method CONNECT
> acl good_url dstdomain "/etc/squid/allowed_sites.squid"
> acl pnc_network src 192.168.31.0/255.255.255.0
> http_access allow manager localhost
> http_access deny manager
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow good_url
> http_access deny all
> visible_hostname VIRT1
> coredump_dir /var/spool/squid
>
>
> [r...@virt1 ~]# tail -12 /var/log/squid/access.log
> 1235431489.584  1 192.168.31.12 TCP_DENIED/403 1420 GET
> http://mail.google.com/mail/channel/test? - NONE/- text/html
> 1235431489.599  0 192.168.31.12 TCP_DENIED/403 1434 GET
> http://mail.google.com/mail/images/cleardot.gif? - NONE/- text/html
> 1235431513.168  0 192.168.31.12 TCP_DENIED/403 1382 GET
> http://www.google.com/ - NONE/- text/html
> 1235431526.782  0 192.168.31.12 TCP_DENIED/403 1406 GET
> http://www.americas-pet.store.com/ - NONE/- text/html
> 1235431547.499  0 192.168.31.12 TCP_DENIED/403 1450 GET
> http://wiki.squid-cache.org/KnowledgeBase/DebugSections? - NONE/-
> text/html
> 1235431851.235  0 192.168.31.12 TCP_DENIED/403 1406 GET
> http://www.americas-pet-store.com/ - NONE/- text/html
> 1235431851.577  0 192.168.31.12 TCP_DENIED/403 1428 GET
> http://www.americas-pet-store.com/favicon.ico - NONE/- text/html
> 1235432020.747  2 192.168.31.12 TCP_DENIED/403 1406 GET
> http://www.americas-pet-store.com/ - NONE/- text/html
> 1235432022.176  2 192.168.31.12 TCP_DENIED/403 1406 GET
> http://www.americas-pet-store.com/ - NONE/- text/html
> 1235432030.656  4 192.168.31.12 TCP_DENIED/403 1450 GET
> http://wiki.squid-cache.org/KnowledgeBase/DebugSections? - NONE/-
> text/html
> 1235432036.294  2 192.168.31.12 TCP_DENIED/403 1382 GET
> http://www.google.com/ - NONE/- text/html
> 1235432087.084  2 192.168.31.12 TCP_DENIED/403 1382 GET
> http://www.google.com/ - NONE/- text/html
> [r...@virt1 ~]#


Assuming you remembered to -k reconfigure squid.
That leaves the question:
 are any of these actually listed in your allowed_sites.squid file?

mail.google.com
www.google.com
.google.com
www.americas-pet-store.com
.americas-pet-store.com
.com
wiki.squid-cache.org
.squid-cache.org
.org


Amos

> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]
> Sent: Monday, February 23, 2009 5:53 PM
> To: Jim Lawrence
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] New Setup help
>
>> Cisco1720 router --> 4 windows based servers 1 centos virtual server 1
>> centos squid server.
>> Client computers (8)
>>
>> Would like to have all web traffic blocked except websites defined in
> a
>> allowed_sites.squid config file.
>> My squid.conf file
>>
>> Should my squid server have 2 network cards or can I leave it with the
> one
>> ?
>
> One or two, it does not matter to the problem you currently have.
>
>>
>> +++
>> [r...@virt1 ~]# cat /etc/squid/squid.conf | sed '/ *#/d; /^ *$/d'
>> http_port 192.168.31.3:3128
>> hierarchy_stoplist cgi-bin ?
>> acl QUERY urlpath_regex cgi-bin \?
>> cache deny QUERY
>> acl apache rep_header Server ^Apache
>> broken_vary_encoding allow apache
>>  cache_dir ufs /var/spool/squid 1000 16 256
>> access_log /var/log/squid/access.log squid
>> dns_nameservers 192.168.31.11
>> refresh_pattern ^ftp:           1440    20%     10080
>> refresh_pattern ^gopher:        1440    0%      1440
>> refresh_pattern .               0       20%     4320
>> acl all src 0.0.0.0/0.0.0.0
>> acl manager proto cache_object
>> acl localhost src 127.0.0.1/255.255.255.255
>> acl to_localhost dst 127.0.0.0/8
>> acl SSL_ports port 443
>> acl CONNECT method CONNECT
>> acl good_url dstdomain "/etc/squid/allowed_sites.squid"
>> acl pnc_network src 192.168.31.0/255.255.255.0
>> http_access allow manager localhost
>> http_access deny manager
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>
>> http_access allow good_url
>
>  * permits anyone who can contact your squid to connect to any of the
> listed sites. Probably don't want that ...
>
>  * Or maybe you intended to be a reverse-proxy/accelerator for internal
> sites?
> http://wiki.squid-cache.org/ConfigExamples/Reverse/BasicAccelerator
>
> To enact your stated "all web traffic blocked except websites defined in
> a
> allowed_sites.squid config file"
>
> Add here:
>   http_access deny all

[squid-users] HTML loggin

2009-02-23 Thread Luis Daniel Lucio Quiroz
Hi Squids,

I wonder if it is possible to do this in Squid.  We need to log the
HTTP/GET responses of what users are surfing.

I mean to have one file per http-session (not per IP, because of a NAT-ed
firewall) and then inside that file I could see what this user is doing.
How could you reach that configuration?

TIA

LD


RE: [squid-users] No SSL to SSL redirection problem

2009-02-23 Thread Amos Jeffries
> I think url_rewrite_access is not supported by Squid 2.5 but is
> supported on Squid 2.6+.
>
> I was looking and I found this
> http://www.squid-cache.org/mail-archive/squid-users/200502/0150.html but I
> do not want to limit access on port 80.
>
> Any ideas?


Step 1: upgrade to a current Squid which supports your requirements.

Step 2: try the advised rewriter access controls.


Amos

>
> Thank you,
>
> Roberto O. Fernández Crisial
>
>
> -Original Message-
> From: John Doe [mailto:jd...@yahoo.com]
> Sent: Lunes 23 de Febrero de 2009 14:41
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] No SSL to SSL redirection problem
>
>
>> > > “http://...”, even after being matched by the script, and makes an
>> > > infinite loop of requests (the script redirects to https but Squid
>> > > takes it as http and makes the redirection again). What can I do?
>> > > How can I make the “http” to “https” rewrite work fine?
>> >
>> > What is your acl for the rewrite?
>> > Maybe that would prevent the loops...
>> >
>> >   url_rewrite_access allow !SSL_ports
>> >
>> I do not have a line "url_rewrite_access allow !SSL_ports"; I have one
>> like this "http_access deny CONNECT !SSL_ports".
>
> This access is just basic security.
>
> I was suggesting:
>url_rewrite_access allow !SSL_ports
> in order to only rewrite non-https URLs to avoid the loops.
>
> JD
>
>
>
>
>




RE: [squid-users] New Setup help

2009-02-23 Thread Jim Lawrence
Current config 

http_port 192.168.31.3:3128
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
 cache_dir ufs /var/spool/squid 1000 16 256
access_log /var/log/squid/access.log squid
dns_nameservers 192.168.31.11
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl CONNECT method CONNECT
acl good_url dstdomain "/etc/squid/allowed_sites.squid"
acl pnc_network src 192.168.31.0/255.255.255.0
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow good_url
http_access deny all
visible_hostname VIRT1
coredump_dir /var/spool/squid


[r...@virt1 ~]# tail -12 /var/log/squid/access.log
1235431489.584  1 192.168.31.12 TCP_DENIED/403 1420 GET
http://mail.google.com/mail/channel/test? - NONE/- text/html
1235431489.599  0 192.168.31.12 TCP_DENIED/403 1434 GET
http://mail.google.com/mail/images/cleardot.gif? - NONE/- text/html
1235431513.168  0 192.168.31.12 TCP_DENIED/403 1382 GET
http://www.google.com/ - NONE/- text/html
1235431526.782  0 192.168.31.12 TCP_DENIED/403 1406 GET
http://www.americas-pet.store.com/ - NONE/- text/html
1235431547.499  0 192.168.31.12 TCP_DENIED/403 1450 GET
http://wiki.squid-cache.org/KnowledgeBase/DebugSections? - NONE/-
text/html
1235431851.235  0 192.168.31.12 TCP_DENIED/403 1406 GET
http://www.americas-pet-store.com/ - NONE/- text/html
1235431851.577  0 192.168.31.12 TCP_DENIED/403 1428 GET
http://www.americas-pet-store.com/favicon.ico - NONE/- text/html
1235432020.747  2 192.168.31.12 TCP_DENIED/403 1406 GET
http://www.americas-pet-store.com/ - NONE/- text/html
1235432022.176  2 192.168.31.12 TCP_DENIED/403 1406 GET
http://www.americas-pet-store.com/ - NONE/- text/html
1235432030.656  4 192.168.31.12 TCP_DENIED/403 1450 GET
http://wiki.squid-cache.org/KnowledgeBase/DebugSections? - NONE/-
text/html
1235432036.294  2 192.168.31.12 TCP_DENIED/403 1382 GET
http://www.google.com/ - NONE/- text/html
1235432087.084  2 192.168.31.12 TCP_DENIED/403 1382 GET
http://www.google.com/ - NONE/- text/html
[r...@virt1 ~]#
-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, February 23, 2009 5:53 PM
To: Jim Lawrence
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] New Setup help

> Cisco1720 router --> 4 windows based servers 1 centos virtual server 1
> centos squid server.
> Client computers (8)
>
> Would like to have all web traffic blocked except websites defined in
a
> allowed_sites.squid config file.
> My squid.conf file
>
> Should my squid server have 2 network cards or can I leave it with the
one
> ?

One or two, it does not matter to the problem you currently have.

>
> +++
> [r...@virt1 ~]# cat /etc/squid/squid.conf | sed '/ *#/d; /^ *$/d'
> http_port 192.168.31.3:3128
> hierarchy_stoplist cgi-bin ?
> acl QUERY urlpath_regex cgi-bin \?
> cache deny QUERY
> acl apache rep_header Server ^Apache
> broken_vary_encoding allow apache
>  cache_dir ufs /var/spool/squid 1000 16 256
> access_log /var/log/squid/access.log squid
> dns_nameservers 192.168.31.11
> refresh_pattern ^ftp:           1440    20%     10080
> refresh_pattern ^gopher:        1440    0%      1440
> refresh_pattern .               0       20%     4320
> acl all src 0.0.0.0/0.0.0.0
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443
> acl CONNECT method CONNECT
> acl good_url dstdomain "/etc/squid/allowed_sites.squid"
> acl pnc_network src 192.168.31.0/255.255.255.0
> http_access allow manager localhost
> http_access deny manager
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports

> http_access allow good_url

 * permits anyone who can contact your squid to connect to any of the
listed sites. Probably don't want that ...

 * Or maybe you intended to be a reverse-proxy/accelerator for internal
sites?
http://wiki.squid-cache.org/ConfigExamples/Reverse/BasicAccelerator

To enact your stated "all web traffic blocked except websites defined in
a
allowed_sites.squid config file"

Add here:
  http_access deny all

drop the following http_access lines:

> http_access deny pnc_network
> http_access allow localhost
> http_access deny all
> http_reply_access allow all
> icp_access allow all
> visible_hostname VIRT1
> coredump_dir /var/spool/squid
> 
>
>
>
> client's cannot access anything.

Is the content of "/etc/squid/allowed_sites.squid"
correctly formatted for dstdomain?

A list of domain names one per line with the following style:

 example.com  - matches only example.com domain.

Re: [squid-users] authentication mechanism selected based on ip-address

2009-02-23 Thread Amos Jeffries
> Amos Jeffries wrote:
>> Joseph Spadavecchia wrote:
>>> Hi all,
>>>
>>> We have a requirement to use different authentication mechanisms
>>> based on the subnet/ip-address of the client.
>>>
>>> For example, a client from one subnet would authenticate against ntlm
>>> while a client from another subnet would authenticate against an LDAP
>>> server.
>>>
>>> AFAIK, this is normally done by running multiple instances of squid;
>>> but we have the requirement to do it with a single instance.  One way
>>> of achieving this would be to modify squid to pass the client's
>>> ip-address along with the authentication information.  However, I'd
>>> like to do it cleanly without modifying squid.
>>>
>>> Can anyone offer suggestions for doing this cleanly, without
>>> modifications to squid.
>>>
>>> Thanks in advance.
>>> Joseph
>>
>> External ACL taking client IP and Proxy-authentication header contents.
>> Then doing whatever you like and returning "OK user=XX\n" or "ERR\n"
>>
>> Amos
> Thanks Amos--- your suggestion seems to work.
>
> I created a custom authenticator that always returns "OK" and linked it
> to the external acl.
>
>  squid.conf 
>
> auth_param basic program /usr/local/bin/my-auth.pl
>
> external_acl_type myAclType %SRC %LOGIN %{Proxy-Authorization}
> /usr/local/bin/my-acl.pl
>
> acl MyAcl external myAclType
>
> http_access allow MyAcl
>
> * Note myAclType's dependence on %LOGIN is required for triggering
> authentication and, thus, setting %{Proxy-Authorization}.
>
>
>  my-auth.pl 
>
> #!/usr/bin/perl -Wl
>
> $|=1;
>
> while (<>) {
> print "OK";
> }
>
>
>  my-acl.pl 
>
> #!/usr/bin/perl -Wl
>
> use URI::Escape;
> use MIME::Base64;
>
> $|=1;
>
> while (<>) {
> ($ip,$user,$auth) = split();
> $auth = uri_unescape($auth);
> ($type,$authData) = split(/ /, $auth);
> $authString = decode_base64($authData);
> ($username,$password) = split(/:/, $authString);
>
> print my_awsome_auth($ip, $username, $password);
> }
>
> Thanks.
> Joseph
>

Excellent, thank you for this wonderful write-up.
I've added it to the wiki
http://wiki.squid-cache.org/ConfigExamples/Authenticate/MultipleSources

Amos



Re: [squid-users] Problem with IE crashing when accessing through a Squid Proxy

2009-02-23 Thread Amos Jeffries
> Hi
> we have a strange problem which I hope someone can give me a pointer
> towards resolving.
> Our setup consists of Squid 2.5 acting as a caching proxy interfacing with
> Websense 6.3.2 to provide access filtering. The issue below has been
> replicated using Squid 2.5 and Squid 2.7 acting solely as a caching proxy
> without Websense integration.
> When accessing bebo.com using Internet Explorer, once logged in, clicking
> on any of the "headline" tabs (Friends, Profile, Mail, Home) causes IE to
> crash. The error report seems to refer to mshtml.dll as the culprit. The
> current version of IE is a fully patched 6, but the issue is also present
> in a fully patched IE 7. Firefox works perfectly. Bypassing the proxy
> removes the problem. bebo is the only affected site so far.
>
> Any thoughts appreciated.
>

Could be anything.
Can you get any sort of trace about how far into the page IE gets before
crashing? That should lead you to the particular page component which is
crashing IE.

I suspect it's probably a script issue from one of the many plugins bebo
uses, rather than Squid.

Amos




Re: [squid-users] New Setup help

2009-02-23 Thread Amos Jeffries
> Cisco1720 router --> 4 windows based servers 1 centos virtual server 1
> centos squid server.
> Client computers (8)
>
> Would like to have all web traffic blocked except websites defined in a
> allowed_sites.squid config file.
> My squid.conf file
>
> Should my squid server have 2 network cards or can I leave it with the one
> ?

One or two, it does not matter to the problem you currently have.

>
> +++
> [r...@virt1 ~]# cat /etc/squid/squid.conf | sed '/ *#/d; /^ *$/d'
> http_port 192.168.31.3:3128
> hierarchy_stoplist cgi-bin ?
> acl QUERY urlpath_regex cgi-bin \?
> cache deny QUERY
> acl apache rep_header Server ^Apache
> broken_vary_encoding allow apache
>  cache_dir ufs /var/spool/squid 1000 16 256
> access_log /var/log/squid/access.log squid
> dns_nameservers 192.168.31.11
> refresh_pattern ^ftp:           1440    20%     10080
> refresh_pattern ^gopher:        1440    0%      1440
> refresh_pattern .               0       20%     4320
> acl all src 0.0.0.0/0.0.0.0
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443
> acl CONNECT method CONNECT
> acl good_url dstdomain "/etc/squid/allowed_sites.squid"
> acl pnc_network src 192.168.31.0/255.255.255.0
> http_access allow manager localhost
> http_access deny manager
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports

> http_access allow good_url

 * permits anyone who can contact your squid to connect to any of the
listed sites. Probably don't want that ...

 * Or maybe you intended to be a reverse-proxy/accelerator for internal
sites?
http://wiki.squid-cache.org/ConfigExamples/Reverse/BasicAccelerator

To enact your stated "all web traffic blocked except websites defined in a
allowed_sites.squid config file"

Add here:
  http_access deny all

drop the following http_access lines:

> http_access deny pnc_network
> http_access allow localhost
> http_access deny all
> http_reply_access allow all
> icp_access allow all
> visible_hostname VIRT1
> coredump_dir /var/spool/squid
> 
>
>
>
> client's cannot access anything.

Is the content of "/etc/squid/allowed_sites.squid"
correctly formatted for dstdomain?

A list of domain names one per line with the following style:

 example.com  - matches only example.com domain.

 .example.com   - matches example.com and ALL *.example.com sub-domains.
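
Putting the two pieces of advice together, the http_access section would
read, as a sketch:

  http_access allow manager localhost
  http_access deny manager
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow good_url
  http_access deny all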


Amos



Re: [squid-users] Squid cache cgi-bin

2009-02-23 Thread Amos Jeffries
> Hi,
>
> I have some questions about squid as reverse proxy.
>
> The web server I'm accelerating (cache_peer) has dynamic content
> (cgi-bin).
>
> At the beginning I left the default cache refresh values (so for cgi-bin
> and \? the value is "0"), the hierarchy stoplist for cgi-bin, and
> "no_cache deny all".
>
> Now these pages contain some elements, like .gif files, that I'd like to
> cache: these elements do not have the path
> http://nameserver/cgi-bin/... but a path like
> http://nameserver/icons...
>
> I tried with a normal ACL  elements url_regex  .gif .html .jpeg
>
> and then
>
> cache allow static
>
> But it seems squid is not caching anything!
>
> Could you give me any kind of advice?
>
> Thanks in advance
>


It's a little unclear what config you are having trouble using.
The various options you mention above are a mix of current, obsolete,
deprecated, and irrelevant.

But the use of correct options in the correct order is important for a
working Squid.

What version are you using?

And in the order listed in your squid.conf, what lines do you have that
start with:
cache, no_cache, refresh_pattern, acl, or cache_peer*
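
One detail visible in the quoted attempt: the url_regex acl was defined
under the name "elements" while the cache rule references "static", and
the unescaped dots in ".gif .html .jpeg" match any character. A
consistent, anchored sketch would be:

  acl static url_regex -i \.(gif|html?|jpe?g)$
  cache allow static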


Amos



RE: [squid-users] No SSL to SSL redirection problem

2009-02-23 Thread Roberto O. Fernández Crisial
I think url_rewrite_access is not supported by Squid 2.5 but is supported
on Squid 2.6+.

I was looking and I found this 
http://www.squid-cache.org/mail-archive/squid-users/200502/0150.html but I do 
not want to limit access on port 80.

Any ideas?

Thank you,

Roberto O. Fernández Crisial


-Original Message-
From: John Doe [mailto:jd...@yahoo.com] 
Sent: Lunes 23 de Febrero de 2009 14:41
To: squid-users@squid-cache.org
Subject: Re: [squid-users] No SSL to SSL redirection problem


> > > “http://...”, even after being matched by the script, and makes an
> > > infinite loop of requests (the script redirects to https but Squid
> > > takes it as http and makes the redirection again). What can I do? How
> > > can I make the “http” to “https” rewrite work fine?
> > 
> > What is your acl for the rewrite?
> > Maybe that would prevent the loops...
> > 
> >   url_rewrite_access allow !SSL_ports
> > 
> I do not have a line "url_rewrite_access allow !SSL_ports"; I have one
> like this "http_access deny CONNECT !SSL_ports".

This access is just basic security.

I was suggesting:
   url_rewrite_access allow !SSL_ports
in order to only rewrite non-https URLs to avoid the loops.

JD


  



RE: [squid-users] New Setup help

2009-02-23 Thread Jim Lawrence
I look at the log files 
tail -30 /var/log/squid/access.log

1235404880.957  0 192.168.31.75 TCP_DENIED/403 1380 CONNECT 
urs.microsoft.com:443 - NONE/- text/html
1235404880.959  0 192.168.31.75 TCP_DENIED/403 1380 CONNECT 
urs.microsoft.com:443 - NONE/- text/html
1235404880.977  0 192.168.31.75 TCP_DENIED/403 1380 CONNECT 
urs.microsoft.com:443 - NONE/- text/html
1235404880.979  0 192.168.31.75 TCP_DENIED/403 1380 CONNECT 
urs.microsoft.com:443 - NONE/- text/html
1235404888.122  0 192.168.31.75 TCP_DENIED/403 1382 GET 
http://www.google.com/ - NONE/- text/html
1235404893.279  0 192.168.31.75 TCP_DENIED/403 1406 GET 
http://www.americas-pet-store.com/ - NONE/- text/html

-Original Message-
From: da...@davidwbrown.name [mailto:da...@davidwbrown.name] 
Sent: Monday, February 23, 2009 11:39 AM
To: Jim Lawrence
Subject: Re: [squid-users] New Setup help 

Hello Jim, what in the way of logging are you monitoring? Regards, David.

Jim Lawrence wrote ..
> Cisco1720 router --> 4 windows based servers 1 centos virtual server 1 centos 
> squid
> server. 
> Client computers (8) 
> 
> Would like to have all web traffic blocked except websites defined in a 
> allowed_sites.squid
> config file.  
> My squid.conf file 
> 
> Should my squid server have 2 network cards or can I leave it with the one ? 
> 
> +++
> [r...@virt1 ~]# cat /etc/squid/squid.conf | sed '/ *#/d; /^ *$/d' 
> http_port 192.168.31.3:3128
> hierarchy_stoplist cgi-bin ?
> acl QUERY urlpath_regex cgi-bin \?
> cache deny QUERY
> acl apache rep_header Server ^Apache
> broken_vary_encoding allow apache
>  cache_dir ufs /var/spool/squid 1000 16 256
> access_log /var/log/squid/access.log squid
> dns_nameservers 192.168.31.11
> refresh_pattern ^ftp:           1440    20%     10080
> refresh_pattern ^gopher:        1440    0%      1440
> refresh_pattern .               0       20%     4320
> acl all src 0.0.0.0/0.0.0.0
> acl manager proto cache_object
> acl localhost src 127.0.0.1/255.255.255.255
> acl to_localhost dst 127.0.0.0/8
> acl SSL_ports port 443
> acl CONNECT method CONNECT
> acl good_url dstdomain "/etc/squid/allowed_sites.squid"
> acl pnc_network src 192.168.31.0/255.255.255.0
> http_access allow manager localhost
> http_access deny manager
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
> http_access allow good_url
> http_access deny pnc_network 
> http_access allow localhost
> http_access deny all
> http_reply_access allow all
> icp_access allow all
> visible_hostname VIRT1
> coredump_dir /var/spool/squid
> 
> 
> 
> 
> client's cannot access anything.  
> 
> Any help would be appreciated 
> 
> Jim


Re: [squid-users] No SSL to SSL redirection problem

2009-02-23 Thread John Doe

> > > “http://...”, even after being matched by the script, and makes an
> > > infinite loop of requests (the script redirects to https but Squid
> > > takes it as http and makes the redirection again). What can I do? How
> > > can I make the “http” to “https” rewrite work fine?
> > 
> > What is your acl for the rewrite?
> > Maybe that would prevent the loops...
> > 
> >   url_rewrite_access allow !SSL_ports
> > 
> I do not have a line "url_rewrite_access allow !SSL_ports"; I have one
> like this "http_access deny CONNECT !SSL_ports".

This access is just basic security.

I was suggesting:
   url_rewrite_access allow !SSL_ports
in order to only rewrite non-https URLs to avoid the loops.
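
With the usual acl definition alongside it, the sketch is:

  acl SSL_ports port 443
  url_rewrite_access allow !SSL_ports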

JD






[squid-users] New Setup help

2009-02-23 Thread Jim Lawrence
Cisco1720 router --> 4 windows based servers 1 centos virtual server 1 centos 
squid server. 
Client computers (8) 

Would like to have all web traffic blocked except websites defined in a 
allowed_sites.squid config file.  
My squid.conf file 

Should my squid server have 2 network cards or can I leave it with the one ? 

+++
[r...@virt1 ~]# cat /etc/squid/squid.conf | sed '/ *#/d; /^ *$/d' 
http_port 192.168.31.3:3128
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
 cache_dir ufs /var/spool/squid 1000 16 256
access_log /var/log/squid/access.log squid
dns_nameservers 192.168.31.11
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl CONNECT method CONNECT
acl good_url dstdomain "/etc/squid/allowed_sites.squid"
acl pnc_network src 192.168.31.0/255.255.255.0
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow good_url
http_access deny pnc_network 
http_access allow localhost
http_access deny all
http_reply_access allow all
icp_access allow all
visible_hostname VIRT1
coredump_dir /var/spool/squid




client's cannot access anything.  

Any help would be appreciated 

Jim



RE: [squid-users] No SSL to SSL redirection problem

2009-02-23 Thread Roberto O. Fernández Crisial
JD,

The exits are for testing and should not be in the example I wrote.

The access.log shows (after redirection):

1235404323.937  0 200.127.215.7 TCP_MISS/301 181 GET http://xxx.yyy.com/ - 
NONE/- -
1235404324.445  0 200.127.215.7 TCP_MISS/301 181 GET http://xxx.yyy.com / - 
NONE/- -

I do not have a line "url_rewrite_access allow !SSL_ports"; I have one
like this "http_access deny CONNECT !SSL_ports".


Regards,
Roberto.


-Original Message-
From: John Doe [mailto:jd...@yahoo.com] 
Sent: Lunes 23 de Febrero de 2009 13:55
To: squid-users@squid-cache.org
Subject: Re: [squid-users] No SSL to SSL redirection problem


> I’m using Squid 2.5-STABLE14 with SSL support. I need to rewrite every url
> with “http://...“ request to “https://...” so I use this script at the
> redirect_program line:

Old version... ^_^

> #!/usr/bin/perl
> 
> $|=1;
> 
> while (<>)
> {
> @X = split;
> $url = $X[0];
> 
> if ($url =~ /^http:\/\//)
> {
> $url =~ s/http/https/;
> print "301:$url\n";
> exit;
> }
> else
> {
> print "$url\n";
> exit;
> }
> }

I think you should remove the 2 exits...

> The problem is that access.log shows every GET with
> “http://...”, even after being matched by the script, and makes an
> infinite loop of requests (the script redirects to https but Squid takes
> it as http and makes the redirection again). What can I do? How can I
> make the “http” to “https” rewrite work fine?

What is your acl for the rewrite?
Maybe that would prevent the loops...

  url_rewrite_access allow !SSL_ports

JD


  



Re: [squid-users] No SSL to SSL redirection problem

2009-02-23 Thread John Doe

> I’m using Squid 2.5-STABLE14 with SSL support. I need to rewrite every url
> with “http://...“ request to “https://...” so I use this script at the
> redirect_program line:

Old version... ^_^

> #!/usr/bin/perl
> 
> $|=1;
> 
> while (<>)
> {
> @X = split;
> $url = $X[0];
> 
> if ($url =~ /^http:\/\//)
> {
> $url =~ s/http/https/;
> print "301:$url\n";
> exit;
> }
> else
> {
> print "$url\n";
> exit;
> }
> }

I think you should remove the 2 exits...
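
As a sketch, the same loop with the exits removed, so the helper keeps
answering requests instead of dying after the first one:

#!/usr/bin/perl
# redirector sketch: rewrite http:// URLs to https:// via a 301
$|=1;

while (<>)
{
    my ($url) = split;          # first field of the helper line is the URL

    if ($url =~ /^http:\/\//)
    {
        $url =~ s/^http/https/; # change the scheme only
        print "301:$url\n";     # ask squid to issue a 301 redirect
    }
    else
    {
        print "$url\n";         # pass everything else through unchanged
    }
}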

> The problem is that access.log shows every GET with
> “http://...”, even after being matched by the script, and makes an
> infinite loop of requests (the script redirects to https but Squid takes
> it as http and makes the redirection again). What can I do? How can I
> make the “http” to “https” rewrite work fine?

What is your acl for the rewrite?
Maybe that would prevent the loops...

  url_rewrite_access allow !SSL_ports

JD






[squid-users] Cisco router IOS version for WCCP

2009-02-23 Thread Vivek

Hi All,



Which IOS version in the 12.4 series is best for a Squid+Tproxy+WCCP
setup? Some versions have bugs in traffic redirection.




Please post the version details.



Thanks,

Vivek N.




[squid-users] Problem with IE crashing when accessing through a Squid Proxy

2009-02-23 Thread alastair . marshall
Hi
we have a strange problem which I hope someone can give me a pointer towards 
resolving.
Our setup consists of Squid 2.5 acting as a caching proxy interfacing with 
Websense 6.3.2 to provide access filtering. The issue below has been
replicated using Squid 2.5 and Squid 2.7 acting solely as a caching proxy
without Websense integration.
When accessing bebo.com using Internet Explorer, once logged in, clicking
on any of the "headline" tabs (Friends, Profile, Mail, Home) causes IE to
crash. The error report seems to refer to mshtml.dll as the culprit. The
current version of IE is a fully patched 6, but the issue is also present
in a fully patched IE 7. Firefox works perfectly. Bypassing the proxy
removes the problem. bebo is the only affected site so far.

Any thoughts appreciated.
 

--
Alastair Marshall
IT Support Analyst





[squid-users] No SSL to SSL redirection problem

2009-02-23 Thread Roberto O. Fernández Crisial
Hi,

My name is Roberto and I’m a new user of the list. I am having a problem
and I want to know if you can help me with it.

I’m using Squid 2.5-STABLE14 with SSL support. I need to rewrite every url
with “http://...“ request to “https://...” so I use this script at the
redirect_program line:

#!/usr/bin/perl

$|=1;

while (<>)
{
    @X = split;
    $url = $X[0];

    if ($url =~ /^http:\/\//)
    {
    $url =~ s/http/https/;
    print "301:$url\n";
    exit;
    }
    else
    {
    print "$url\n";
    exit;
    }
}

    The problem is that access.log shows every GET with
“http://...”, even after being matched by the script, and makes an
infinite loop of requests (the script redirects to https but Squid takes
it as http and makes the redirection again). What can I do? How can I
make the “http” to “https” rewrite work fine?

Thank you,

Roberto O. Fernández Crisial.




Re: [squid-users] authentication mechanism selected based on ip-address

2009-02-23 Thread Joseph Spadavecchia

Amos Jeffries wrote:

Joseph Spadavecchia wrote:

Hi all,

We have a requirement to use different authentication mechanisms 
based on the subnet/ip-address of the client.


For example, a client from one subnet would authenticate against ntlm 
while a client from another subnet would authenticate against an LDAP 
server.


AFAIK, this is normally done by running multiple instances of squid; 
but we have the requirement to do it with a single instance.  One way 
of achieving this would be to modify squid to pass the client's 
ip-address along with the authentication information.  However, I'd 
like to do it cleanly without modifying squid.


Can anyone offer suggestions for doing this cleanly, without 
modifications to squid.


Thanks in advance.
Joseph


External ACL taking client IP and Proxy-authentication header contents.
Then doing whatever you like and returning "OK user=XX\n" or "ERR\n"

Amos

Thanks Amos--- your suggestion seems to work.

I created a custom authenticator that always returns "OK" and linked it 
to the external acl.


 squid.conf 

auth_param basic program /usr/local/bin/my-auth.pl

external_acl_type myAclType %SRC %LOGIN %{Proxy-Authorization} 
/usr/local/bin/my-acl.pl


acl MyAcl external myAclType

http_access allow MyAcl

* Note myAclType's dependence on %LOGIN is required for triggering 
authentication and, thus, setting %{Proxy-Authorization}.



 my-auth.pl 

#!/usr/bin/perl -Wl

$|=1;

while (<>) {
   print "OK";
}


 my-acl.pl 

#!/usr/bin/perl -Wl

use URI::Escape;
use MIME::Base64;

$|=1;

while (<>) {
   ($ip,$user,$auth) = split();
   $auth = uri_unescape($auth);
   ($type,$authData) = split(/ /, $auth);
   $authString = decode_base64($authData);
   ($username,$password) = split(/:/, $authString);
  
   print my_awsome_auth($ip, $username, $password);

}

Thanks.
Joseph

--
Joseph Spadavecchia



t. +44 (0)1506 426 976
f. +44 (0)1506 691 408
e. mailto:jspadavecc...@bloxx.com
w. http://www.bloxx.com/





[squid-users] Squid cache cgi-bin

2009-02-23 Thread projpr...@libero.it
Hi,

I have some questions about squid as reverse proxy.

The web server I'm accelerating (cache_peer) has dynamic content
(cgi-bin).

At the beginning I left the default cache refresh values (so for cgi-bin
and \? the value is "0"), the hierarchy stoplist for cgi-bin, and
"no_cache deny all".

Now these pages contain some elements, like .gif files, that I'd like to
cache: these elements do not have the path http://nameserver/cgi-bin/...
but a path like http://nameserver/icons...

I tried with a normal ACL  elements url_regex  .gif .html .jpeg

and then

cache allow static

But it seems squid is not caching anything!

Could you give me any kind of advice?

Thanks in advance


Re: [squid-users] check in the squid logs if dynamic page was cached

2009-02-23 Thread Matus UHLAR - fantomas
> squid proxy wrote:
> >how to check in the squid logs whether a dynamic page (asp, cgi-bin
> >etc.) was cached or not?

everything that is not excluded from caching by a '(no_)cache deny'
directive is cached by default.

You apparently mean if the cached content was provided to any clients, which
means that the caching was useful.

On 21.02.09 00:25, Amos Jeffries wrote:
> If the page ever gets a *_HIT in access log it's been/being cached.

The store log contains the status of objects fetched, released, and
stored to disk, if it's turned on (it's off by default). But that's
probably not what you are interested in...
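
A sketch of turning it on in squid.conf:

  cache_store_log /var/log/squid/store.log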

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
There's a long-standing bug relating to the x86 architecture that
allows you to install Windows.   -- Matthew D. Fuller


Re: [squid-users] reload_into_ims on

2009-02-23 Thread John Doe

> A packet trace on the outbound side of squid.
> The more interesting thing would be a packet trace of the whole squid-server 
> communication and see as I suggested, whether that 304 contains a body object 
> or 
> not.
> 
> Run this on the squid box:
>   tcpdump -w $SERVERIP.trace -i $IFACE host $SERVERIP
> where:
>   SERVERIP is the IP of the remote server.
>   IFACE is the internet-facing interface on the squid box.
> 
> And while its capturing, run your simple reload test.
> 
> The file $SERVERIP.trace can be browsed with ethereal/tethereal/wireshark to 
> view the traffic.

Hum... my bad.
I turned on headers logging in the web server and squid does not re-fetch the 
files...
I took squid's status code response to the client for the webserver's response 
to squid...
But, at the same time,  I found these headers from squid in the web server logs:
  Pragma: no-cache
  Via: 1.1 test.here:80 (squid)
  X-Forwarded-For: 192.168.16.23
  Cache-Control: no-cache, max-age=31536000
So I am still a bit confused...  ^_^

Thx,
JD


  



RE: [squid-users] Helper protocol issue with wbinfo_group.pl

2009-02-23 Thread Benedict White

-Original Message-
From: crobert...@gci.net [mailto:crobert...@gci.net] 
Sent: 19 February 2009 20:37
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Helper protocol issue with wbinfo_group.pl

Benedict White wrote:
> When I use wbinfo_group.pl it clearly can and does go and check if a given 
> users is in the specified Active Directory group, which is good.
>
> The problem is that it returns "OK" to squid which squid does not seem to 
> like.
>
> Here is the relevent line in squid.conf:
> external_acl_type nt_group ttl=5 protocol=2.5 concurrency=5 %LOGIN 
> /usr/lib/squid/wbinfo_group.pl -d
>
> and here is what the logile shows:
>
> helperHandleRead: unexpected reply on channel -1 from nt_group #1 'OK'
>   

Channel -1?  That looks like the script isn't set up to handle 
concurrency.  Try "children=5" instead of "concurrency=5" in the 
external_acl_type definition and see if that works better.
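
That is, as a sketch:

  external_acl_type nt_group ttl=5 protocol=2.5 children=5 %LOGIN
  /usr/lib/squid/wbinfo_group.pl -d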

>>>

Many thanks Chris that solved the problem.

It seems that in more than one place people have put the wrong thing on
the web. Still, at least this is right.

Kind Regards

Benedict White
 



Re: [squid-users] Anonymize surfing

2009-02-23 Thread Matus UHLAR - fantomas
> On Tue, Feb 03, 2009 at 05:21:50PM +0100, Matus UHLAR - fantomas wrote:
> > On 03.02.09 21:18, Vikram Goyal wrote:
> > > I want to anonymize surfing for that I have squid version 3.0 running in
> > > transparent mode. I have 
> > > 
> > > request_header_access From deny all
> > > request_header_access Referer deny all
> > > request_header_access Server deny all
> > > request_header_access User-Agent deny all
> > > request_header_access Via deny all
> > > request_header_access WWW-Authenticate deny all
> > > request_header_access X-Forwarded-For deny all
> > > header_replace Referer unknown
> > > header_replace User-Agent Mozilla/5.0 (Windows; U; MSIE 7.0; Windows NT 
> > > 6.0; en-US)
> > > header_replace Via 127.0.0.1
> > > header_replace X-Forwarded-For 127.0.0.1
> > > 
> > > But it is still spewing the info on the net.
> > > 
> > > What could be wrong with this config?
> > 
> > are you sure users really use the proxy?

On 06.02.09 22:37, Vikram Goyal wrote:
> Well, I am the only user. Squid is running in transparent mode.

does it really see the requests?

> This is an experimental setup. Also, the HTTP requests are being diverted to
> dansguardian and then to squid through iptables. Squid is logging the
> requests, but checking on sites like http://grc.com still identifies the
> browser and OS in use from the browser's requests.

are you sure _all_ requests are intercepted? aren't there any connections to
different port(s)?
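
One quick way to check, assuming a typical iptables REDIRECT setup ($IFACE is
a placeholder, as in the tcpdump example earlier in the thread):

  iptables -t nat -L -n -v

shows whether the redirect rule's packet counters are actually increasing, and

  tcpdump -ni $IFACE 'tcp and not port 80'

gives a rough look at connections on other ports (e.g. 443), which a port-80
intercept will never see.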

-- 
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
"Two words: Windows survives." - Craig Mundie, Microsoft senior strategist
"So does syphillis. Good thing we have penicillin." - Matthew Alton


Re: [squid-users] One More Problem !!!2009/02/23 14:04:25| tunnelReadServer: FD 37: read failure: (0) Success

2009-02-23 Thread Shekhar Gupta
I have 2 of the systems, which I upgraded to 3.0.13, and on both of them
I am getting the same messages.

On Mon, Feb 23, 2009 at 4:14 PM, Amos Jeffries  wrote:
> Shekhar Gupta wrote:
>>
>> 2009/02/23 13:09:00| tunnelReadServer: FD 350: read failure: (0) Success
>> 2009/02/23 13:28:36| tunnelReadServer: FD 332: read failure: (0) Success
>> 2009/02/23 13:37:51| tunnelReadServer: FD 401: read failure: (0) Success
>> 2009/02/23 13:38:38| tunnelReadServer: FD 395: read failure: (0) Success
>> 2009/02/23 13:38:48| tunnelReadServer: FD 569: read failure: (0) Success
>> 2009/02/23 13:39:42| tunnelReadServer: FD 425: read failure: (0) Success
>> 2009/02/23 14:04:25| tunnelReadServer: FD 37: read failure: (0) Success
>> 2009/02/23 14:21:09| tunnelReadServer: FD 542: read failure: (0) Success
>> 2009/02/23 14:28:44| tunnelReadServer: FD 758: read failure: (0) Success
>> 2009/02/23 14:29:30| tunnelReadServer: FD 713: read failure: (0) Success
>> 2009/02/23 14:35:59| tunnelReadServer: FD 126: read failure: (0) Success
>> 2009/02/23 14:37:52| tunnelReadServer: FD 356: read failure: (0) Success
>> 2009/02/23 14:38:08| tunnelReadServer: FD 165: read failure: (0) Success
>> 2009/02/23 14:40:36| tunnelReadServer: FD 543: read failure: (0) Success
>> 2009/02/23 14:40:39| tunnelReadServer: FD 429: read failure: (0) Success
>> 2009/02/23 14:40:47| tunnelReadServer: FD 396: read failure: (0) Success
>> 2009/02/23 14:41:25| tunnelReadServer: FD 504: read failure: (32) Broken
>> pipe
>> 2009/02/23 14:42:43| tunnelReadServer: FD 383: read failure: (0) Success
>>
>> Can anyone tell me what this means?
>
> Might mean a read or write problem; might mean nothing.
> We are looking for someone able to reproduce this message reliably and also
> having the skills to track down why it appears on the occasions when squid
> keeps going afterwards without an additional fatal error cropping up.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
>  Current Beta Squid 3.1.0.5
>


Re: [squid-users] One More Problem !!!2009/02/23 14:04:25| tunnelReadServer: FD 37: read failure: (0) Success

2009-02-23 Thread Amos Jeffries

Shekhar Gupta wrote:

2009/02/23 13:09:00| tunnelReadServer: FD 350: read failure: (0) Success
2009/02/23 13:28:36| tunnelReadServer: FD 332: read failure: (0) Success
2009/02/23 13:37:51| tunnelReadServer: FD 401: read failure: (0) Success
2009/02/23 13:38:38| tunnelReadServer: FD 395: read failure: (0) Success
2009/02/23 13:38:48| tunnelReadServer: FD 569: read failure: (0) Success
2009/02/23 13:39:42| tunnelReadServer: FD 425: read failure: (0) Success
2009/02/23 14:04:25| tunnelReadServer: FD 37: read failure: (0) Success
2009/02/23 14:21:09| tunnelReadServer: FD 542: read failure: (0) Success
2009/02/23 14:28:44| tunnelReadServer: FD 758: read failure: (0) Success
2009/02/23 14:29:30| tunnelReadServer: FD 713: read failure: (0) Success
2009/02/23 14:35:59| tunnelReadServer: FD 126: read failure: (0) Success
2009/02/23 14:37:52| tunnelReadServer: FD 356: read failure: (0) Success
2009/02/23 14:38:08| tunnelReadServer: FD 165: read failure: (0) Success
2009/02/23 14:40:36| tunnelReadServer: FD 543: read failure: (0) Success
2009/02/23 14:40:39| tunnelReadServer: FD 429: read failure: (0) Success
2009/02/23 14:40:47| tunnelReadServer: FD 396: read failure: (0) Success
2009/02/23 14:41:25| tunnelReadServer: FD 504: read failure: (32) Broken pipe
2009/02/23 14:42:43| tunnelReadServer: FD 383: read failure: (0) Success

Can anyone tell me what this means?


Might mean a read or write problem; might mean nothing.
We are looking for someone able to reproduce this message reliably and
also having the skills to track down why it appears on the occasions
when squid keeps going afterwards without an additional fatal error
cropping up.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.5


Re: [squid-users] WARNING! Your cache is running out of filedescriptors -------Version 3.0.STABLE13

2009-02-23 Thread Shekhar Gupta
I think this is some bug, as the same machines with the 2.6 squid version
were not having any of these messages. I still have 3 machines on the
older squid version; I upgraded 2 machines to the 3.0.13 version and I
am finding this problem.

On Mon, Feb 23, 2009 at 3:53 PM, Amos Jeffries  wrote:
> Shekhar Gupta wrote:
>>
>> Amos,
>>
>> I only configured it with delay pools, so you are saying that I have
>> to recompile squid with that option. Do I have to do anything
>> else apart from that, like something in the OS?
>
> I would hope nothing in OS is needed. But I don't know RHEL very well.
> The option is equivalent to --with-maxfd from 2.6. With the same usage and
> related settings.
>
> Amos
>
>>
>> On Mon, Feb 23, 2009 at 3:12 PM, Amos Jeffries 
>> wrote:
>>>
>>> Shekhar Gupta wrote:

 Guys, I tried fixing this, however most of the directives are not
 working with this version. Can anyone throw some light on how to
 make this fix in Version 3.0.STABLE13 running on RHEL 5.3?
>>>
>>> Check you are using the configure option: --with-filedescriptors=N
>>> 3.0 uses a different option name than 2.6 did.
>>>
>>> Amos
>>> --
>>> Please be using
>>>  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
>>>  Current Beta Squid 3.1.0.5
>>>
>
>
> --
> Please be using
>  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
>  Current Beta Squid 3.1.0.5
>


Re: [squid-users] WARNING! Your cache is running out of filedescriptors -------Version 3.0.STABLE13

2009-02-23 Thread Amos Jeffries

Shekhar Gupta wrote:

Amos,

I only configured it with delay pools, so you are saying that I have
to recompile squid with that option. Do I have to do anything
else apart from that, like something in the OS?


I would hope nothing in OS is needed. But I don't know RHEL very well.
The option is equivalent to --with-maxfd from 2.6. With the same usage 
and related settings.
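
A minimal rebuild sequence would look something like this; the limit value is
only an example, and any other ./configure options you built with before need
to be repeated (raise the shell limit first so the build sees it):

  ulimit -HSn 8192
  ./configure --with-filedescriptors=8192 [...your other options...]
  make && make install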


Amos



On Mon, Feb 23, 2009 at 3:12 PM, Amos Jeffries  wrote:

Shekhar Gupta wrote:

Guys, I tried fixing this, however most of the directives are not
working with this version. Can anyone throw some light on how to
make this fix in Version 3.0.STABLE13 running on RHEL 5.3?

Check you are using the configure option: --with-filedescriptors=N
3.0 uses a different option name than 2.6 did.

Amos
--
Please be using
 Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
 Current Beta Squid 3.1.0.5




--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.5


Re: [squid-users] WARNING! Your cache is running out of filedescriptors -------Version 3.0.STABLE13

2009-02-23 Thread Shekhar Gupta
Amos,

I only configured it with delay pools, so you are saying that I have
to recompile squid with that option. Do I have to do anything
else apart from that, like something in the OS?

On Mon, Feb 23, 2009 at 3:12 PM, Amos Jeffries  wrote:
> Shekhar Gupta wrote:
>>
>> Guys, I tried fixing this, however most of the directives are not
>> working with this version. Can anyone throw some light on how to
>> make this fix in Version 3.0.STABLE13 running on RHEL 5.3?
>
> Check you are using the configure option: --with-filedescriptors=N
> 3.0 uses a different option name than 2.6 did.
>
> Amos
> --
> Please be using
>  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
>  Current Beta Squid 3.1.0.5
>


[squid-users] One More Problem !!!2009/02/23 14:04:25| tunnelReadServer: FD 37: read failure: (0) Success

2009-02-23 Thread Shekhar Gupta
2009/02/23 13:09:00| tunnelReadServer: FD 350: read failure: (0) Success
2009/02/23 13:28:36| tunnelReadServer: FD 332: read failure: (0) Success
2009/02/23 13:37:51| tunnelReadServer: FD 401: read failure: (0) Success
2009/02/23 13:38:38| tunnelReadServer: FD 395: read failure: (0) Success
2009/02/23 13:38:48| tunnelReadServer: FD 569: read failure: (0) Success
2009/02/23 13:39:42| tunnelReadServer: FD 425: read failure: (0) Success
2009/02/23 14:04:25| tunnelReadServer: FD 37: read failure: (0) Success
2009/02/23 14:21:09| tunnelReadServer: FD 542: read failure: (0) Success
2009/02/23 14:28:44| tunnelReadServer: FD 758: read failure: (0) Success
2009/02/23 14:29:30| tunnelReadServer: FD 713: read failure: (0) Success
2009/02/23 14:35:59| tunnelReadServer: FD 126: read failure: (0) Success
2009/02/23 14:37:52| tunnelReadServer: FD 356: read failure: (0) Success
2009/02/23 14:38:08| tunnelReadServer: FD 165: read failure: (0) Success
2009/02/23 14:40:36| tunnelReadServer: FD 543: read failure: (0) Success
2009/02/23 14:40:39| tunnelReadServer: FD 429: read failure: (0) Success
2009/02/23 14:40:47| tunnelReadServer: FD 396: read failure: (0) Success
2009/02/23 14:41:25| tunnelReadServer: FD 504: read failure: (32) Broken pipe
2009/02/23 14:42:43| tunnelReadServer: FD 383: read failure: (0) Success

Can anyone tell me what this means?


Re: [squid-users] WARNING! Your cache is running out of filedescriptors -------Version 3.0.STABLE13

2009-02-23 Thread Amos Jeffries

Shekhar Gupta wrote:

Guys, I tried fixing this, however most of the directives are not
working with this version. Can anyone throw some light on how to
make this fix in Version 3.0.STABLE13 running on RHEL 5.3?


Check you are using the configure option: --with-filedescriptors=N
3.0 uses a different option name than 2.6 did.
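
Once rebuilt, you can confirm what the running squid actually got, assuming
the cache manager is reachable:

  squidclient mgr:info | grep -i 'file desc'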

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.5


[squid-users] WARNING! Your cache is running out of filedescriptors -------Version 3.0.STABLE13

2009-02-23 Thread Shekhar Gupta
Guys, I tried fixing this, however most of the directives are not
working with this version. Can anyone throw some light on how to
make this fix in Version 3.0.STABLE13 running on RHEL 5.3?