Re: [squid-users] Run squid2.5.6 and dansguardian got error message: (111) Connection refused

2008-11-10 Thread zhang yikai
10.0.2.110 is the machine running squid and dansguardian. Thank you for your reply.
- Original Message - 
From: "Henrik Nordstrom" <[EMAIL PROTECTED]>
To: "zhang yikai" <[EMAIL PROTECTED]>
Cc: "Amos Jeffries" <[EMAIL PROTECTED]>; "Kinkie" <[EMAIL PROTECTED]>; 

Sent: Tuesday, November 11, 2008 3:32 PM
Subject: Re: [squid-users] Run squid2.5.6 and dansguardian got error message: 
(111) Connection refused



Re: [squid-users] Run squid2.5.6 and dansguardian got error message: (111) Connection refused

2008-11-10 Thread Henrik Nordstrom
On Tue, 2008-11-11 at 11:36 +0800, zhang yikai wrote:

> - DIRECT/10.0.2.110 text/html
> 1226418445.662    137 127.0.0.1 TCP_MISS/503 1883 GET http://www.google.com/ 
> - DIRECT/10.0.2.110 text/html

Why does your Squid server resolve www.google.com to 10.0.2.110?

Regards
Henrik




RE: [squid-users] parseHTTPRequest problem with SQUID3

2008-11-10 Thread Henrik Nordstrom
On Tue, 2008-11-11 at 15:24 +1300, Amos Jeffries wrote:

> Not fully 1.1, but from (0.9 + 1.0) to fully 1.0 + partial 1.1. Which is
> weird because 2.6 went almost fully 1.0 as well quite a while back.

From this discussion it seems Squid-3 no longer accepts the obsolete
HTTP/0.9 style requests.

Squid-2 does support HTTP/0.9 in accelerator mode, including returning an
HTTP/0.9 style response (no headers in either request or response).

Regards
Henrik




[squid-users] How to interrupt ongoing transfer

2008-11-10 Thread kaustav_deybiswas

Hi,
I am a squid newbie. I am trying to set up daily download quotas for NCSA
authorized users. I have a daemon running which checks the log files, and
whenever the download limit is reached (for a particular user), it blocks
that user in the config and reconfigures squid (squid -k reconfigure) for
the changes to take effect. The problem is, if an http/ftp transfer is in
progress (for that user), the changes made in the config don't take effect
until that transfer session completes. Is there any way I can interrupt the
download somehow (or say, force squid to re-read its ACLs) without affecting
sessions of other users?
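
A sketch of the kind of daemon loop described (the quota check and file
names are hypothetical, for illustration only):

  while sleep 300; do
    for user in $(users_over_quota); do        # hypothetical log-parsing helper
      echo "$user" >> /etc/squid/blocked_users # file referenced by an acl in squid.conf
      squid -k reconfigure
    done
  done
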
Thanks,
Kaustav
-- 



Re: [squid-users] Run squid2.5.6 and dansguardian got error message: (111) Connection refused

2008-11-10 Thread zhang yikai
 Now I have found that when squid and dansguardian run on different machines it
works, but it doesn't work on one machine. What is the problem?

 
Thanks for your help
- Original Message - 
From: "Kinkie" <[EMAIL PROTECTED]>
To: "zhang yikai" <[EMAIL PROTECTED]>
Cc: 
Sent: Monday, November 10, 2008 9:13 PM
Subject: Re: [squid-users] Run squid2.5.6 and dansguardian got error message: 
(111) Connection refused


> On Mon, Nov 10, 2008 at 11:11 AM, zhang yikai <[EMAIL PROTECTED]> wrote:
>>
>>
>> hi all,
>>
>> I installed squid and it works properly. Then I ran dansguardian; connecting to
>> squid port 3128 is ok, but when I use dansguardian port 8080 as a proxy, I
>> get the error message (111) Connection refused. I don't know what the
>> problem is. Thank you.
> 
> Are you sure that dansguardian is running and that squid accepted and
> understood your forwarding instructions?
> It would seem that either squid is forwarding to the wrong parent, or
> that the parent is not running.
> 
> 
> -- 
>/kinkie

Re: [squid-users] Run squid2.5.6 and dansguardian got error message: (111) Connection refused

2008-11-10 Thread zhang yikai

- Original Message - 
From: "Amos Jeffries" <[EMAIL PROTECTED]>
To: "zhang yikai" <[EMAIL PROTECTED]>
Cc: "Kinkie" <[EMAIL PROTECTED]>; 
Sent: Tuesday, November 11, 2008 10:38 AM
Subject: Re: [squid-users] Run squid2.5.6 and dansguardian got error message: 
(111) Connection refused


>> Thanks for your help. I ran wget:
>>
>> [EMAIL PROTECTED] logs]# wget www.google.com
>> --09:19:40--  http://www.google.com/
>>=> `index.html'
>> Connecting to 10.0.2.110:9090... connected.
>> Proxy request sent, awaiting response... 403 Forbidden
>> 09:19:41 ERROR 403: Forbidden.
>>
>>
>> this is the info from access.log files:
>>
>> in squid access.log:
>>
>> 1226413180.997 13 127.0.0.1 TCP_DENIED/403 1847 GET
>> http://www.google.com/ - NONE/- text/html
>>
>>
>> this is the dansguardian access.log file:
>>
>> 2008.11.11 9:19:41 - 10.0.2.110 http://www.google.com *EXCEPTION*
>> Exception client IP match. GET 1512
>>
>>
>> my squid.conf file:
>> 
>> acl CONNECT method CONNECT
>> http_access allow manager localhost
>> http_access deny manager
>>
>> http_access deny !Safe_ports
>> http_access deny CONNECT !SSL_ports
>>
>> http_access allow localnet
> 
> Needs "http_access allow localhost" here to accept traffic from
> dansguardian through 127.0.0.1
> 
> Amos
> 
>



[EMAIL PROTECTED] logs]# wget www.google.com
--11:39:56--  http://www.google.com/
   => `index.html.1'
Connecting to 10.0.2.110:9090... connected.
Proxy request sent, awaiting response... 503 Service Unavailable
11:39:57 ERROR 503: Service Unavailable.


Now the log info changed to:


- DIRECT/10.0.2.110 text/html
1226418445.662    137 127.0.0.1 TCP_MISS/503 1883 GET http://www.google.com/ - 
DIRECT/10.0.2.110 text/html
1226418462.424    146 127.0.0.1 TCP_MISS/503 1883 GET http://www.google.com/ - 
DIRECT/10.0.2.110 text/html
1226418488.169    142 127.0.0.1 TCP_MISS/503 2490 GET http://www.google.com/ - 
DIRECT/10.0.2.110 text/html
1226418495.286    142 127.0.0.1 TCP_MISS/503 2266 GET http://www.google.com/ - 
DIRECT/10.0.2.110 text/html



Re: [squid-users] Unable to forward this request at this time.

2008-11-10 Thread ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
On Tue, Nov 11, 2008 at 9:31 AM, Amos Jeffries <[EMAIL PROTECTED]> wrote:
>
>
> Ahh okay. "cache_peer 202.169.51.118" should be the web server IP as seen
> from Squid (internal IP if squid is internal, external IP if squid is
> external, localhost maybe if squid is on same machine).
>
> Amos
>

So should I change it to the local IP? Or what?




Re: [squid-users] Run squid2.5.6 and dansguardian got error message: (111) Connection refused

2008-11-10 Thread Amos Jeffries
> Thanks for your help. I ran wget:
>
> [EMAIL PROTECTED] logs]# wget www.google.com
> --09:19:40--  http://www.google.com/
>=> `index.html'
> Connecting to 10.0.2.110:9090... connected.
> Proxy request sent, awaiting response... 403 Forbidden
> 09:19:41 ERROR 403: Forbidden.
>
>
> this is the info from access.log files:
>
> in squid access.log:
>
> 1226413180.997 13 127.0.0.1 TCP_DENIED/403 1847 GET
> http://www.google.com/ - NONE/- text/html
>
>
> this is the dansguardian access.log file:
>
> 2008.11.11 9:19:41 - 10.0.2.110 http://www.google.com *EXCEPTION*
> Exception client IP match. GET 1512
>
>
> my squid.conf file:
> 
> acl CONNECT method CONNECT
> http_access allow manager localhost
> http_access deny manager
>
> http_access deny !Safe_ports
> http_access deny CONNECT !SSL_ports
>
> http_access allow localnet

Needs "http_access allow localhost" here to accept traffic from
dansguardian through 127.0.0.1
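
For example, with the config quoted earlier in this thread, the
http_access section would become (only the localhost line is new):

  http_access allow manager localhost
  http_access deny manager
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow localhost
  http_access allow localnet
  http_access deny all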

Amos




Re: [squid-users] Unable to forward this request at this time.

2008-11-10 Thread Amos Jeffries
> all:
> it works now,
> but only from internal.
> From external (the Internet) it looks like a domain without any hosting space.
>
> I removed:
>> http_port 80 accel defaultsite=monitor.gpi-g.com
>> cache_peer 202.169.51.118 parent 80 0 no-query originserver name=myAccel
>> acl our_sites dstdomain monitor.gpi-g.com
>> http_access allow our_sites
>> cache_peer_access myAccel allow our_sites
>
>
> to amos :
> My squid address = 192.168.222.2 <-- to the lan
> 202.169.51.118 <-- my public ip
> same machine
>

Ahh okay. "cache_peer 202.169.51.118" should be the web server IP as seen
from Squid (internal IP if squid is internal, external IP if squid is
external, localhost maybe if squid is on same machine).
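
For the same-machine case that would be something like (a sketch; the web
server's real listening address and port must be substituted, 127.0.0.1:8080
is only an assumption):

  cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=myAccel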

Amos

>
>
> On Mon, Nov 10, 2008 at 9:47 PM, Henrik Nordstrom
> <[EMAIL PROTECTED]> wrote:
>> On Tue, 2008-11-11 at 03:14 +1300, Amos Jeffries wrote:
>>> Henrik Nordstrom wrote:
>>> > From the error it sounds like it has declared the peer down.
>>>
>>> But why? I'm thinking forwarding loops.
>>
>> Forwarding loops are logged very aggressively in cache.log as such, and
>> don't result in an error to the user. All Squid does on a forwarding
>> loop is to try to go direct to the origin.
>>
>> The error message seen here was "Unable to forward" which means that
>> never_direct is in effect (on by default on accelerated requests), and
>> that it did not find a parent where to forward the request.
>>
>> Regards
>> Henrik
>>
>
>
>




RE: [squid-users] parseHTTPRequest problem with SQUID3

2008-11-10 Thread Amos Jeffries
> Thanks for your response
>
>> That message means there was no HTTP/1.0 tag on the request line.
>> Squid begins assuming HTTP/0.9 traffic.
>>
>>
>>> Squid 2.6 handled these fine, and my configuration hasn't changed, so
>>> was there something introduced in Squid3 that demands a hostname?
>>
>> no.
>
> Something has to have changed, because I ported my config over as-is
> (aside from undefining the 'all' acl element, as specified in the
> release notes)
>
> For a minute I thought Squid had gone HTTP/1.1 and I needed my health
> checks to supply a Host header, but my capture shows the response as:
>

Not fully 1.1, but from (0.9 + 1.0) to fully 1.0 + partial 1.1. Which is
weird because 2.6 went almost fully 1.0 as well quite a while back.

> P...HTTP/1.0.400.Bad.Request..Server:.squid/3.0.STABLE10..Mime-Versi
> on:.1.0..Date:.Mon,.10.Nov.2008.22:49:53 (+content)
>
>
>>> acl our_site dstdomain cached.whatever.com
>>> acl Origin-Whatever dst 1.1.1.1
>>> acl acceleratedPort port 80
>>> acl HealthChecks urlpath_regex mgmt/alive
>>> always_direct allow HealthChecks
>>
>> This forces HealthChecks to take an abnormal path. Try just letting them
>> go the same way as regular accelerated requests. It will be more accurate
>> to match the health of client requests.
>
> I thought always_direct kept requests from being checked against the
> cache/siblings?

always_direct prevents the requests going through peers. Nothing more.
If the domain itself resolves to allow direct requests it's okay, but
accelerators should be set up so the domain resolves to Squid, which can
cause issues.

>  I don't want them cached or logged, just proxied from
> the origin - so keep 'cache deny HealthChecks' and dump the
> 'always_direct allow HealthChecks'?  I actually tried that during my
> troubleshooting phase, and it didn't seem to change anything, but I
> would like to be using everything properly.

Yes, to prevent storing them use 'cache deny HealthChecks'.
To prevent logging use 'access_log ... !HealthChecks'
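
e.g. with the log path from your config:

  access_log /var/log/squid/access.log squid !HealthChecks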

>
>
>>> cache deny HealthChecks
>>> cache allow Origin-Whatever
>>> http_access allow Origin-Whatever acceleratedPort
>>
>> I'd say the above two lines are the problem. Unless you are juggling DNS
>> perfectly to make clients resolve the domain as Squid, and squid resolve
>> the domain as web server, the 'dst' ACL will fail to work properly on
>> accelerated requests.
>> The dstdomain our_site should be used here instead.
>
> I juggle, yes.  The load balancer uses a virtual IP, to which the
> cached.whatever.com record points, which pools traffic to my Squid
> boxes.  I use /etc/hosts on the Squid boxes to point cached.whatever.com
> to an internal virtual IP that pools traffic to my origin servers.  This
> provides the flexibility and redundancy we need for this setup, and this
> configuration has always worked fine with 2.6.

Okay. It should have worked the same in 3.x. See my last comment.

>
>> Try the config fixes above, and if it still fails can you post a complete
>> byte-wise exact copy of the failing health check headers please?
>>
>> Amos
>
> I did notice that if I edited my hosts file to point cached.whatever.com
> to my new squid3 box, and requested
> http://cached.whatever.com/mgmt/alive, I got my 200 response.  However
> if I telnet'ed to the new squid3 box on port 80, typed 'GET /mgmt/alive'
> and hit enter twice, I would get that 400.  That really leads me to
> believe that a hostname is required, as opposed to problems with my
> config.
>
> Thanks again for your thoughts on this
>

Okay. That confirms my idea that the HealthChecks request is missing the '
HTTP/1.0' part of the request string. The first line of every valid
accelerated request should look something like this:
  "GET /mgmt/alive HTTP/1.0\n"

Amos



RE: [squid-users] parseHTTPRequest problem with SQUID3

2008-11-10 Thread Gregori Parker
Thanks for your response

> That message means there was no HTTP/1.0 tag on the request line.
> Squid begins assuming HTTP/0.9 traffic.
>
>
>> Squid 2.6 handled these fine, and my configuration hasn't changed, so
>> was there something introduced in Squid3 that demands a hostname?
>
> no.

Something has to have changed, because I ported my config over as-is
(aside from undefining the 'all' acl element, as specified in the
release notes)

For a minute I thought Squid had gone HTTP/1.1 and I needed my health
checks to supply a Host header, but my capture shows the response as:

P...HTTP/1.0.400.Bad.Request..Server:.squid/3.0.STABLE10..Mime-Versi
on:.1.0..Date:.Mon,.10.Nov.2008.22:49:53 (+content)


>> acl our_site dstdomain cached.whatever.com
>> acl Origin-Whatever dst 1.1.1.1
>> acl acceleratedPort port 80
>> acl HealthChecks urlpath_regex mgmt/alive
>> always_direct allow HealthChecks
>
> This forces HealthChecks to take an abnormal path. Try just letting them
> go the same way as regular accelerated requests. It will be more accurate
> to match the health of client requests.

I thought always_direct kept requests from being checked against the
cache/siblings?  I don't want them cached or logged, just proxied from
the origin - so keep 'cache deny HealthChecks' and dump the
'always_direct allow HealthChecks'?  I actually tried that during my
troubleshooting phase, and it didn't seem to change anything, but I
would like to be using everything properly.


>> cache deny HealthChecks
>> cache allow Origin-Whatever
>> http_access allow Origin-Whatever acceleratedPort
>
> I'd say the above two lines are the problem. Unless you are juggling DNS
> perfectly to make clients resolve the domain as Squid, and squid resolve
> the domain as web server, the 'dst' ACL will fail to work properly on
> accelerated requests.
> The dstdomain our_site should be used here instead.

I juggle, yes.  The load balancer uses a virtual IP, to which the
cached.whatever.com record points, which pools traffic to my Squid
boxes.  I use /etc/hosts on the Squid boxes to point cached.whatever.com
to an internal virtual IP that pools traffic to my origin servers.  This
provides the flexibility and redundancy we need for this setup, and this
configuration has always worked fine with 2.6.

> Try the config fixes above, and if it still fails can you post a complete
> byte-wise exact copy of the failing health check headers please?
> 
> Amos

I did notice that if I edited my hosts file to point cached.whatever.com
to my new squid3 box, and requested
http://cached.whatever.com/mgmt/alive, I got my 200 response.  However
if I telnet'ed to the new squid3 box on port 80, typed 'GET /mgmt/alive'
and hit enter twice, I would get that 400.  That really leads me to
believe that a hostname is required, as opposed to problems with my
config.

Thanks again for your thoughts on this

- Gregori




Re: [squid-users] Run squid2.5.6 and dansguardian got error message: (111) Connection refused

2008-11-10 Thread zhang yikai
Thanks for your help. I ran wget:

[EMAIL PROTECTED] logs]# wget www.google.com
--09:19:40--  http://www.google.com/
   => `index.html'
Connecting to 10.0.2.110:9090... connected.
Proxy request sent, awaiting response... 403 Forbidden
09:19:41 ERROR 403: Forbidden.


this is the info from access.log files:

in squid access.log:

1226413180.997 13 127.0.0.1 TCP_DENIED/403 1847 GET http://www.google.com/ 
- NONE/- text/html


this is the dansguardian access.log file:

2008.11.11 9:19:41 - 10.0.2.110 http://www.google.com *EXCEPTION* Exception 
client IP match. GET 1512


my squid.conf file:



#Default:
# acl all src all
#
#Recommended minimum configuration:
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
#
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.2.0/24    # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
#
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 9090    # dansguardian port
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_access allow localnet

http_access deny all







- Original Message - 
From: "Kinkie" <[EMAIL PROTECTED]>
To: "zhang yikai" <[EMAIL PROTECTED]>
Cc: 
Sent: Monday, November 10, 2008 9:13 PM
Subject: Re: [squid-users] Run squid2.5.6 and dansguardian got error message: 
(111) Connection refused


> On Mon, Nov 10, 2008 at 11:11 AM, zhang yikai <[EMAIL PROTECTED]> wrote:
>>
>>
>> hi all,
>>
>> I installed squid and it works properly. Then I ran dansguardian; connecting to
>> squid port 3128 is ok, but when I use dansguardian port 8080 as a proxy, I
>> get the error message (111) Connection refused. I don't know what the
>> problem is. Thank you.
> 
> Are you sure that dansguardian is running and that squid accepted and
> understood your forwarding instructions?
> It would seem that either squid is forwarding to the wrong parent, or
> that the parent is not running.
> 
> 
> -- 
>/kinkie

Re: [squid-users] Unable to forward this request at this time.

2008-11-10 Thread ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░
all:
it works now,
but only from internal.
From external (the Internet) it looks like a domain without any hosting space.

I removed:
> http_port 80 accel defaultsite=monitor.gpi-g.com
> cache_peer 202.169.51.118 parent 80 0 no-query originserver name=myAccel
> acl our_sites dstdomain monitor.gpi-g.com
> http_access allow our_sites
> cache_peer_access myAccel allow our_sites


to amos :
My squid address = 192.168.222.2 <-- to the lan
202.169.51.118 <-- my public ip
same machine



On Mon, Nov 10, 2008 at 9:47 PM, Henrik Nordstrom
<[EMAIL PROTECTED]> wrote:
> On Tue, 2008-11-11 at 03:14 +1300, Amos Jeffries wrote:
>> Henrik Nordstrom wrote:
>> > From the error it sounds like it has declared the peer down.
>>
>> But why? I'm thinking forwarding loops.
>
> Forwarding loops are logged very aggressively in cache.log as such, and
> don't result in an error to the user. All Squid does on a forwarding
> loop is to try to go direct to the origin.
>
> The error message seen here was "Unable to forward" which means that
> never_direct is in effect (on by default on accelerated requests), and
> that it did not find a parent where to forward the request.
>
> Regards
> Henrik
>





Re: [squid-users] parseHTTPRequest problem with SQUID3

2008-11-10 Thread Amos Jeffries
> I've just rolled back a failed Squid migration from 2.6 to 3.0, and I'm
> looking for reasons why it failed.  I have been successfully using the
> latest Squid 2.6 to http-accel a pool of backend web servers, with a
> load-balancer in front to direct traffic.
>
> The load-balancer hits the squid server with a health check, i.e. GET
> /mgmt/alive and expects an HTTP 200, before allowing it to have traffic.
> When I turned up Squid3, all health checks failed...showing the
> following in access.log:
>
> 1226355682.853  0  NONE/400 1931 GET
> http://cached.whatever.com/ps/management/alive - NONE/- text/html
> 1226355684.875  0  NONE/400 1931 GET
> http://cached.whatever.com/ps/management/alive - NONE/- text/html
> 1226355687.905  0  NONE/400 1931 GET
> http://cached.whatever.com/ps/management/alive - NONE/- text/html
>
> After some troubleshooting and turning debug_options up, it appears that
> perhaps it's the request done without a hostname that's the problem,
> because I see 'parseHttpRequest: Missing HTTP identifier' in cache.log
> with debug_options set to ALL,3.

That message means there was no HTTP/1.0 tag on the request line.
Squid begins assuming HTTP/0.9 traffic.

>
> Squid 2.6 handled these fine, and my configuration hasn't changed, so was
> there something introduced in Squid3 that demands a hostname?

no.

>  I know
> from packet captures that my load-balancer literally connects to the
> squid server on port 80 and does a GET /mgmt/alive (not GET
> http://cached.whatever.com/mgmt/alive)
>
> Here are the relevant portions of my config:
>
> http_port 80 accel defaultsite=cached.whatever.com vhost
> cache_dir null /tmp
>
> cache_peer 1.1.1.1 parent 80 0 no-query no-digest originserver
> name=Cached-Whatever
> cache_peer_domain Cached-Whatever cached.whatever.com
>
> acl our_site dstdomain cached.whatever.com
> acl Origin-Whatever dst 1.1.1.1
> acl acceleratedPort port 80
> acl HealthChecks urlpath_regex mgmt/alive
>
> always_direct allow HealthChecks

This forces HealthChecks to take an abnormal path. Try just letting them
go the same way as regular accelerated requests. It will be more accurate
to match the health of client requests.

> cache deny HealthChecks

> cache allow Origin-Whatever
> http_access allow Origin-Whatever acceleratedPort

I'd say the above two lines are the problem. Unless you are juggling DNS
perfectly to make clients resolve the domain as Squid, and squid resolve
the domain as web server, the 'dst' ACL will fail to work properly on
accelerated requests.
The dstdomain our_site should be used here instead.

> http_access deny all
> http_reply_access allow all
>
> access_log /var/log/squid/access.log squid !HealthChecks
> visible_hostname cached.whatever.com
> unique_hostname squid03
>

Try the config fixes above, and if it still fails can you post a complete
byte-wise exact copy of the failing health check headers please?

Amos



[squid-users] parseHTTPRequest problem with SQUID3

2008-11-10 Thread Gregori Parker
I've just rolled back a failed Squid migration from 2.6 to 3.0, and I'm
looking for reasons why it failed.  I have been successfully using the
latest Squid 2.6 to http-accel a pool of backend web servers, with a
load-balancer in front to direct traffic.

The load-balancer hits the squid server with a health check, i.e. GET
/mgmt/alive and expects an HTTP 200, before allowing it to have traffic.
When I turned up Squid3, all health checks failed...showing the
following in access.log:

1226355682.853  0  NONE/400 1931 GET
http://cached.whatever.com/ps/management/alive - NONE/- text/html
1226355684.875  0  NONE/400 1931 GET
http://cached.whatever.com/ps/management/alive - NONE/- text/html
1226355687.905  0  NONE/400 1931 GET
http://cached.whatever.com/ps/management/alive - NONE/- text/html

After some troubleshooting and turning debug_options up, it appears that
perhaps it's the request done without a hostname that's the problem,
because I see 'parseHttpRequest: Missing HTTP identifier' in cache.log
with debug_options set to ALL,3.

Squid 2.6 handled these fine, and my configuration hasn't changed, so was
there something introduced in Squid3 that demands a hostname?  I know
from packet captures that my load-balancer literally connects to the
squid server on port 80 and does a GET /mgmt/alive (not GET
http://cached.whatever.com/mgmt/alive)

Here are the relevant portions of my config:

http_port 80 accel defaultsite=cached.whatever.com vhost 
cache_dir null /tmp

cache_peer 1.1.1.1 parent 80 0 no-query no-digest originserver
name=Cached-Whatever
cache_peer_domain Cached-Whatever cached.whatever.com

acl our_site dstdomain cached.whatever.com
acl Origin-Whatever dst 1.1.1.1
acl acceleratedPort port 80
acl HealthChecks urlpath_regex mgmt/alive

always_direct allow HealthChecks
cache deny HealthChecks
cache allow Origin-Whatever
http_access allow Origin-Whatever acceleratedPort
http_access deny all
http_reply_access allow all

access_log /var/log/squid/access.log squid !HealthChecks
visible_hostname cached.whatever.com
unique_hostname squid03


Thanks - Gregori



Re: [squid-users] squid 3.1 is stable enough for production / testing?

2008-11-10 Thread Amos Jeffries
> 3.1 is certainly ready for testing. That's why we started making beta
> releases (3.1.0.X).
>
> Please give it a try and report back your findings. I don't think this
> is a setup that is commonly tested so it's very good if you can test
> this now while the release is actively being tested.
>
> Regards
> Henrik
>
> On Tue, 2008-11-11 at 00:25 +0800, John Mok wrote:
>> Hi,
>>
>> I would like to set up a squid proxy server for NTLM proxying (i.e.
>> connection pinning) + ICAP (clamav). I hope someone could advise if
>> there is any catch I need to pay attention to.
>>

A few bugs are still open. You will need to see if one pops up in your
testing before production can be considered.
On specifics, the squid_kerb_auth helper upgrade is having some teething
problems still on 3.1.0.1 and 3.1.0.2.  Should be resolved soon though.

Amos



Re: [squid-users] About squid ICAP implementation

2008-11-10 Thread Henrik Nordstrom
On Sun, 2008-11-09 at 16:11 +0900, Mikio Kishi wrote:
> Hi, would you tell me about the ICAP implementation in squid?
> 
> - Question.1
>   If there is no "icap_access" setting,
>   is the default icap access control "allow" or "deny"?
>   It looks like "allow"...

Should be deny. icap_access selects which icap class to forward the
request via, and without any icap_access directive there is no selected
icap class.
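
For reference, selecting a class explicitly looks like this in 3.0 (a
sketch, with an assumed local service URL):

  icap_enable on
  icap_service svc_req reqmod_precache 0 icap://127.0.0.1:1344/reqmod
  icap_class class_req svc_req
  icap_access class_req allow all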

> - Question.2
>   Could we set "more than two" REQMOD icap servers (per request) ?

Only one is supported at this stage.

> - Question.3
>   squid "always" sends "Allow: 204" header to icap server, right ?

Yes, unless forcibly disabled by setting icap_preview_enable off.

Regards
Henrik





Re: [squid-users] squid 3.1 is stable enough for production / testing?

2008-11-10 Thread Henrik Nordstrom
3.1 is certainly ready for testing. That's why we started making beta
releases (3.1.0.X).

Please give it a try and report back your findings. I don't think this
is a setup that is commonly tested so it's very good if you can test
this now while the release is actively being tested.

Regards
Henrik

On Tue, 2008-11-11 at 00:25 +0800, John Mok wrote:
> Hi,
> 
> I would like to set up a squid proxy server for NTLM proxying (i.e.
> connection pinning) + ICAP (clamav). I hope someone could advise if
> there is any catch I need to pay attention to.
> 
> Thanks a lot.
> 
> John Mok




[squid-users] squid 3.1 is stable enough for production / testing?

2008-11-10 Thread John Mok

Hi,

I would like to set up a squid proxy server for NTLM proxying (i.e.
connection pinning) + ICAP (clamav). I hope someone could advise if
there is any catch I need to pay attention to.


Thanks a lot.

John Mok


Re: [squid-users] Unable to forward this request at this time.

2008-11-10 Thread Henrik Nordstrom
On Tue, 2008-11-11 at 03:14 +1300, Amos Jeffries wrote:
> Henrik Nordstrom wrote:
> > From the error it sounds like it has declared the peer down.
> 
> But why? I'm thinking forwarding loops.

Forwarding loops are logged very aggressively in cache.log as such, and
don't result in an error to the user. All Squid does on a forwarding
loop is to try to go direct to the origin.

The error message seen here was "Unable to forward" which means that
never_direct is in effect (on by default on accelerated requests), and
that it did not find a parent where to forward the request.
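
(To see what Squid currently thinks of its parents, the cachemgr
'server_list' page shows per-peer status, e.g.:

  squidclient mgr:server_list

assuming squidclient can reach the proxy on its configured port.)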

Regards
Henrik




RE: [squid-users] URL Filtering for Squid

2008-11-10 Thread Alex Huxham
Ubuntu - apt-get install ufdbGuard worked here; you may have to locate the
correct repository for it though.
Not sure about other formats.

Alex

-Original Message-
From: a bv [mailto:[EMAIL PROTECTED] 
Sent: 10 November 2008 14:27
To: Alex Huxham
Subject: Re: [squid-users] URL Filtering for Squid

Thanks for the information, but as I see on the site the software is
provided as source. Are there any official/non-official packages (rpm,
deb, etc.)?

Regards

2008/11/10 Alex Huxham <[EMAIL PROTECTED]>:
>
>
> -Original Message-
> From: Alex Huxham
> Sent: 10 November 2008 09:52
> To: 'a bv'
> Subject: RE: [squid-users] URL Filtering for Squid
>
> Yes and no, there are free ones; however http://urlblacklist.com allows
> you a FREE copy, only ONCE though: your first download of the blacklist
> is free. It's worth giving this a go; pretty much everything is
> contained. You may need to just create a custom blacklist/whitelist to
> allow/disallow sites not included in the lists.
>
> QUOTE: TRY FOR FREE: You can try the service by downloading the
> blacklist once for free.
>
> Alex
>
> -Original Message-
> From: a bv [mailto:[EMAIL PROTECTED]
> Sent: 10 November 2008 09:49
> To: Alex Huxham
> Subject: Re: [squid-users] URL Filtering for Squid
>
> Am I right that the software is free but we have to pay for the database?
> If so, are there any free databases for use with this software?
>
>
> Regards
>
>
>
> 2008/11/10 Alex Huxham <[EMAIL PROTECTED]>:
>> Yet another yes from me, used within our school, and works perfectly
> for
>> 900+ students and 200+ staff. Easy to configure, documented well and
>> there are plenty of resources on a google search to get you going
>> perfectly.
>>
>> Alex
>>
>> -Original Message-
>> From: Marcus Kool [mailto:[EMAIL PROTECTED]
>> Sent: 10 November 2008 09:18
>> To: a bv
>> Cc: squid-users@squid-cache.org
>> Subject: Re: [squid-users] URL Filtering for Squid
>>
>> I am the author of ufdbGuard which is based on squidGuard.
>> ufdbGuard is free and can be used with both free and commercial
>> databases.
>>
>> -Marcus
>>
>>
>> a bv wrote:
>>> Hi,
>>>
>>> What is/are the popular/commonly used open source (and maybe also
>>> other free) URL/content filtering solutions/software? And who
>>> maintains the URL databases?
>>>
>>> Regards
>>>
>>>
>>
>


Re: [squid-users] squid and loadbalancing option

2008-11-10 Thread Amos Jeffries

Martin Mulder wrote:

Hi,

I have a (maybe stupid) question.
I have an apache server as reverse proxy, squid as caching server and
Zope/Plone as backend servers.

Scenario:

1) Apache gets a request for my.domain.com
2) Apache does a ProxyPass to my balancer
3) I have 2 "sticky" vhosts in apache which are the balancer members.
These are not reachable from the outside, and are called sticky1 and
sticky2.
These "sticky" vhost creates a cookie, which is uses by the balancer,
to decide the sticky server.
these sticky vhost forwards the request to Squid.

4) Squid is running @ 127.0.0.1:3389 and 127.0.0.2:3389
5) Sticky1 vhost proxies a request to http://sticky1.domain.com:3389,
sticky2 vhost proxies a request to http://sticky2.domain.com:3389
( where sticky1.domain.com resolves to 127.0.0.1 and sticky2.domain.com
resolves to 127.0.0.2 )

At the moment I have the following configuration:
## Backend server 1
cache_peer 192.168.2.3 parent 8100 0 no-query originserver
name=server1
cache_peer_domain server1 sticky1.domain.com

## Backend server 2
cache_peer 192.168.2.4 parent 8100 0 no-query originserver
name=server2
cache_peer_domain server2 sticky2.domain.com


This results in:

a web request comes to apache ( without a host cookie ) on
my.domain.com
the request will be proxied to a balancer member ( based on the
balancing policy )
this vhost creates a cookie ( like: BALANCEID: balancer.sticky1 or
BALANCEID: balancer.sticky2 )
the vhosts proxy the request to http://sticky1.domain.com:3389 or
http://sticky2.domain.com:3389


-- The request reaches squid --

the request comes to the cache_peer depending on the domain ( sticky1
of sticky2 )
squid delivers the page ( or from cache, or from originserver )

In this case squid can deliver a 500 or 503 if the requested backend
server is down.
So if my request has a cookie for sticky1.domain.com and 192.168.2.3
originserver is down, the users with this cookie get an error.
So if my request has a cookie for sticky2.domain.com and 192.168.2.4
originserver is down, the users with this cookie get an error.


What I try, but can't get working:

every cache_peer_domain has 2 cache_peers:


* name= is a unique identifier. It cannot be used twice.



## Backend server 1
cache_peer 192.168.2.3 parent 8100 0 no-query originserver
name=server1
cache_peer 192.168.2.4 parent 8100 0 no-query originserver name=server2
// This server doesn't have the session information
cache_peer_domain server1 sticky1.domain.com

## Backend server 2
cache_peer 192.168.2.4 parent 8100 0 no-query originserver
name=server2
cache_peer 192.168.2.3 parent 8100 0 no-query originserver name=server1
// This server doesn't have the session information
cache_peer_domain server2 sticky2.domain.com

But the cache peer which doesn't have the session may only be used if the
other cache_peer is down.

Is this possible with squid 2.6?



Not the way you are trying.

Assuming that the use of cookies means the visitors need to always
request from the same origin, I think the simpler approach is to have
Squid as the front-end accelerator, using sourcehash to load balance
over the real origin servers.
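
Something along these lines (a sketch reusing the peer addresses from your
mail; the front-end port and site name are assumptions):

  http_port 80 accel defaultsite=my.domain.com vhost
  cache_peer 192.168.2.3 parent 8100 0 no-query originserver sourcehash name=server1
  cache_peer 192.168.2.4 parent 8100 0 no-query originserver sourcehash name=server2

sourcehash pins each client IP to the same peer, so a session stays on one
origin while it is up; if that origin dies, requests fail over to the other
peer.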


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.2


Re: [squid-users] how to configure wccp load balancing with squid.

2008-11-10 Thread Egi Konomi

Hello Gregory,

While setting up a squid+wccp solution I found this information really
helpful:


http://www.reub.net/node/3
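
The squid side boils down to a few wccp2_* directives, roughly (a sketch;
the router IP and methods are assumptions to adapt):

  wccp2_router 192.168.1.1
  wccp2_forwarding_method 1   # 1 = GRE encapsulation
  wccp2_return_method 1
  wccp2_service standard 0    # the standard web-cache service group

plus the matching 'ip wccp' statements and a GRE tunnel on the cisco side.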

Best Regards!

Egi

Gregory Machin wrote:

Hi
I'm looking for a howto or some docs showing how to do load balancing.
I have a single cisco router and would like to have two or more
squid caches in a load-balanced configuration. Any suggestions?
Thanks


  





Re: [squid-users] Unable to forward this request at this time.

2008-11-10 Thread Amos Jeffries

Henrik Nordstrom wrote:

From the error it sounds like it has declared the peer down.


But why? I'm thinking forwarding loops.



On Mon, 2008-11-10 at 11:35 +0700, ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:

here is my squid .conf
===
http_port 2210 transparent
icp_port 3130
snmp_port 3401
cache_mgr admin
emulate_httpd_log off
cache_replacement_policy heap LFUDA
maximum_object_size_in_memory 50 KB
maximum_object_size 50 MB

http_port 80 accel defaultsite=monitor.gpi-g.com
cache_peer 202.169.51.118 parent 80 0 no-query originserver name=myAccel


 ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░   you said:
 my public IP 202.169.51.118
 and monitor.gpi-g.com is 202.169.51.118 too

What is Squid IP?

Is web server actually running on 202.169.51.118:80?





On Mon, Nov 10, 2008 at 11:28 AM, Amos Jeffries <[EMAIL PROTECTED]> wrote:

░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:

my squid working for other site :(
fyi :
my public IP 202.169.51.118
and monitor.gpi-g.com is 202.169.51.118 too

and i can browse mail.gpi-g.com - 202.169.51.119
( same server farm with different server - one line/level with 118 )

Then it's probably a configuration problem.
By your description (IPs the same) your squid is supposed to be an
accelerator for that website. Which means the cache_peer configuration needs
to be checked.

Amos


On Mon, Nov 10, 2008 at 11:21 AM, Amos Jeffries <[EMAIL PROTECTED]>
wrote:

░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:

I can't browse from outside or locally

ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: http://monitor.gpi-g.com/

The following error was encountered:

  * Unable to forward this request at this time.

This request could not be forwarded to the origin server or to any
parent caches. The most likely cause for this error is that:

  * The cache administrator does not allow this cache to make direct
connections to origin servers, and
  * All configured parent caches are currently unreachable.

Your cache administrator is [EMAIL PROTECTED]
Generated Mon, 10 Nov 2008 03:18:16 GMT by gpi-g.com
(squid/2.6.STABLE18)




So it seems.

Check your squid configuration and Internet connection.

Amos
--
Please be using
 Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
 Current Beta Squid 3.1.0.1






--
Please be using
 Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
 Current Beta Squid 3.1.0.1







--
Please be using
  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
  Current Beta Squid 3.1.0.1


[squid-users] Re: Partition suggestions for Squid using a 500Gb drive?

2008-11-10 Thread Ed Flecko
Thanks Chuck.

Unfortunately, this little 1U server does not have room for more than
1 hard drive. :-(

Do you have any specific "config options" in mind about how to best
use the memory?

Thank You,
Ed

On Sun, Nov 9, 2008 at 6:12 PM, Chuck Kollars <[EMAIL PROTECTED]> wrote:
>>  ... It's my understanding that Squid doesn't really need a fast
>> processor, ...
>
> Yep, Squid is very seldom CPU-bound.
>
>>  ... 16G of memory ...
>
> Good, the more RAM the better. There are some config options to make best use 
> of this memory; the key is to "cache" stuff in memory rather than on disk, up 
> to the amount of real memory you have. (If you don't get the config options 
> right you can wind up "wasting" most of the RAM; Squid won't figure it out 
> automagically.)
>
>>  ...Here's the partition setup ... I welcome ALL of your suggestions ...
>
> While Squid is not CPU-bound, it IS IO BOUND! There's no right way to 
> partition the disk; forget it. Instead ADD A SECOND DISK (perhaps small, but 
> _fast_). If possible use the "other"/"new" IO channel for the disk, mount the 
> single partition on the second disk with 'noatime', and locate only the Squid 
> cache (nothing else) on it.
>
> thanks! -Chuck Kollars
>
>
>
>
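
(A sketch of the mount Chuck describes, with an assumed device name, in
/etc/fstab:

  /dev/sdb1  /var/spool/squid  ext3  defaults,noatime  0  0

and then a single cache_dir pointed at /var/spool/squid.)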
>


[squid-users] how to configure wccp load balancing with squid.

2008-11-10 Thread Gregory Machin
Hi
I'm looking for a howto or some docs showing how to do load balancing.
I have a single cisco router and would like to have two or more
squid caches in a load-balanced configuration. Any suggestions?
Thanks


Re: [squid-users] Run squid2.5.6 and dansguardian got error message: (111) Connection refused

2008-11-10 Thread Kinkie
On Mon, Nov 10, 2008 at 11:11 AM, zhang yikai <[EMAIL PROTECTED]> wrote:
>
>
> hi all,
>
> I installed squid and it works properly. Then I ran dansguardian; connecting to
> squid port 3128 is ok, but when I use dansguardian port 8080 as a proxy, I got
> the error message (111) Connection refused. I don't know what the problem is.
> Thank you.

Are you sure that dansguardian is running and that squid accepted and
understood your forwarding instructions?
It would seem that either squid is forwarding to the wrong parent, or
that the parent is not running.
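
(A quick check that something is actually listening on the DansGuardian
port, run on the proxy box:

  netstat -lnp | grep 8080
)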


-- 
/kinkie


[squid-users] squid and loadbalancing option

2008-11-10 Thread Martin Mulder
Hi,

I have a (maybe stupid) question.
I have an apache server as reverse proxy, squid as caching server and
Zope/Plone as backend servers.

Scenario:

1) Apache gets a request for my.domain.com
2) Apache does a ProxyPass to my balancer
3) I have 2 "sticky" vhosts in apache which are the balancer members.
These are not reachable from the outside, and are called sticky1 and
sticky2.
These "sticky" vhost creates a cookie, which is uses by the balancer,
to decide the sticky server.
these sticky vhost forwards the request to Squid.

4) Squid is running @ 127.0.0.1:3389 and 127.0.0.2:3389
5) Sticky1 vhost proxies a request to http://sticky1.domain.com:3389,
sticky2 vhost proxies a request to http://sticky2.domain.com:3389
( where sticky1.domain.com resolves to 127.0.0.1 and sticky2.domain.com
resolves to 127.0.0.2 )

At the moment I have the following configuration:
## Backend server 1
cache_peer 192.168.2.3 parent 8100 0 no-query originserver
name=server1
cache_peer_domain server1 sticky1.domain.com

## Backend server 2
cache_peer 192.168.2.4 parent 8100 0 no-query originserver
name=server2
cache_peer_domain server2 sticky2.domain.com


This results in:

a web request comes to apache ( without a host cookie ) on
my.domain.com
the request will be proxied to a balancer member ( based on the
balancing policy )
this vhost creates a cookie ( like: BALANCEID: balancer.sticky1 or
BALANCEID: balancer.sticky2 )
the vhosts proxy the request to http://sticky1.domain.com:3389 or
http://sticky2.domain.com:3389

-- The request reaches squid --

the request comes to the cache_peer depending on the domain ( sticky1
of sticky2 )
squid delivers the page ( or from cache, or from originserver )

In this case squid can deliver a 500 or 503 if the requested backend
server is down.
So if my request has a cookie for sticky1.domain.com and 192.168.2.3
originserver is down, the users with this cookie get an error.
So if my request has a cookie for sticky2.domain.com and 192.168.2.4
originserver is down, the users with this cookie get an error.


What I try, but can't get working:

every cache_peer_domain has 2 cache_peers:

## Backend server 1
cache_peer 192.168.2.3 parent 8100 0 no-query originserver
name=server1
cache_peer 192.168.2.4 parent 8100 0 no-query originserver name=server2
// This server doesn't have the session information
cache_peer_domain server1 sticky1.domain.com

## Backend server 2
cache_peer 192.168.2.4 parent 8100 0 no-query originserver
name=server2
cache_peer 192.168.2.3 parent 8100 0 no-query originserver name=server1
// This server doesn't have the session information
cache_peer_domain server2 sticky2.domain.com

But the cache peer which doesn't have the session may only be used if the
other cache_peer is down.

Is this possible with squid 2.6?




Re: [squid-users] No password to internal addresses

2008-11-10 Thread Henrik Nordstrom
On Mon, 2008-11-10 at 09:50 +0100, yagh mur wrote:

> http_access allow to_mynetwork users1
> http_access allow password users1
> http_access allow mynetwork
> http_access deny all


I think the above should be

http_access allow mynetwork to_mynetwork
http_access allow mynetwork users1
http_access deny all

which allows mynetwork anonymous access to the local network without
requiring login, while access to anything else requires a password.


The password acl is redundant as users1 will also ask for
authentication. Any acl requiring a username will ask for authentication
if not already provided. "proxy_auth REQUIRED" is just a magic acl for
"any authenticated user".

Regards
Henrik




Re: [squid-users] Unable to forward this request at this time.

2008-11-10 Thread Henrik Nordstrom
From the error it sounds like it has declared the peer down.

On Mon, 2008-11-10 at 11:35 +0700, ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:
> here is my squid .conf
> ===
> http_port 2210 transparent
> icp_port 3130
> snmp_port 3401
> cache_mgr admin
> emulate_httpd_log off
> cache_replacement_policy heap LFUDA
> maximum_object_size_in_memory 50 KB
> maximum_object_size 50 MB
> 
> http_port 80 accel defaultsite=monitor.gpi-g.com
> cache_peer 202.169.51.118 parent 80 0 no-query originserver name=myAccel
> acl our_sites dstdomain monitor.gpi-g.com
> http_access allow our_sites
> cache_peer_access myAccel allow our_sites
> 
> dead_peer_timeout 10 seconds
> acl QUERY urlpath_regex cgi-bin \?
> no_cache deny QUERY
> visible_hostname gpi-g.com
> cache_mem 5 MB
> memory_pools off
> log_icp_queries on
> buffered_logs on
> quick_abort_min 0 KB
> quick_abort_max 0 KB
> quick_abort_pct 95
> cache_swap_low 70%
> cache_swap_high 90%
> cache_dir aufs /var/spool/squid 4000 16 256
> cache_dir aufs /var/spool/squid1 4000 16 256
> cache_dir aufs /var/spool/squid2 4000 16 256
> cache_dir aufs /var/spool/squid3 4000 16 256
> cache_access_log /var/log/squid/access.log
> cache_log /var/log/squid/cache.log
> cache_store_log /var/log/squid/store.log
> pid_filename /var/run/squid.pid
> forwarded_for on
> half_closed_clients off
> 
> cache_effective_user proxy
> cache_effective_group proxy
> cache_mgr [EMAIL PROTECTED]
> refresh_pattern ^ftp: 1440 20% 10080
> refresh_pattern ^gopher: 1440 0% 1440
> refresh_pattern . 0 20% 4320
> 
> [cut]
> ===
> 
> On Mon, Nov 10, 2008 at 11:28 AM, Amos Jeffries <[EMAIL PROTECTED]> wrote:
> > ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:
> >>
> >> my squid working for other site :(
> >> fyi :
> >> my public IP 202.169.51.118
> >> and monitor.gpi-g.com is 202.169.51.118 too
> >>
> >> and i can browse mail.gpi-g.com - 202.169.51.119
> >> ( same server farm with different server - one line/level with 118 )
> >
> > Then it's probably a configuration problem.
> > By your description (IPs the same) your squid is supposed to be an
> > accelerator for that website. Which means the cache_peer configuration needs
> > to be checked.
> >
> > Amos
> >
> >>
> >> On Mon, Nov 10, 2008 at 11:21 AM, Amos Jeffries <[EMAIL PROTECTED]>
> >> wrote:
> >>>
> >>> ░▒▓ ɹɐzǝupɐɥʞ ɐzɹıɯ ▓▒░ wrote:
> 
 I can't browse from outside or locally
>  
>  ERROR
>  The requested URL could not be retrieved
> 
>  While trying to retrieve the URL: http://monitor.gpi-g.com/
> 
>  The following error was encountered:
> 
>    * Unable to forward this request at this time.
> 
>  This request could not be forwarded to the origin server or to any
>  parent caches. The most likely cause for this error is that:
> 
>    * The cache administrator does not allow this cache to make direct
>  connections to origin servers, and
>    * All configured parent caches are currently unreachable.
> 
>  Your cache administrator is [EMAIL PROTECTED]
>  Generated Mon, 10 Nov 2008 03:18:16 GMT by gpi-g.com
>  (squid/2.6.STABLE18)
>  
> 
> 
> >>> So it seems.
> >>>
> >>> Check your squid configuration and Internet connection.
> >>>
> >>> Amos
> >>> --
> >>> Please be using
> >>>  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
> >>>  Current Beta Squid 3.1.0.1
> >>>
> >>
> >>
> >>
> >
> >
> > --
> > Please be using
> >  Current Stable Squid 2.7.STABLE5 or 3.0.STABLE10
> >  Current Beta Squid 3.1.0.1
> >
> 
> 
> 




Re: [squid-users] Squid memory usage

2008-11-10 Thread nitesh naik
Henrik,

I read the FAQ and implemented most of the suggestions to reduce
memory usage. I am not much concerned about memory usage as there is plenty
of available memory, but the issue is that CPU usage goes up to 100%
and slows down squid's response once squid grows beyond the allocated
cache_mem size. Does that mean squid is spending most of its time
releasing objects from the cache? Most of the objects stored in the cache
have a TTL of 1 hour.



Following are few lines from squid.conf file.

http_port 0.0.0.0:80 accel defaultsite=s1.xyz.com vhost protocol=http
cache_peer 10.0.0.175 Parent 80 0 no-query round-robin originserver
monitorurl=http://10.0.0.175:80/healthcheck.gif
cache_peer 10.0.0.177 Parent 80 0 no-query round-robin originserver
monitorurl=http://10.0.0.177:80/healthcheck.gif
cache_peer 10.0.0.179 Parent 80 0 no-query round-robin originserver
monitorurl=http://10.0.0.179:80/healthcheck.gif
cache_peer 10.0.0.181 Parent 80 0 no-query round-robin originserver
monitorurl=http://10.0.0.181:80/healthcheck.gif
dead_peer_timeout 10 seconds
hierarchy_stoplist cgi-bin
hierarchy_stoplist ?
cache_mem 4294967296 bytes
maximum_object_size_in_memory 1048576 bytes
memory_replacement_policy lru
cache_replacement_policy lru
cache_dir null /empty
cache_swap_low 60
cache_swap_high 80
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern (cgi-bin|\?) 0 0% 0
refresh_pattern . 1800 20% 3600

Regards
Nitesh



On Sat, Nov 8, 2008 at 2:06 AM, Henrik Nordstrom
<[EMAIL PROTECTED]> wrote:
> Have you read the faq section on memory usage?
>
>
>
> On Fri, 2008-11-07 at 20:02 +0530, nitesh naik wrote:
>> Henrik / Amos,
>>
>> Do you all think I should reduce cache_mem to a lesser value? Squid
>> stops responding as squid's memory usage grows up to 12GB. I have
>> allocated 8 GB cache_mem.
>>
>> We are using 64 bit machine running on Suse 10.1.
>>
>> Regards
>> Nitesh
>>
>>
>> On Thu, Nov 6, 2008 at 11:35 PM, nitesh naik <[EMAIL PROTECTED]> wrote:
>> > Thanks everyone for your reply.
>> >
>> > I went through all these docs and also compiled squid with the dmalloc
>> > option and disabled memory_pools. Squid memory usage grows up to 12GB+
>> > and squid stops responding when we try to rotate logs using squid -k
>> > rotate.
>> >
>> > I want squid up and running all the time even if its memory usage
>> > grows to double the allocated cache_mem value.
>> >
>> > Regards
>> > Nitesh
>> > On Thu, Nov 6, 2008 at 3:58 PM, Adam Carter <[EMAIL PROTECTED]> wrote:
>> >>> Squid memory usage grows beyond allocate cache_mem size of 8 GB.
>> >>
>> >> http://wiki.squid-cache.org/SquidFaq/SquidMemory
>> >>
>> >
>


[squid-users] Run squid2.5.6 and dansguardian got error message: (111) Connection refused

2008-11-10 Thread zhang yikai


hi all, 

I installed squid and it works properly. Then I ran dansguardian; connecting to
squid port 3128 is ok, but when I use dansguardian port 8080 as a proxy, I got
the error message (111) Connection refused. I don't know what the problem is.
Thank you.

[squid-users] URL Filtering for Squid

2008-11-10 Thread Alex Huxham


-Original Message-
From: Alex Huxham 
Sent: 10 November 2008 09:52
To: 'a bv'
Subject: RE: [squid-users] URL Filtering for Squid

Yes and no, there are free ones; however http://urlblacklist.com allows
you a FREE copy, only ONCE though: your first download of the blacklist
is free. It's worth giving this a go; pretty much everything is
contained. You may need to just create a custom blacklist/whitelist to
allow/disallow sites not included in the lists.

QUOTE: TRY FOR FREE: You can try the service by downloading the
blacklist once for free.

Alex

-Original Message-
From: a bv [mailto:[EMAIL PROTECTED] 
Sent: 10 November 2008 09:49
To: Alex Huxham
Subject: Re: [squid-users] URL Filtering for Squid

Am I right that the software is free but we have to pay for the database?
If so, are there any free databases for use with this software?


Regards



2008/11/10 Alex Huxham <[EMAIL PROTECTED]>:
> Yet another yes from me, used within our school, and works perfectly
for
> 900+ students and 200+ staff. Easy to configure, documented well and
> there are plenty of resources on a google search to get you going
> perfectly.
>
> Alex
>
> -Original Message-
> From: Marcus Kool [mailto:[EMAIL PROTECTED]
> Sent: 10 November 2008 09:18
> To: a bv
> Cc: squid-users@squid-cache.org
> Subject: Re: [squid-users] URL Filtering for Squid
>
> I am the author of ufdbGuard which is based on squidGuard.
> ufdbGuard is free and can be used with both free and commercial
> databases.
>
> -Marcus
>
>
> a bv wrote:
>> Hi,
>>
>> What is/are the popular/commonly used open source (and maybe also
>> other free) URL/content filtering solutions/software? And who
>> maintains the URL databases?
>>
>> Regards
>>
>>
>


RE: [squid-users] URL Filtering for Squid

2008-11-10 Thread Alex Huxham
Yet another yes from me, used within our school, and works perfectly for
900+ students and 200+ staff. Easy to configure, documented well and
there are plenty of resources on a google search to get you going
perfectly.

Alex

-Original Message-
From: Marcus Kool [mailto:[EMAIL PROTECTED] 
Sent: 10 November 2008 09:18
To: a bv
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] URL Filtering for Squid

I am the author of ufdbGuard which is based on squidGuard.
ufdbGuard is free and can be used with both free and commercial
databases.

-Marcus


a bv wrote:
> Hi,
> 
> What is/are the popular/commonly used open source (and maybe also
> other free) URL/content filtering solutions/software? And who
> maintains the URL databases?
> 
> Regards
> 
> 


Re: [squid-users] URL Filtering for Squid

2008-11-10 Thread Marcus Kool

I am the author of ufdbGuard which is based on squidGuard.
ufdbGuard is free and can be used with both free and commercial databases.

-Marcus


a bv wrote:

Hi,

What is/are the popular/commonly used open source (and maybe also
other free) URL/content filtering solutions/software? And who
maintains the URL databases?

Regards




RE: [squid-users] URL Filtering for Squid

2008-11-10 Thread Thomas Raef
ufdbGuard is the best. You can get it from www.urlfilterdb.com

I like it because it's fast, updated frequently, easy to use and
customize. And the guy running it is extremely helpful.

No, I am not related. I've used others in the past, and when I finally
came upon this one I felt such relief. The guy really is fantastic at
helping you achieve what you'd like (if possible) with his product.

-Original Message-
From: a bv [mailto:[EMAIL PROTECTED] 
Sent: Monday, November 10, 2008 3:12 AM
To: squid-users@squid-cache.org
Subject: [squid-users] URL Filtering for Squid

Hi,

What is/are the popular/commonly used open source (and maybe also
other free) URL/content filtering solutions/software? And who
maintains the URL databases?

Regards


[squid-users] URL Filtering for Squid

2008-11-10 Thread a bv
Hi,

What is/are the popular/commonly used open source (and maybe also
other free) URL/content filtering solutions/software? And who
maintains the URL databases?

Regards


[squid-users] No password to internal addresses

2008-11-10 Thread yagh mur
Hi all,
I have to configure Squid2 to ask for a password if a user goes
to an external address,
and no password should be asked for if the destination address is internal.

I've this rules:

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8

http_access allow manager localhost
http_access deny manager

acl mynetwork src a.b.c.d/x
acl to_mynetwork dst a.b.c.d/x

external_acl_type NT_global_group %LOGIN c:/squid/libexec/win32_check_group.exe
acl users1 external NT_global_group Users
acl password proxy_auth REQUIRED

http_access allow to_mynetwork users1
http_access allow password users1
http_access allow mynetwork
http_access deny all

But this way squid requires a password even if the dst address is in
my network.
What's the correct way to perform this task?