Re: [squid-users] Troubles with cachemgr.cgi

2007-02-02 Thread Chris Robertson

Henrik Nordstrom wrote:

tor 2007-02-01 klockan 09:28 -0900 skrev Chris Robertson:

  
Testing would indicate otherwise.  At least when using Squid 2.5 and the 
keyword "all".  Using any other name (or none) allowed me to log in 
(despite an error appearing in cache.log), but stated authentication was 
required when I clicked any links in the cachemgr menu.  Here's a 
cache.log snippet:



Works for me.. Tested with cachemgr.cgi from both 2.3.STABLE1 (what I
had on my web server) and current 2.6 sources.

Which version of cachemgr.cgi are you using? Shown at the footer of
every page including the login page..

The only known limitation is that things will get a bit confused if you
use | in the login or password.. and also it's a bit hard to get to
entries protected by another password if the menu is password protected.

Regards
Henrik
  


Tests were performed with cachemgr.cgi 2.3.STABLE3 against Squid 2.5.STABLE13.

Chris


Re: [squid-users] Can not shutdown squid

2007-02-02 Thread Thomas-Martin Seck
* Henrik Nordstrom ([EMAIL PROTECTED]):

> fre 2007-02-02 klockan 11:32 +0530 skrev Santosh Rani:
> > Processor : Intel P4 3.06
> > Intel motherboard
> > SATA Hard Disk
> > 
> > SQUID VERSION: squid 2.6.3
> > My trouble is that I can not stop squid.
> > 
> > I passed the following command,
> > 
> > # /usr/local/etc/rc.d/squid stop
> > 
> > The shutdown_life time option is;
> > 
> >  shutdown_lifetime 5 seconds
> > 
> > Result of this command is:
> > 
> > Stopping squid.
> > squid: ERROR: Could not send signal 15 to process 1014: (3) No such process
> 
> Either Squid is not running, or your pid file has been corrupted

I assume that the OP is using FreeBSD (from the looks of it). If so,
there must be a problem with the pid information: the script first
calls 'squid -k shutdown' and then waits for the 'squid' processes to
disappear. When the shutdown signal is not delivered, the script waits
forever because Squid keeps running.
 
> If Squid is running then use ps / top to find the pid of Squid and kill
> it manually.
> 
>   kill pid
> 
> If it isn't running then something is wrong in your rc script.

Does 'squid -k shutdown' exit != 0 when it fails like in the above case?
If yes, I could bail out early and avoid the infinite waiting loop.
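The bail-out could be sketched like this (a hypothetical fragment, not the actual FreeBSD port's rc script; the shutdown command is passed as a parameter so the control flow is visible):

```shell
# Sketch of the proposed bail-out: if the shutdown command itself fails
# (stale pid file, Squid not running), give up instead of waiting forever
# for the squid processes to disappear.
stop_squid() {
    shutdown_cmd=$1            # the real script would pass: squid -k shutdown
    if ! $shutdown_cmd; then
        echo "shutdown command failed; not waiting" >&2
        return 1
    fi
    echo "shutdown signalled, waiting for processes to exit"
}

stop_squid true                # 'true' stands in for a successful shutdown
```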


RE: [squid-users] squid stuck on old site

2007-02-02 Thread Dave Rhodes
Forgot one:  Does the firewall have a "hosts" file or equivalent?
Dave

-Original Message-
From: John Oliver [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 02, 2007 1:41 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] squid stuck on old site


On Fri, Feb 02, 2007 at 01:25:19PM -0500, Dave Rhodes wrote:
> John,
> I'm guessing you meant that you got the new site when you bypassed 
> squid?

Yup.

> Did the IP address of the site change?  If so, have you cleared DNS 
> cache?  What happens if you go directly to the website's IP through 
> squid?

That was my first thought.  But no, same IP.

I've found something really bizarre... when our internal firewall is set
to transparently redirect all HTTP traffic to the squid server, we see
the old page.  When traffic goes through the squid server because of the
autoconf.pac script, we see the new page.  No, there is no proxying or
caching or anything on the firewall... it's very weird, and I continue
to poke and prod at it.

-- 
***
* John Oliver http://www.john-oliver.net/ *
* *
***


RE: [squid-users] squid stuck on old site

2007-02-02 Thread Dave Rhodes
Is the old site still up and accessible?  If so can you disable it?
What happens then?  Maybe a static route on the firewall to an
alternative IP on the same or another server? What's a trace route show
from a workstation? 

Since it's definitely weird, I'm throwing out some weird possibilities
but, I've had to do stuff like it to get things working the way I
wanted.
Dave

-Original Message-
From: John Oliver [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 02, 2007 1:41 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] squid stuck on old site


On Fri, Feb 02, 2007 at 01:25:19PM -0500, Dave Rhodes wrote:
> John,
> I'm guessing you meant that you got the new site when you bypassed 
> squid?

Yup.

> Did the IP address of the site change?  If so, have you cleared DNS 
> cache?  What happens if you go directly to the website's IP through 
> squid?

That was my first thought.  But no, same IP.

I've found something really bizarre... when our internal firewall is set
to transparently redirect all HTTP traffic to the squid server, we see
the old page.  When traffic goes through the squid server because of the
autoconf.pac script, we see the new page.  No, there is no proxying or
caching or anything on the firewall... it's very weird, and I continue
to poke and prod at it.

-- 
***
* John Oliver http://www.john-oliver.net/ *
* *
***


Re: [squid-users] squid stuck on old site

2007-02-02 Thread John Oliver
On Fri, Feb 02, 2007 at 01:25:19PM -0500, Dave Rhodes wrote:
> John,
> I'm guessing you meant that you got the new site when you bypassed
> squid?

Yup.

> Did the IP address of the site change?  If so, have you cleared DNS
> cache?  What happens if you go directly to the website's IP through
> squid?

That was my first thought.  But no, same IP.

I've found something really bizarre... when our internal firewall is set
to transparently redirect all HTTP traffic to the squid server, we see
the old page.  When traffic goes through the squid server because of the
autoconf.pac script, we see the new page.  No, there is no proxying or
caching or anything on the firewall... it's very weird, and I continue
to poke and prod at it.

-- 
***
* John Oliver http://www.john-oliver.net/ *
* *
***


RE: [squid-users] squid stuck on old site

2007-02-02 Thread Dave Rhodes
John,
I'm guessing you meant that you got the new site when you bypassed
squid?

Did the IP address of the site change?  If so, have you cleared DNS
cache?  What happens if you go directly to the website's IP through
squid?
Dave

-Original Message-
From: John Oliver [mailto:[EMAIL PROTECTED] 
Sent: Friday, February 02, 2007 1:19 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] squid stuck on old site


On Thu, Feb 01, 2007 at 08:27:34PM -0500, Chris Nighswonger wrote:
> >I cleaned out the cache directory and used squid -z to rebuild it, 
> >and was still seeing the same old site.  I have a hard time picturing
> >how that's even possible ;-)
> 
> 
> Maybe you have tried, but have you bypassed squid to see if your 
> browsers can see the new site direct?

Of course... :-)

-- 
***
* John Oliver http://www.john-oliver.net/ *
* *
***


Re: [squid-users] squid stuck on old site

2007-02-02 Thread John Oliver
On Thu, Feb 01, 2007 at 08:27:34PM -0500, Chris Nighswonger wrote:
> >I cleaned out the cache directory and used squid -z to rebuild it, and
> >was still seeing the same old site.  I have a hard time picturing how
> >that's even possible ;-)
> 
> 
> Maybe you have tried, but have you bypassed squid to see if your
> browsers can see the new site direct?

Of course... :-)

-- 
***
* John Oliver http://www.john-oliver.net/ *
* *
***


Re: [squid-users] commBind: Cannot bind socket FD 98 to *:0: (98) Address already in use

2007-02-02 Thread Stefan Bohm
Hi Henrik,

thanks for your hints.

I increased the ip_local_port_range as suggested. It was 10k to 32k before.
But, how can I maintain persistent connections from the front-side
Apache to the squid?

The setup is as follows:

World -> apache2 (using mod_rewrite and mod_proxy) -> squid -> App-Server

As far as I understand, the connection from apache2 to squid should be 
persistent, right?
This might be a problem, because Apache's mod_proxy doesn't seem to
support persistent connections. If I'm wrong, can anyone give me a clue
how to get this working?

Regards
Stefan

Henrik Nordstrom wrote:
> fre 2007-02-02 klockan 14:11 +0100 skrev Stefan Bohm:
>> Hi all,
>>
>> yesterday we had some strange problems running our reverse-proxy squid 
>> cluster.
>> During some high-traffic sports event, squid starts to emit messages
>> like:
>>
>> commBind: Cannot bind socket FD 98 to *:0: (98) Address already in use
> 
> You have run out of free ports, all available ports occupied by
> TIME_WAIT sockets.
> 
> Things to look into
> 
> 1. Make sure you internally use persistent connections between Squid and
> the web servers. This cuts down on the number of initiated connections/s
> considerably.
> 
> 2. Configure the unassigned port range as big as possible in your OS. On
> Linux this is set in /proc/sys/net/ipv4/ip_local_port_range. The biggest
> possible range is 1024-65535 and can sustain up to at least 500
> connections/s continuous load squid->webservers.
> 
> Regards
> Henrik


Re: [squid-users] dstdomain/port acl question

2007-02-02 Thread Chris Nighswonger

On 2/2/07, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:

tor 2007-02-01 klockan 16:26 -0500 skrev Chris Nighswonger:
> The following is my setup to handle the direct connections:
>
> acl streamserver dstdomain .streamserver.com
> acl streamport port 1234
> http_access deny streamserver streamport
> deny_info http://192.168.0.x:8000/mountpt streamserver streamport

Where is this in relation to your other http_access rules?


http_access allow manager localhost
http_access allow manager masada1
http_access deny manager
http_access deny CONNECT !SSL_ports
http_access allow localhost UnauthAccess
http_access allow localhost WindowsUpdate
http_access allow localhost Java
http_access allow cnighswonger-lt
http_access allow localhost PURGE
http_access allow localhost AuthorizedUsers
# Deny connections from inside to the outside webradio stream and
# redirect them to the inside stream.
# The first two entries handle direct stream requests. The last two
# handle file list requests.
http_access deny streamserver streamport
deny_info http://192.168.0.238:8000/mountpt streamserver streamport
http_access deny streamlink
deny_info http://192.168.0.238:8000/list.m3u streamlink
#
http_access deny !Safe_ports
http_access deny all



And what is said in access.log?


The access.log shows two TCP_DENIED and one TCP_MISS all looking at
the outside streaming server.

1170362412.967  5 127.0.0.1 TCP_DENIED/407 1903 GET
http://streamserver.com:7590/ - NONE/- text/html
1170362413.015 41 127.0.0.1 TCP_DENIED/407 2136 GET
http://streamserver.com:7590/ - NONE/- text/html
1170362431.237  1 127.0.0.1 TCP_DENIED/407 1903 GET
http://streamserver.com:7590/ - NONE/- text/html
1170362431.270  18222 127.0.0.1 TCP_MISS/600 4515 GET
http://streamserver.com:7590/ Administrator DIRECT/69.5.81.71 -
1170362431.285  5 127.0.0.1 TCP_DENIED/407 2136 GET
http://streamserver.com:7590/ - NONE/- text/html
1170362431.530  1 127.0.0.1 TCP_DENIED/407 1903 GET
http://streamserver.com:7590/ - NONE/- text/html
1170362431.532    243 127.0.0.1 TCP_MISS/600 8859 GET
http://streamserver.com:7590/ Administrator DIRECT/69.5.81.71 -



But for this task of directing users to a local mirror even if they
request the original Internet address I'd recommend you to use a url
rewriter. This way you can get the local mirror completely transparent
to your users, not even knowing they access the local mirror.


I have had some difficulty setting up for two redirectors (adzapper
and squirm). I saw your post on this route and decided to give it a
try. :)

Chris


Re: [squid-users] File Descriptors

2007-02-02 Thread Michel Santos

Henrik Nordstrom disse na ultima mensagem:
> fre 2007-02-02 klockan 10:54 +0800 skrev Adrian Chadd:
>
>> If your system or process FD limits are lower than what Squid believes
>> it
>> to be, then yup. It'll get unhappy.
>
> Only temporarily. It automatically adjusts fd usage to what the system
> can sustain when hitting the limit (see fdAdjustReserved)
>
> But this also causes problems if there is a temporary system-wide
> shortage of filedescriptors due to other processes opening too many
> files. Once Squid has detected a filedescriptor limitation it won't go
> above the number of filedescriptor it used at that time, and you need to
> restart Squid to recover after fixing the cause to the system wide
> filedescriptor shortage.
>


In a former msg you said:

>When Squid sees it's short of filedescriptors it stops accepting
> new requests, focusing on finishing what it has already accepted.

Isn't this conflicting with what you said before?

Does squid recover, or does it need to be restarted?

Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-mail and data hosting service for professionals.




Re: [squid-users] commBind: Cannot bind socket FD 98 to *:0: (98) Address already in use

2007-02-02 Thread Henrik Nordstrom
fre 2007-02-02 klockan 14:11 +0100 skrev Stefan Bohm:
> Hi all,
> 
> yesterday we had some strange problems running our reverse-proxy squid 
> cluster.
> During some high-traffic sports event, squid starts to emit messages
> like:
> 
> commBind: Cannot bind socket FD 98 to *:0: (98) Address already in use

You have run out of free ports, all available ports occupied by
TIME_WAIT sockets.

Things to look into

1. Make sure you internally use persistent connections between Squid and
the web servers. This cuts down on the number of initiated connections/s
considerably.

2. Configure the unassigned port range as big as possible in your OS. On
Linux this is set in /proc/sys/net/ipv4/ip_local_port_range. The biggest
possible range is 1024-65535 and can sustain up to at least 500
connections/s continuous load squid->webservers.
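Point 2 translates to something like the following on Linux (a sketch; the sysctl path is standard, but check your distribution):

```
# inspect the current ephemeral port range
cat /proc/sys/net/ipv4/ip_local_port_range

# widen it for the running system (requires root)
sysctl -w net.ipv4.ip_local_port_range="1024 65535"

# or persistently, in /etc/sysctl.conf:
net.ipv4.ip_local_port_range = 1024 65535
```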

Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Can not shutdown squid

2007-02-02 Thread Henrik Nordstrom
fre 2007-02-02 klockan 11:32 +0530 skrev Santosh Rani:
> Processor : Intel P4 3.06
> Intel motherboard
> SATA Hard Disk
> 
> SQUID VERSION: squid 2.6.3
> My trouble is that I can not stop squid.
> 
> I passed the following command,
> 
> # /usr/local/etc/rc.d/squid stop
> 
> The shutdown_life time option is;
> 
>  shutdown_lifetime 5 seconds
> 
> Result of this command is:
> 
> Stopping squid.
> squid: ERROR: Could not send signal 15 to process 1014: (3) No such process

Either Squid is not running, or your pid file has been corrupted

If Squid is running then use ps / top to find the pid of Squid and kill
it manually.

  kill pid

If it isn't running then something is wrong in your rc script.

Regards
Henrik




Re: [squid-users] File Descriptors

2007-02-02 Thread Henrik Nordstrom
fre 2007-02-02 klockan 10:54 +0800 skrev Adrian Chadd:

> If your system or process FD limits are lower than what Squid believes it
> to be, then yup. It'll get unhappy.

Only temporarily. It automatically adjusts fd usage to what the system
can sustain when hitting the limit (see fdAdjustReserved)

But this also causes problems if there is a temporary system-wide
shortage of filedescriptors due to other processes opening too many
files. Once Squid has detected a filedescriptor limitation it won't go
above the number of filedescriptor it used at that time, and you need to
restart Squid to recover after fixing the cause to the system wide
filedescriptor shortage.
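To see whether the system-wide limit (rather than Squid's own) is the bottleneck, checks like these can help (Linux paths; values vary per system):

```shell
# Per-process soft limit, inherited by Squid if started from this shell
ulimit -n
# Kernel-wide filedescriptor limit (Linux only)
cat /proc/sys/fs/file-max 2>/dev/null || true
```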

Regards
Henrik




Re: [squid-users] File Descriptors

2007-02-02 Thread Henrik Nordstrom
tor 2007-02-01 klockan 20:01 -0600 skrev Matt:
> What does Squid do or act like when it's out of file descriptors?

When Squid sees it's short of filedescriptors it stops accepting new
requests, focusing on finishing what it has already accepted.

And long before there is a shortage it disables the use of persistent
connections to limit the pressure on concurrent filedescriptors.

> If cachemgr says it still has some left could it still really be out?

If you get to cachemgr then it's not out of filedescriptors, at least
not right then...

Regards
Henrik




Re: [squid-users] Log File: Field Meanings

2007-02-02 Thread Henrik Nordstrom
tor 2007-02-01 klockan 16:48 -0800 skrev Karl R. Balsmeier:
> Hi Henrik, all,
> 
> 469 65.80.145.195 TCP_HIT/200 12615 GET
> 
> When looking at the access_log -what does the first field [469] mean, 
> and what does the field after TCP_HIT/200 [12615] mean?

See FAQ.

Regards
Henrik
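For reference, the FAQ's answer comes down to the native access.log field order (timestamp, elapsed ms, client, result/status, bytes, method, URL, ...), so 469 is the request duration in milliseconds and 12615 is the reply size in bytes. A small parsing sketch:

```python
# Parse one line of Squid's default ("native") access.log format:
# timestamp elapsed-ms client result/status bytes method URL ident hierarchy type
def parse_native(line):
    f = line.split()
    return {
        "timestamp": float(f[0]),   # UNIX time of request completion
        "elapsed_ms": int(f[1]),    # time spent serving the request
        "client": f[2],
        "result": f[3],             # e.g. TCP_HIT/200
        "bytes": int(f[4]),         # reply size sent to the client
        "method": f[5],
        "url": f[6],
    }
```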




Re: [squid-users] dstdomain/port acl question

2007-02-02 Thread Henrik Nordstrom
tor 2007-02-01 klockan 16:26 -0500 skrev Chris Nighswonger:
> The following is my setup to handle the direct connections:
> 
> acl streamserver dstdomain .streamserver.com
> acl streamport port 1234
> http_access deny streamserver streamport
> deny_info http://192.168.0.x:8000/mountpt streamserver streamport

Where is this in relation to your other http_access rules?

And what is said in access.log?

But for this task of directing users to a local mirror even if they
request the original Internet address I'd recommend you to use a url
rewriter. This way you can get the local mirror completely transparent
to your users, not even knowing they access the local mirror.

Regards
Henrik
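A minimal url_rewrite_program along the lines Henrik suggests could look like this (a sketch only: the hostname and mirror URL are examples from this thread, not a tested setup):

```python
#!/usr/bin/env python
# Map requests for the outside stream server to the local mirror.
# Squid 2.x feeds one request per line: URL client_ip/fqdn ident method
# (Squid 2.6 appends a urlgroup field); the reply is the rewritten URL.
import sys

MIRROR = "http://192.168.0.238:8000/mountpt"

def rewrite(url):
    if "streamserver.com" in url:
        return MIRROR
    return url  # returning the URL unchanged means "do not rewrite"

if __name__ == "__main__":
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        sys.stdout.write(rewrite(fields[0]) + "\n")
        sys.stdout.flush()  # squid expects an immediate, unbuffered reply
```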




Re: [squid-users] Troubles with cachemgr.cgi

2007-02-02 Thread Henrik Nordstrom
tor 2007-02-01 klockan 09:28 -0900 skrev Chris Robertson:

> Testing would indicate otherwise.  At least when using Squid 2.5 and the 
> keyword "all".  Using any other name (or none) allowed me to log in 
> (despite an error appearing in cache.log), but stated authentication was 
> required when I clicked any links in the cachemgr menu.  Here's a 
> cache.log snippet:

Works for me.. Tested with cachemgr.cgi from both 2.3.STABLE1 (what I
had on my web server) and current 2.6 sources.

Which version of cachemgr.cgi are you using? Shown at the footer of
every page including the login page..

The only known limitation is that things will get a bit confused if you
use | in the login or password.. and also it's a bit hard to get to
entries protected by another password if the menu is password protected.

Regards
Henrik




RE: [squid-users] Reverse Proxy Sticky Sessions

2007-02-02 Thread Henrik Nordstrom
tor 2007-02-01 klockan 17:05 -0500 skrev Peters, Noah:

> I discovered that the problem with the https goes away when I split
> the config and run two separate instances of squid, one for https and
> one for http.  This is an acceptable configuration for me.

Odd.

Check your cache_peer_access/domain rules. Maybe you forgot to tell
Squid which peers should be used on which requests?

Regards
Henrik




Re: [squid-users] WCCP issue

2007-02-02 Thread Awie
Thanks a lot Bryan !

I will try it very soon and report back to this list.

Thx & Rgds,

Awie


- Original Message - 
From: "Bryan Shoebottom" <[EMAIL PROTECTED]>
To: "Awie" <[EMAIL PROTECTED]>
Cc: "Squid-users" 
Sent: Friday, February 02, 2007 22:33
Subject: Re: [squid-users] WCCP issue


> Awie,
> 
> 1. yes
> 2. use the wccp2_router instead of wccp_router
> 3. yes
> 4/5. You'll have to read up on this one, i ended up moving from a 2.4
> kernel with a wccp module compiled to a 2.6.9+ kernel with the ip_gre
> module.  I guess try your setup and if it doesn't work you can hopefully
> upgrade or test on another system.
> 
> Thanks,
>  Bryan
> 
> 
> On Fri, 2007-02-02 at 09:24 -0500, Awie wrote:
> > Hi Bryan,
> > 
> > Thanks for your suggestion. I will do your suggest but I want to make
> > sure
> > before I do the job. Please correct me.
> > 
> > 1. I will compile new version of Squid (ie. 2.6S9) with
> > "--enable-wccpv2"
> > option
> > 2. Change the squid.conf to match the setting parameter of new Squid
> > version
> > 3. Change the Cisco Router WCCP from version 1 to version 2.
> > 4. Keep the current Linux kernel 2.4.34 that support WCCPv2
> > 5. Keep the current ip_wccp.o (version 1.7) that already support
> > WCCPv2
> > 
> > Please advise
> > 
> > Thx & Rgds,
> > 
> > Awie
> > 
> > 
> > - Original Message -
> > From: "Bryan Shoebottom" <[EMAIL PROTECTED]>
> > To: "Awie" <[EMAIL PROTECTED]>
> > Cc: "Squid-users" 
> > Sent: Friday, February 02, 2007 21:33
> > Subject: Re: [squid-users] WCCP issue
> > 
> > 
> > > Awie,
> > >
> > > I had a similar problem with only a couple sites, specifically mail
> > ones
> > > like hotmail, gmail, etc.  I found a lot of documentation to work
> > with
> > > the redirect rule in the firewall on the squid server but this
> > didn't
> > > make a difference.  I finally compiled in WCCPv2 support (into the
> > 2.5S
> > > code at the time), and all my problems went away!  I am currently
> > using
> > > 2.6S9 on one server and 2.6S4 on another with --enable-wccpv2 and
> > have
> > > had no complaints.
> > > Hope this helps.
> > >
> > > --
> > > Thanks,
> > >
> > > Bryan Shoebottom CCNA
> > > Network & Systems Analyst
> > > Network Services & Computer Operations
> > > Fanshawe College
> > > Phone:  (519) 452-4430 x4904
> > > Fax:(519) 453-3231
> > > [EMAIL PROTECTED]
> > >
> > > On Fri, 2007-02-02 at 04:40 -0500, Awie wrote:
> > > > Dear all,
> > > >
> > > > After running for more than 1 year, our proxy cannot serve
> > normally
> > > > all
> > > > request to hotmail.com (only, Yahoo mail is OK). Our proxy is
> > running
> > > > Squid
> > > > 2.5S14 with WCCPv1 (WCCP module version 1.7). We tried to upgrade
> > to
> > > > 2.6S9
> > > > but we got same result.
> > > >
> > > > If we run the proxy as non-transparent or run as transparent by
> > using
> > > > "route
> > > > map" the request to hotmail.com can be served normally. Problem
> > was
> > > > always
> > > > happen with WCCP.
> > > >
> > > > Would you tell me what I should do to solve the problem? I prefer
> > to
> > > > use
> > > > WCCP instead of "route map" that consume more CPU resource and
> > create
> > > > problem when our proxy dies.
> > > >
> > > > Thanks a lot for your kind help.
> > > >
> > > > Thx & Rgds,
> > > >
> > > > Awie
> > > >
> > > >
> > > >
> > > >
> > >
> > 
> > 
> > 
> 



Re: [squid-users] WCCP issue

2007-02-02 Thread Bryan Shoebottom
Awie,

1. yes
2. use the wccp2_router instead of wccp_router
3. yes
4/5. You'll have to read up on this one; I ended up moving from a 2.4
kernel with a wccp module compiled in to a 2.6.9+ kernel with the ip_gre
module.  Try your setup, and if it doesn't work you can hopefully
upgrade or test on another system.
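Put together, the build and configuration changes would look roughly like this (the router address is a placeholder; check the directive values against your squid.conf documentation):

```
# build Squid with WCCPv2 support:
#   ./configure --enable-wccpv2 ...

# squid.conf: replace the WCCPv1 directive
#   wccp_router 192.0.2.1
# with its v2 counterpart:
wccp2_router 192.0.2.1        # placeholder: your router's address
wccp2_forwarding_method 1     # 1 = GRE encapsulation

# On Linux 2.6 kernels the stock ip_gre module handles the GRE-encapsulated
# traffic; the ip_wccp module from 2.4 setups is no longer needed.
```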

Thanks,
 Bryan


On Fri, 2007-02-02 at 09:24 -0500, Awie wrote:
> Hi Bryan,
> 
> Thanks for your suggestion. I will do your suggest but I want to make
> sure
> before I do the job. Please correct me.
> 
> 1. I will compile new version of Squid (ie. 2.6S9) with
> "--enable-wccpv2"
> option
> 2. Change the squid.conf to match the setting parameter of new Squid
> version
> 3. Change the Cisco Router WCCP from version 1 to version 2.
> 4. Keep the current Linux kernel 2.4.34 that support WCCPv2
> 5. Keep the current ip_wccp.o (version 1.7) that already support
> WCCPv2
> 
> Please advise
> 
> Thx & Rgds,
> 
> Awie
> 
> 
> - Original Message -
> From: "Bryan Shoebottom" <[EMAIL PROTECTED]>
> To: "Awie" <[EMAIL PROTECTED]>
> Cc: "Squid-users" 
> Sent: Friday, February 02, 2007 21:33
> Subject: Re: [squid-users] WCCP issue
> 
> 
> > Awie,
> >
> > I had a similar problem with only a couple sites, specifically mail
> ones
> > like hotmail, gmail, etc.  I found a lot of documentation to work
> with
> > the redirect rule in the firewall on the squid server but this
> didn't
> > make a difference.  I finally compiled in WCCPv2 support (into the
> 2.5S
> > code at the time), and all my problems went away!  I am currently
> using
> > 2.6S9 on one server and 2.6S4 on another with --enable-wccpv2 and
> have
> > had no complaints.
> > Hope this helps.
> >
> > --
> > Thanks,
> >
> > Bryan Shoebottom CCNA
> > Network & Systems Analyst
> > Network Services & Computer Operations
> > Fanshawe College
> > Phone:  (519) 452-4430 x4904
> > Fax:(519) 453-3231
> > [EMAIL PROTECTED]
> >
> > On Fri, 2007-02-02 at 04:40 -0500, Awie wrote:
> > > Dear all,
> > >
> > > After running for more than 1 year, our proxy cannot serve
> normally
> > > all
> > > request to hotmail.com (only, Yahoo mail is OK). Our proxy is
> running
> > > Squid
> > > 2.5S14 with WCCPv1 (WCCP module version 1.7). We tried to upgrade
> to
> > > 2.6S9
> > > but we got same result.
> > >
> > > If we run the proxy as non-transparent or run as transparent by
> using
> > > "route
> > > map" the request to hotmail.com can be served normally. Problem
> was
> > > always
> > > happen with WCCP.
> > >
> > > Would you tell me what I should do to solve the problem? I prefer
> to
> > > use
> > > WCCP instead of "route map" that consume more CPU resource and
> create
> > > problem when our proxy dies.
> > >
> > > Thanks a lot for your kind help.
> > >
> > > Thx & Rgds,
> > >
> > > Awie
> > >
> > >
> > >
> > >
> >
> 
> 
> 



Re: [squid-users] WCCP issue

2007-02-02 Thread Awie
Hi Bryan,

Thanks for your suggestion. I will follow it, but I want to make sure
before I do the job. Please correct me if I am wrong.

1. I will compile new version of Squid (ie. 2.6S9) with "--enable-wccpv2"
option
2. Change the squid.conf to match the setting parameter of new Squid version
3. Change the Cisco Router WCCP from version 1 to version 2.
4. Keep the current Linux kernel 2.4.34 that supports WCCPv2
5. Keep the current ip_wccp.o (version 1.7) that already supports WCCPv2

Please advise

Thx & Rgds,

Awie


- Original Message - 
From: "Bryan Shoebottom" <[EMAIL PROTECTED]>
To: "Awie" <[EMAIL PROTECTED]>
Cc: "Squid-users" 
Sent: Friday, February 02, 2007 21:33
Subject: Re: [squid-users] WCCP issue


> Awie,
>
> I had a similar problem with only a couple sites, specifically mail ones
> like hotmail, gmail, etc.  I found a lot of documentation to work with
> the redirect rule in the firewall on the squid server but this didn't
> make a difference.  I finally compiled in WCCPv2 support (into the 2.5S
> code at the time), and all my problems went away!  I am currently using
> 2.6S9 on one server and 2.6S4 on another with --enable-wccpv2 and have
> had no complaints.
> Hope this helps.
>
> -- 
> Thanks,
>
> Bryan Shoebottom CCNA
> Network & Systems Analyst
> Network Services & Computer Operations
> Fanshawe College
> Phone:  (519) 452-4430 x4904
> Fax:(519) 453-3231
> [EMAIL PROTECTED]
>
> On Fri, 2007-02-02 at 04:40 -0500, Awie wrote:
> > Dear all,
> >
> > After running for more than 1 year, our proxy cannot serve normally
> > all
> > request to hotmail.com (only, Yahoo mail is OK). Our proxy is running
> > Squid
> > 2.5S14 with WCCPv1 (WCCP module version 1.7). We tried to upgrade to
> > 2.6S9
> > but we got same result.
> >
> > If we run the proxy as non-transparent or run as transparent by using
> > "route
> > map" the request to hotmail.com can be served normally. Problem was
> > always
> > happen with WCCP.
> >
> > Would you tell me what I should do to solve the problem? I prefer to
> > use
> > WCCP instead of "route map" that consume more CPU resource and create
> > problem when our proxy dies.
> >
> > Thanks a lot for your kind help.
> >
> > Thx & Rgds,
> >
> > Awie
> >
> >
> >
> >
>




Re: [squid-users] Cache problem with SSL bridge

2007-02-02 Thread Henrik Nordstrom
fre 2007-02-02 klockan 12:52 +0100 skrev [EMAIL PROTECTED]:
> I'm not sure I fully understand the problem: to me, max-age=0 just tells
> squid to revalidate the data on each request, not to reload it if the
> cached data is still good.
> Above all, without SSL, in plain HTTP mode, I have the same headers with
> max-age=0, but with an ethernet sniffer I can see squid revalidating the
> data without reloading it. That seems right to me.
> Don't you think it could be a bug in squid, but only in https mode?

Works for me when I try to replicate your setup using the headers from
your log.

First request gets forwarded without If-Modified-Since, second request
has a If-Modified-Since.

GET /a HTTP/1.0
If-Modified-Since: Thu, 11 Jan 2007 10:44:04 GMT
Via: 1.0 henrik:3128 (squid/2.6.STABLE8-CVS)
X-Forwarded-For: 127.0.0.1
Host: test.ssl
Cache-Control: max-age=259200
Connection: keep-alive


https_port 127.0.0.1:4443 cert=/home/henrik/squid/etc/cert1.pem
key=/home/henrik/squid/etc/cert1_key.pem accel defaultsite=test.ssl

cache_peer 127.0.0.1 parent 4433 0 no-query originserver name=ssl
no-digest no-netdb-exchange ssl
sslcafile=/home/henrik/squid/etc/test.pem



Technically there should be a If-None-Match as well as the object has an
ETag, but that's missing for some reason. Probably not implemented for
non-Vary:ing objects..
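For illustration, a fully conditional revalidation carries both validators when available (a sketch; the header names are standard HTTP/1.1):

```python
# Build the conditional headers for revalidating a cached object.
# A complete revalidation sends If-Modified-Since for Last-Modified,
# and If-None-Match when the object carries an ETag - the header that
# is missing from the request above.
def conditional_headers(last_modified=None, etag=None):
    headers = {}
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    if etag:
        headers["If-None-Match"] = etag
    return headers
```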

Regards
Henrik




Re: [squid-users] WCCP issue

2007-02-02 Thread Bryan Shoebottom
Awie,

I had a similar problem with only a couple sites, specifically mail ones
like hotmail, gmail, etc.  I found a lot of documentation to work with
the redirect rule in the firewall on the squid server but this didn't
make a difference.  I finally compiled in WCCPv2 support (into the 2.5S
code at the time), and all my problems went away!  I am currently using
2.6S9 on one server and 2.6S4 on another with --enable-wccpv2 and have
had no complaints.
Hope this helps.

-- 
Thanks,

Bryan Shoebottom CCNA
Network & Systems Analyst
Network Services & Computer Operations
Fanshawe College
Phone:  (519) 452-4430 x4904
Fax:(519) 453-3231
[EMAIL PROTECTED]

On Fri, 2007-02-02 at 04:40 -0500, Awie wrote:
> Dear all,
> 
> After running for more than 1 year, our proxy cannot serve normally
> all
> request to hotmail.com (only, Yahoo mail is OK). Our proxy is running
> Squid
> 2.5S14 with WCCPv1 (WCCP module version 1.7). We tried to upgrade to
> 2.6S9
> but we got same result.
> 
> If we run the proxy as non-transparent or run as transparent by using
> "route
> map" the request to hotmail.com can be served normally. Problem was
> always
> happen with WCCP.
> 
> Would you tell me what I should do to solve the problem? I prefer to
> use
> WCCP instead of "route map" that consume more CPU resource and create
> problem when our proxy dies.
> 
> Thanks a lot for your kind help.
> 
> Thx & Rgds,
> 
> Awie
> 
> 
> 
> 



[squid-users] commBind: Cannot bind socket FD 98 to *:0: (98) Address already in use

2007-02-02 Thread Stefan Bohm
Hi all,

yesterday we had some strange problems running our reverse-proxy squid cluster.
During some high-traffic sports event, squid starts to emit messages
like:

commBind: Cannot bind socket FD 98 to *:0: (98) Address already in use

In front of every squid resides an apache2 (on the same hardware, to handle
redirects and that stuff) that logs messages like:
(99)Cannot assign requested address: proxy: HTTP: attempt to connect to 
127.0.0.2:80 (*) failed

When the traffic came back to normal, the problems vanished.

What's that supposed to mean? Any ideas?


Thank you for your help
Stefan


[squid-users] Squid 2.6 and SCAVR

2007-02-02 Thread Dave Holland
Just a quick note to help anyone else who's trying to get SCAVR
http://www.jackal-net.at/tiki-read_article.php?articleId=1
working with Squid 2.6.

It appears the redirect_program (url_rewrite_program) is now passed an
extra field, the urlgroup. Line 683 of SquidClamAV_Redirector.py needs
to be changed to read:

  url, src_address, ident, method, urlgroup = split(squidline)

to make things work.
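A version-tolerant variant of that line (hypothetical, not from SCAVR itself) would cope with both the four-field Squid 2.5 format and the five-field 2.6 format:

```python
# Unpack a redirector input line from either Squid 2.5 (four fields)
# or Squid 2.6 (five fields, the extra one being the urlgroup).
def parse_redirector_line(squidline):
    fields = squidline.split()
    url, src_address, ident, method = fields[:4]
    urlgroup = fields[4] if len(fields) > 4 else None  # absent on 2.5
    return url, src_address, ident, method, urlgroup
```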

Dave
-- 
** Dave Holland ** Systems Support -- Special Projects Team **
** 01223 496923 ** Sanger Institute, Hinxton, Cambridge, UK **


[squid-users] round-robin parents and ip-based sessions

2007-02-02 Thread Markus.Rietzler

we use a few parent squids for load-balancing in our DMZ, so our inner
squids use

cache_peer proxyA round-robin ...
cache_peer proxyB round-robin ...

when a website uses IP-based sessions, it will see either proxyA's or
proxyB's IP address with each request - and so the sessions fail...
as we want both load-balancing and reliability, it would be a bad idea
to route one site to only one - always the same - proxy by default...
anything we can do? or is this really a problem? how would a
proxy cluster be set up to show only one outgoing IP address?

thanks

markus
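One way to get session stickiness while keeping more than one parent is the sourcehash peer-selection method in Squid 2.6, which hashes the client address so each client always reaches the same parent. A sketch (hostnames and ports are placeholders):

```
cache_peer proxyA.example.com parent 3128 0 no-query sourcehash
cache_peer proxyB.example.com parent 3128 0 no-query sourcehash
```

This preserves failover (a dead parent drops out of the hash) at the cost of somewhat less even balancing than round-robin.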



Re: [squid-users] Cache problem with SSL bridge

2007-02-02 Thread [EMAIL PROTECTED]
I'm not sure I understand the problem well: to me, max-age=0 just tells
Squid to revalidate the data on each request, not to fetch it again if the
cached copy is still good.
Above all, without SSL, in plain HTTP mode, I get the same headers with
max-age=0, but with an Ethernet sniffer I can see Squid revalidating the
data without re-fetching it. That seems right to me.
Don't you think it could be a bug in Squid that only shows up in HTTPS mode?

Regards
Philippe

> Here is an example of the headers in https mode :
> 
> 1169479056.660   1485 192.168.7.1 TCP_MISS/200 1475123 GET
> https://ged.myoffice.fr/EDM/Documents/vmscsi-1.2.0.4.flp -
> FIRST_UP_PARENT/192.168.8.3 application/octet-stream [Accept:
> image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*\r\nReferer:
> https://ged.myoffice.fr/EDM/Documents/\r\nAccept-Language: fr\r
> \nAccept-Encoding: gzip, deflate\r\nUser-Agent: Mozilla/4.0
> (compatible; MSIE 6.0; Windows NT 5.0)\r\nHost: ged.myoffice.fr\r
> \nConnection: Keep-Alive\r\nCookie:
> ASPSESSIONIDCSBAAQBS=EEJPIDGCECCLGIODIPBJAJLI\r\nAuthorization: Basic
> QWRtaW5pc3RyYXRvcjpyb290\r\n] [HTTP/1.1 200 OK\r\nServer:
> Microsoft-IIS/5.0\r\nDate: Mon, 22 Jan 2007 15:17:35 GMT\r
> \nMicrosoftTahoeServer: 1.0\r\nCache-Control: public\r\nConnection:
> keep-alive\r\nContent-Type: application/octet-stream\r
> \nContent-Length: 1474560\r\nETag:
> "129fae11d5da094f989294fa5051a15a00016a72"\r\nLast-Modified: Thu,
> 11 Jan 2007 10:44:04 GMT\r\nAccept-Ranges: bytes\r\nMS-WebStorage:
> 6.0.6511\r\nCache-Control: max-age=0\r\n\r]

There is authentication.. but the response is marked public, so that's OK.

But the last Cache-Control: max-age=0 is not so good. max-age=0 tells
Squid that the content must be revalidated with the origin server on
each request, making it effectively uncachable. You can override this in
refresh_pattern, but I strongly advise configuring the server to return
better cache-control headers if you want the content cached, as this also
applies to browser caches.
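A refresh_pattern override along those lines could look like this sketch (the pattern is hypothetical, and option semantics differ between Squid releases, so check the refresh_pattern documentation for your version):

```
# Treat matching objects as fresh for up to a day despite the server's
# cache-control headers (hypothetical pattern; use with care).
refresh_pattern -i /EDM/Documents/ 1440 20% 10080 override-expire ignore-reload
```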

Regards
Henrik






Re: [squid-users] How to exempt ftp from squid?

2007-02-02 Thread Michel Santos

John Oliver disse na ultima mensagem:
> I banged up an autoconf.pac script (which isn't easy, considering the
> only slivers of documentation I can find are a good ten years old!).
> It looks like my browser just assumes that ftp should go through squid,
> and that doesn't seem to want to work.  Since I see no real value in
> proxying FTP, how do I exempt FTP in the autoconf.pac script?

probably a browser configuration: "use the same proxy for all protocols"

something like

  if (shExpMatch(url, "ftp:*")) return "DIRECT";

might work
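A fuller sketch of such a PAC function (the proxy host and port are placeholders; this variant uses a plain prefix test instead of shExpMatch, which behaves the same for this case):

```javascript
// Minimal PAC sketch: send FTP direct, everything else through Squid.
function FindProxyForURL(url, host) {
    if (url.substring(0, 4) == "ftp:")
        return "DIRECT";
    // Fall back to DIRECT if the proxy is unreachable.
    return "PROXY proxy.example.com:3128; DIRECT";
}
```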

Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




[squid-users] WCCP issue

2007-02-02 Thread Awie
Dear all,

After running for more than 1 year, our proxy cannot serve requests to
hotmail.com normally (only hotmail.com; Yahoo mail is OK). Our proxy is
running Squid 2.5S14 with WCCPv1 (WCCP module version 1.7). We tried to
upgrade to 2.6S9 but we got the same result.

If we run the proxy as non-transparent, or as transparent using "route
map", requests to hotmail.com are served normally. The problem only
happens with WCCP.

Would you tell me what I should do to solve the problem? I prefer to use
WCCP instead of "route map", which consumes more CPU and causes problems
when our proxy dies.

Thanks a lot for your kind help.

Thx & Rgds,

Awie





Re: [squid-users] Squid Under High Load

2007-02-02 Thread Michel Santos

Adrian Chadd disse na ultima mensagem:
...
>
> So as long as you're able to store small objects seperately from large
> objects and make sure one doesn't starve IO from the other then you'll
> be able to both enjoy your cake and eat it too. :P
>

that is really *the* issue

I guess COSS certainly is the first step in this direction

when you separate by object size you can even tune the file system and OS
exactly for that kind of file size, which can give you an extreme
performance boost

since you can manage cache_dir very well with minimum_object_size and
maximum_object_size (unfortunately there is no
minimum_object_size_in_memory ...) this is an easy approach
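Split across two Squid instances, that approach might look like the following sketch (ports, paths, sizes and cache_dir types are illustrative):

```
# squid-small.conf -- small objects, memory only (null store):
http_port 3128
cache_dir null /tmp
maximum_object_size 300 KB

# squid-disk.conf -- mid-size objects on disk:
http_port 3129
cache_dir diskd /cache1 20000 16 256
minimum_object_size 300 KB
maximum_object_size 50 MB
```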

often it seems difficult to do it on one machine, and on a small network
there may not be a budget for 2 or 3 cache servers, or the benefit does
not justify it

Now, an interesting point is that I can tune the partition for larger
files, since a COSS store is one large file, and so I could use diskd
together with it.

Today I use one Squid instance with a null-fs cache_dir, storing only
small objects (up to 300 KB) in memory, and two more instances storing
mid-size objects (from 300 KB up) on disk. I have another server with
very large disks, used as a proxy-only parent, just to store objects
>50 MB.

Most people warn me about memory requirements and so on, but I do not
care; hardware is easy and cheap (even when expensive) because in the
end it pays off. I get a constant 30-40% tcp:80 benefit, and in peaks
very much more, 200-500% to be exact. I measure the incoming tcp:80 on
the outside of my transparent proxy router and the outgoing tcp:80 on
the inside. That means, supposing 2 Mb of tcp:80 incoming, I deliver
2.6-2.8 Mb - that is money. Maybe for some countries this is not
important, but here we pay about US$1200-1500 per 2 Mb, so each byte I
save is important, no matter how


Michel
...





Datacenter Matik http://datacenter.matik.com.br
E-Mail e Data Hosting Service para Profissionais.




Re: [squid-users] digest authentication: request interval

2007-02-02 Thread Henrik Nordstrom
tor 2007-02-01 klockan 19:52 +0100 skrev [EMAIL PROTECTED]:

> Version: 2.5.9-10sarge2

Should work, but please try upgrading. Current release is 2.6.STABLE9.
2.5.STABLE9 is almost two years old, and there have been many changes
since then.

> auth_param digest program /usr/lib/squid/digest_pw_auth /etc/squid/digpass
> auth_param digest children 10
> auth_param digest realm wireless
> auth_param digest nonce_garbage_interval 30 minutes
> auth_param digest nonce_max_duration 50 minutes
> auth_param digest check_nonce_count on

Try disabling the nonce count check. Many clients get this wrong.

> auth_param digest nonce_strictness off
> 
> authenticate_cache_garbage_interval 1 hour
> authenticate_ttl 1 hour
> 
> I've modified the two parameters above, and now the auth request 
> interval is much longer, but I don't understand why.

Adjusting the TTLs shouldn't make any difference.. Hmm... but maybe it
does for Digest..  If you see the same problem with check_nonce_count
off then please file a bug report.

Regards
Henrik

