Re: [squid-users] Forwarding loop after rebooting.
Many thanks for the replies. I have blocked http_access to everything except the child squids to prevent exploitation. I'm still a little confused as to why this problem only happens when the master proxy is down for a short period. Maybe the negative hits were causing it to redirect to itself, and the requests were then denied because the child squids expected the proxy to act as a proxy rather than just an accelerator. An interesting 'gotcha', considering the setup has been running fine for about 8 months. Thanks again!

On 24/04/06, Henrik Nordstrom [EMAIL PROTECTED] wrote:

Sun 2006-04-23 at 23:48 +0100, Mark Stevens wrote:

2006/04/23 23:24:23| clientAccessCheck: proxy request denied in accel_only mode

This is important... your Squid is being used as a peer proxy, but your configuration does not allow this Squid to be used as a proxy (only as an accelerator).

Access log extract:

10.1.1.3 - - [23/Apr/2006:23:24:23 +0100] GET http://myurl.mydomain.com/myfolder1/ HTTP/1.0 403 1401 TCP_DENIED:NONE
10.1.1.3 - - [23/Apr/2006:23:24:23 +0100] GET http://myurl.mydomain.com/myfolder1/ HTTP/1.0 403 1427 TCP_MISS:FIRST_UP_PARENT

It looks to me like your Squid uses itself as a parent. What cache_peer statements do you have? Do any of them point back to this Squid, either directly or indirectly via cache_peer statements at that peer?

Related note: if you have multiple Squids clustered under the same visible name, make sure each has a unique unique_hostname set.

Regards
Henrik
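The lockdown Mark describes (allowing only the child squids to use the accelerator) might look like the following squid.conf fragment. This is a sketch only; the ACL name and the source addresses are hypothetical examples, not taken from the thread:

```
# Allow only the child squids to use this accelerator; deny everyone else.
# The addresses below are hypothetical - substitute the real child-squid IPs.
acl child_squids src 10.1.1.2 10.1.1.3
http_access allow child_squids
http_access deny all
```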
Re: [squid-users] Forwarding loop after rebooting.
Hello again. I've managed to replicate the error in a development environment. My dev setup is two squids accelerating a master squid, which in turn accelerates a webserver. The two child squids sit behind a load balancer. To reproduce the problem, I shut down the master squid and generate HTTP load against the child squids via the load balancer; after about 5 minutes I start the master squid again. Here is an example of the response to a valid query that worked before the replication test.

HTTP request generated by wget:

Connecting to myurl.mydomain.com[172.23.161.100]:80... connected.
HTTP request sent, awaiting response...
HTTP/1.0 403 Forbidden
Server: squid/2.5.STABLE12
Mime-Version: 1.0
Date: Sun, 23 Apr 2006 22:24:23 GMT
Content-Type: text/html
Content-Length: 1101
Expires: Sun, 23 Apr 2006 22:24:23 GMT
X-Squid-Error: ERR_ACCESS_DENIED 0
X-Cache: MISS from master.mydomain.net
X-Cache: MISS from master.mydomain.net
X-Cache: MISS from sibling1.object1.com
Connection: close
22:18:40 ERROR 403: Forbidden.
Extract from cache.log:

2006/04/23 23:24:23| The request GET http://myurl.mydomain.com:80/myfolder1/ is ALLOWED, because it matched 'all'
2006/04/23 23:24:23| clientAccessCheck: proxy request denied in accel_only mode
2006/04/23 23:24:23| The request GET http://myurl.mydomain.com/myfolder1/ is DENIED, because it matched 'all'
2006/04/23 23:24:23| storeEntryValidLength: 233 bytes too big; '8E293D7F9154EF3C2032A87976FAFCA1'
2006/04/23 23:24:23| clientReadRequest: FD 215: no data to process ((11) Resource temporarily unavailable)
2006/04/23 23:24:23| The reply for GET http://myurl.mydomain.com/myfolder1/ is ALLOWED, because it matched 'all'

Access log extract:

10.1.1.3 - - [23/Apr/2006:23:24:23 +0100] GET http://myurl.mydomain.com/myfolder1/ HTTP/1.0 403 1401 TCP_DENIED:NONE
10.1.1.3 - - [23/Apr/2006:23:24:23 +0100] GET http://myurl.mydomain.com/myfolder1/ HTTP/1.0 403 1427 TCP_MISS:FIRST_UP_PARENT

I have managed to remove the forwarding loop error by instructing squid not to accept requests via itself, as recommended, but the content error still exists. My config doesn't contain a negative_ttl entry, so I assume it is the default of 5 minutes. Any ideas? TIA. Mark.

On 18/03/06, Henrik Nordstrom [EMAIL PROTECTED] wrote:

Sat 2006-03-18 at 19:23, Mark Stevens wrote:

I will perform further testing against the redirect rules. However, what I find strange is that the problem only happens after downtime. To resolve the problem I used an alternative redirect_rules file with the same squid.conf file, and the looping errors go away.

How your redirector processes its rules (or not) is not a Squid issue or concern; Squid relies on the redirector of your choice to do its job. Maybe your redirector relies on DNS lookups or something else not yet available at the time Squid starts during system bootup? I have seen people bitten by such issues in the past.
Regards
Henrik
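The negative caching Mark mentions above is controlled by the negative_ttl directive in squid.conf, whose default in Squid 2.5 is 5 minutes. A sketch of shortening it so that errors cached while the master was down expire quickly (the value shown is only an example, not a recommendation from the thread):

```
# Cache negative responses (e.g. 403/404 errors) for less than the
# 5-minute default, so stale errors from a master outage expire sooner.
negative_ttl 1 minute
```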
Re: [squid-users] Forwarding loop after rebooting.
Sun 2006-04-23 at 23:48 +0100, Mark Stevens wrote:

2006/04/23 23:24:23| clientAccessCheck: proxy request denied in accel_only mode

This is important... your Squid is being used as a peer proxy, but your configuration does not allow this Squid to be used as a proxy (only as an accelerator).

Access log extract:

10.1.1.3 - - [23/Apr/2006:23:24:23 +0100] GET http://myurl.mydomain.com/myfolder1/ HTTP/1.0 403 1401 TCP_DENIED:NONE
10.1.1.3 - - [23/Apr/2006:23:24:23 +0100] GET http://myurl.mydomain.com/myfolder1/ HTTP/1.0 403 1427 TCP_MISS:FIRST_UP_PARENT

It looks to me like your Squid uses itself as a parent. What cache_peer statements do you have? Do any of them point back to this Squid, either directly or indirectly via cache_peer statements at that peer?

Related note: if you have multiple Squids clustered under the same visible name, make sure each has a unique unique_hostname set.

Regards
Henrik
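Henrik's "related note" refers to the visible_hostname and unique_hostname directives in squid.conf. A sketch for one clustered instance (the hostnames below are hypothetical placeholders, not from the thread):

```
# All cluster members may share the same visible_hostname for consistent
# error pages, but each instance needs its own unique_hostname so that
# Squid's forwarding-loop detection (via the Via header) can tell the
# instances apart.
visible_hostname myurl.mydomain.com
unique_hostname slave1.internal.mydomain.com
```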
[squid-users] Forwarding loop after rebooting.
Hi group, my first post so please be gentle :) I'm a sysadmin who has inherited a small cluster of squid servers. The setup is as follows:

4 x Squid slave accelerators that accelerate a master squid.
1 x Master squid, running a custom redirect script written in Perl, that accelerates a webserver.
1 x Backend webserver.

Each slave runs 4 instances of squid accelerating separate sites, and the master likewise runs 4 instances. The farm is constantly under a fair load - roughly half a million hits a day. The setup works fine; however, recently when the master server was taken down for repair and brought back up again with the same configuration, it failed to serve content for the busiest instance, and every request returned a TCP_DENIED 403 error. The following error was reported in cache.log:

2006/03/18 06:04:52| WARNING: Forwarding loop detected for:
GET /folder1/subfolder/subfolder/ HTTP/1.0
If-Modified-Since: Sat, 14 Jan 2006 01:44:45 GMT
Host: 192.168.0.10
Accept: */*
From: googlebot(at)googlebot.com
User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Accept-Encoding: gzip
Via: 1.1 slave1.mydomain.com:80 (webserver/webserver), 1.0 master.mydomain.com:80 (webserver/webserver) [the master.mydomain.com hop repeated ~40 times]

This has happened previously when the server rebooted. It is likely that the master squid service is getting hammered by all slaves as soon as it is brought back into service; could the fact that it's under such heavy load as soon as it starts up be causing a problem in Squid?

Squid version: squid-2.5.STABLE10
O/S: 5.8 Generic_117350-12 sun4u sparc SUNW,Ultra-80

I have altered the output to respect the privacy of the client.
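The loop above is visible in the Via header: each proxy hop appends its hostname and port, so the same host appearing more than once means the request has already passed through that proxy. This is the essence of the check behind Squid's "Forwarding loop detected" warning (and why each instance needs a distinct unique_hostname). A minimal illustrative sketch in Python - not Squid's actual code:

```python
def has_forwarding_loop(via_header: str) -> bool:
    """Return True if any proxy host appears more than once in a Via header.

    Each comma-separated entry looks like "1.0 host:port (comment)";
    a repeated host:port means the request already passed through that proxy.
    """
    hosts = []
    for hop in via_header.split(","):
        parts = hop.split()
        if len(parts) >= 2:
            hosts.append(parts[1])  # the host:port field of the hop
    return len(hosts) != len(set(hosts))


# An abbreviated version of the header from the log trips the check:
looped = ("1.1 slave1.mydomain.com:80 (squid), "
          "1.0 master.mydomain.com:80 (squid), "
          "1.0 master.mydomain.com:80 (squid)")
print(has_forwarding_loop(looped))  # True
```

Note that if two different instances report the same hostname in Via, a single legitimate pass through the cluster already looks like a loop, which is what Henrik's unique_hostname advice later in the thread guards against.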
Re: [squid-users] Forwarding loop after rebooting.
Sat 2006-03-18 at 13:47, Mark Stevens wrote:

This has happened previously when the server rebooted. It is likely that the master squid service is getting hammered by all slaves as soon as it is brought back into service; could the fact that it's under such heavy load as soon as it starts up be causing a problem in Squid?

No. It is 99.9% certain to be a configuration error. Forwarding loops occur when the configuration of how Squid should route requests makes Squid send a request to itself.

Hmm.. you mentioned you are using a redirector to route the requests. If so, make sure you have not enabled redirector_bypass (it defaults to off). Also verify that the redirector is actually working.

Regards
Henrik
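The directive Henrik refers to can be verified in squid.conf; a sketch of the safe setting (matching the default he mentions):

```
# Keep redirector_bypass off (the default). If it were on and all
# redirector processes were busy, requests would skip the redirector and
# be routed with the unrewritten URL, which in an accelerator setup can
# send them straight back into the accelerator - i.e. a forwarding loop.
redirector_bypass off
```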
Re: [squid-users] Forwarding loop after rebooting.
On 3/18/06, Mark Stevens [EMAIL PROTECTED] wrote:

Sorry if this is a double post.

Squid version: squid-2.5.STABLE10
O/S: 5.8 Generic_117350-12 sun4u sparc SUNW,Ultra-80

[Mark's description of the accelerator cluster and the forwarding loop, as in his earlier post]

2006/03/18 06:04:52| WARNING: Forwarding loop detected for: ...

http://www.squid-cache.org/Doc/FAQ/FAQ-11.html#ss11.31

M.
Re: [squid-users] Forwarding loop after rebooting.
Sat 2006-03-18 at 19:23, Mark Stevens wrote:

I will perform further testing against the redirect rules. However, what I find strange is that the problem only happens after downtime. To resolve the problem I used an alternative redirect_rules file with the same squid.conf file, and the looping errors go away.

How your redirector processes its rules (or not) is not a Squid issue or concern; Squid relies on the redirector of your choice to do its job. Maybe your redirector relies on DNS lookups or something else not yet available at the time Squid starts during system bootup? I have seen people bitten by such issues in the past.

Regards
Henrik