Re: [squid-users] Forwarding loop on squid 3.3.8

2014-08-06 Thread James Michels
Ok, but if NAT is expected on the Squid box exclusively, how do I
redirect all the outgoing traffic sent over port 80 from a client
to another box (specifically the one where Squid runs) without using
such NAT?

I thought packets were not mangled on the same network unless
specifically done via iptables. Does that mean that the squid3 box
currently has trouble resolving the host domain, e.g. google.com, and
therefore tries relaying through the original packet's IP? It seems to
resolve it fine via the 'host' or 'ping' commands.

Thanks

James

2014-08-06 14:52 GMT+01:00 Amos Jeffries :
> On 7/08/2014 1:26 a.m., Karma sometimes Hurts wrote:
>> Greetings,
>>
>> I'm trying to set up a transparent proxy on Squid 3.3.8, Ubuntu Trusty
>> 14.04 from the official APT repository. All boxes including
>> the Squid box are behind the same router, but the squid box is on a
>> different server than the clients. It seems that for some reason the
>> configuration on the squid3 box side is missing something, as a
>> forwarding loop is produced.
>>
>> This is the configuration of the squid3 box:
>>
>>   visible_hostname squidbox.localdomain.com
>>   acl SSL_ports port 443
>>   acl Safe_ports port 80  # http
>>   acl Safe_ports port 21  # ftp
>>   acl Safe_ports port 443 # https
>>   acl Safe_ports port 70  # gopher
>>   acl Safe_ports port 210 # wais
>>   acl Safe_ports port 1025-65535  # unregistered ports
>>   acl Safe_ports port 280 # http-mgmt
>>   acl Safe_ports port 488 # gss-http
>>   acl Safe_ports port 591 # filemaker
>>   acl Safe_ports port 777 # multiling http
>>   acl CONNECT method CONNECT
>>   http_access allow all
>>   http_access deny !Safe_ports
>>   http_access deny CONNECT !SSL_ports
>>   http_access allow localhost manager
>>   http_access deny manager
>>   http_access allow localhost
>>   http_access allow all
>>   http_port 3128 intercept
>>   http_port 0.0.0.0:3127
>>
>> This rule has been added to the client boxes:
>>
>>   iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination
>> 192.168.1.100:3128
>
> That's the problem. NAT is required on the Squid box *only*.
>
>>
>> 192.168.1.100 corresponds to the squid3 box. In the log below
>> 192.168.1.20 is one of the clients.
>
>
> When receiving intercepted traffic, current Squid validates the
> destination IP address against the claimed Host: header domain's DNS
> records, to avoid several nasty security vulnerabilities when connecting
> to that Host domain. If that fails, the traffic is instead relayed to the
> original IP:port address in the TCP packet. That address arriving at
> your Squid box was 192.168.1.100:3128 ... rinse, repeat ...
>
> Use policy routing, or a tunnel (GRE, VPN, etc) that does not alter the
> packet src/dst IP addresses to get traffic onto the Squid box.
>
> Amos


Re: [squid-users] Forwarding loop on squid 3.3.8

2014-08-06 Thread Amos Jeffries
On 7/08/2014 1:26 a.m., Karma sometimes Hurts wrote:
> Greetings,
> 
> I'm trying to set up a transparent proxy on Squid 3.3.8, Ubuntu Trusty
> 14.04 from the official APT repository. All boxes including
> the Squid box are behind the same router, but the squid box is on a
> different server than the clients. It seems that for some reason the
> configuration on the squid3 box side is missing something, as a
> forwarding loop is produced.
> 
> This is the configuration of the squid3 box:
> 
>   visible_hostname squidbox.localdomain.com
>   acl SSL_ports port 443
>   acl Safe_ports port 80  # http
>   acl Safe_ports port 21  # ftp
>   acl Safe_ports port 443 # https
>   acl Safe_ports port 70  # gopher
>   acl Safe_ports port 210 # wais
>   acl Safe_ports port 1025-65535  # unregistered ports
>   acl Safe_ports port 280 # http-mgmt
>   acl Safe_ports port 488 # gss-http
>   acl Safe_ports port 591 # filemaker
>   acl Safe_ports port 777 # multiling http
>   acl CONNECT method CONNECT
>   http_access allow all
>   http_access deny !Safe_ports
>   http_access deny CONNECT !SSL_ports
>   http_access allow localhost manager
>   http_access deny manager
>   http_access allow localhost
>   http_access allow all
>   http_port 3128 intercept
>   http_port 0.0.0.0:3127
> 
> This rule has been added to the client boxes:
> 
>   iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination
> 192.168.1.100:3128

That's the problem. NAT is required on the Squid box *only*.

> 
> 192.168.1.100 corresponds to the squid3 box. In the log below
> 192.168.1.20 is one of the clients.


When receiving intercepted traffic, current Squid validates the
destination IP address against the claimed Host: header domain's DNS
records, to avoid several nasty security vulnerabilities when connecting
to that Host domain. If that fails, the traffic is instead relayed to the
original IP:port address in the TCP packet. That address arriving at
your Squid box was 192.168.1.100:3128 ... rinse, repeat ...

Use policy routing, or a tunnel (GRE, VPN, etc) that does not alter the
packet src/dst IP addresses to get traffic onto the Squid box.
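A commonly used arrangement along those lines (a sketch only; the subnet, interface name, and port are assumptions, not taken from this thread) is to leave client packets untouched, policy-route port-80 traffic to the Squid box at the router, and do the NAT redirection on the Squid box itself:

```shell
# On the router: policy-route client port-80 traffic toward the Squid box
# (192.168.1.100) WITHOUT rewriting the packets' src/dst addresses.
ip rule add fwmark 1 table 100
ip route add default via 192.168.1.100 table 100
iptables -t mangle -A PREROUTING -s 192.168.1.0/24 -p tcp --dport 80 \
         -j MARK --set-mark 1

# On the Squid box ONLY: redirect arriving port-80 traffic into the
# intercept port. REDIRECT preserves the original destination (readable
# via SO_ORIGINAL_DST), which the "intercept" port mode relies on.
# Locally generated traffic (Squid's own outgoing requests) does not
# traverse PREROUTING, so it is not looped back into the proxy.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
         -j REDIRECT --to-ports 3128
```

With this in place, the client-side OUTPUT DNAT rule quoted above should be removed entirely.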

Amos


[squid-users] Forwarding loop on squid 3.3.8

2014-08-06 Thread Karma sometimes Hurts
Greetings,

I'm trying to set up a transparent proxy on Squid 3.3.8, Ubuntu Trusty
14.04 from the official APT repository. All boxes including
the Squid box are behind the same router, but the squid box is on a
different server than the clients. It seems that for some reason the
configuration on the squid3 box side is missing something, as a
forwarding loop is produced.

This is the configuration of the squid3 box:

  visible_hostname squidbox.localdomain.com
  acl SSL_ports port 443
  acl Safe_ports port 80  # http
  acl Safe_ports port 21  # ftp
  acl Safe_ports port 443 # https
  acl Safe_ports port 70  # gopher
  acl Safe_ports port 210 # wais
  acl Safe_ports port 1025-65535  # unregistered ports
  acl Safe_ports port 280 # http-mgmt
  acl Safe_ports port 488 # gss-http
  acl Safe_ports port 591 # filemaker
  acl Safe_ports port 777 # multiling http
  acl CONNECT method CONNECT
  http_access allow all
  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow localhost manager
  http_access deny manager
  http_access allow localhost
  http_access allow all
  http_port 3128 intercept
  http_port 0.0.0.0:3127

This rule has been added to the client boxes:

  iptables -t nat -A OUTPUT -p tcp --dport 80 -j DNAT --to-destination
192.168.1.100:3128

192.168.1.100 corresponds to the squid3 box. In the log below
192.168.1.20 is one of the clients.

2014/08/06 15:13:05| Starting Squid Cache version 3.3.8 for
x86_64-pc-linux-gnu...
2014/08/06 15:13:27.900| client_side.cc(2316) parseHttpRequest: HTTP
Client local=192.168.1.100:3128 remote=192.168.1.20:54341 FD 8
flags=33
2014/08/06 15:13:27.901| client_side.cc(2317) parseHttpRequest: HTTP
Client REQUEST:
-
GET / HTTP/1.1
Host: www.google.com
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Firefox/24.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,es;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Cookie: 
PREF=ID=119a6e25e6eccb3b:U=95e37afd611b606e:FF=0:TM=1404500940:LM=1404513627:S=r7E-Xed2muOOp-ay;
NID=67=M5geOtyDtp5evLidOfam1uzfhl6likehxjXo7KcamK8c5jXptfx9zJc-5L7jhvYvnfTvtXYJ3yza7cE8fRq2x0iyVEHN9Pn2hz9urrC_Qt_xNH6IQCoT-3-eXTwb2h4f;
OGPC=5-25:
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache

--
2014/08/06 15:13:27.902| http.cc(2204) sendRequest: HTTP Server
local=192.168.1.100:43140 remote=192.168.1.100:3128 FD 11 flags=1
2014/08/06 15:13:27.902| http.cc(2205) sendRequest: HTTP Server REQUEST:
-
GET / HTTP/1.1
Host: www.google.com
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Firefox/24.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,es;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Cookie: 
PREF=ID=119a6e25e6eccb3b:U=95e37afd611b606e:FF=0:TM=1404500940:LM=1404513627:S=r7E-Xed2muOOp-ay;
NID=67=M5geOtyDtp5evLidOfam1uzfhl6likehxjXo7KcamK8c5jXptfx9zJc-5L7jhvYvnfTvtXYJ3yza7cE8fRq2x0iyVEHN9Pn2hz9urrC_Qt_xNH6IQCoT-3-eXTwb2h4f;
OGPC=5-25:
Via: 1.1 squidbox.localdomain.com (squid/3.3.8)
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache

--
2014/08/06 15:13:27.902| client_side.cc(2316) parseHttpRequest: HTTP
Client local=192.168.1.100:3128 remote=192.168.1.100:43140 FD 13
flags=33
2014/08/06 15:13:27.902| client_side.cc(2317) parseHttpRequest: HTTP
Client REQUEST:
-
GET / HTTP/1.1
Host: www.google.com
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Firefox/24.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,es;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Cookie: 
PREF=ID=119a6e25e6eccb3b:U=95e37afd611b606e:FF=0:TM=1404500940:LM=1404513627:S=r7E-Xed2muOOp-ay;
NID=67=M5geOtyDtp5evLidOfam1uzfhl6likehxjXo7KcamK8c5jXptfx9zJc-5L7jhvYvnfTvtXYJ3yza7cE8fRq2x0iyVEHN9Pn2hz9urrC_Qt_xNH6IQCoT-3-eXTwb2h4f;
OGPC=5-25:
Via: 1.1 squidbox.localdomain.com (squid/3.3.8)
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache

--
2014/08/06 15:13:27.903| client_side.cc(1377) sendStartOfMessage: HTTP
Client local=192.168.1.100:3128 remote=192.168.1.100:43140 FD 13
flags=33
2014/08/06 15:13:27.903| client_side.cc(1378) sendStartOfMessage: HTTP
Client REPLY:
-
HTTP/1.1 403 Forbidden
Server: squid/3.3.8
Mime-Version: 1.0
Date: Fri, 18 Jul 2014 10:33:27 GMT
Content-Type: text/html
Content-Length: 3932
X-Squid-Error: ERR_ACCESS_DENIED 0
Vary: Accept-Language
Content-Language: en-US
X-Cache: MISS from squidbox.localdomain.com
X-Cache-Lookup: MISS from squidbox.localdomain.com:3127
Via: 1.1 squidbox.localdomain.com (squid/3.3.8)
Connection: keep-alive

--
2014/08/06 15:13:27.903| ctx: enter level  0: 'http://www.google.com/'
2014/08/06 15:13:27.903| http.cc(761) processReplyHeader: HTTP Server
local=192.168.1.100:43140 remote=192.168.1.100:3128 FD 11 fla

Re: [squid-users] Forwarding loop error after upgrading to 3.2.13 or 3.3.8

2013-08-06 Thread Athla

On 08/06/2013 08:43 PM, Amos Jeffries wrote:


Whichever of your Squids is logging this is *receiving* request traffic
from solidus. As solidus is your parent proxy, *neither* of your Squids
should be receiving traffic from it.


Well, that forward error is on solidus (the parent) log.



I suspect this is a break in how your transparent interception is
operating, or not operating as the case may be.


Amos


Still, I wonder why everything works fine right now, as I am using 3.1.23.
I cannot pinpoint what the problem is here. :(


Re: [squid-users] Forwarding loop error after upgrading to 3.2.13 or 3.3.8

2013-08-06 Thread Antony Stone
On Tuesday 06 August 2013 at 05:43:49, yula athla wrote:

> I have a parent-child Squid installation.
> 
> LAN->squid1->squid2->Internet.
>  ^
>  ^
>WAN
> 
> squid2 works as a transparent proxy for WAN connections.
> 
> I was using 3.2.13 for squid2 (child) and 3.1.23 (parent). Everything
> works correctly.

I don't understand the use of parent and child here.

You say squid2 is the child.

You don't actually say that squid1 is the parent, but that's the implication, 
since there are only two machines.

For requests from the LAN, the first proxy they hit is squid1, which then
passes them on to squid2, which then makes the request to the outside world.

That means that squid1 is the child and squid2 is the parent.


Also, you say that "squid2 works as a transparent proxy for WAN connections", 
but unless the spacing in my mail client's rendition of your ASCII art is 
messed up, that WAN connection is over the Internet (so how does your routing 
work to enable it to be a transparent proxy)?

Please could you repeat the diagram using correct hostnames and IPs, and 
explain how your WAN link gets to squid2?


Antony.

-- 
Most people have more than the average number of legs.

 Please reply to the list;
   please don't CC me.


Re: [squid-users] Forwarding loop error after upgrading to 3.2.13 or 3.3.8

2013-08-06 Thread Eliezer Croitoru
Hey there,

How does your setup work?
What are the LAN IP addresses, what is the WAN IP, and how do you connect
the two proxies? Are they chained?
What OS are you using?
Is it a self-compiled Squid?
What is the output of "squid -v"?

We must first understand how your setup is built and works, to make sure
you are doing it the right way.

Eliezer

On 08/06/2013 06:43 AM, yula athla wrote:
> I have a parent-child Squid installation.
> 
> LAN->squid1->squid2->Internet.
>  ^
>  ^
>WAN
> 
> squid2 works as a transparent proxy for WAN connections.
> 
> I was using 3.2.13 for squid2 (child) and 3.1.23 (parent). Everything
> works correctly.
> 
> But, if I upgrade the parent to 3.2.13 or 3.3.8, I always get this error:
> 
> 
> 2013/08/06 09:54:29 kid1| WARNING: Forwarding loop detected for:
> GET /favicon.ico HTTP/1.1
> Host: juventus.com
> User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:18.0) Gecko/20100101 
> Firefox/18.0
> Accept: text/html,application/xhtml+
> xml,application/xml;q=0.9,*/*;q=0.8
> Accept-Language: en-US,en;q=0.5
> Accept-Encoding: gzip, deflate
> DNT: 1
> Cookie: __utma=1.525570529.1375757581.1375757581.1375757581.1;
> __utmb=1.1.10.1375757581; __utmc=1;
> __utmz=1.1375757581.1.1.utmcsr=(direct)|utm
> ccn=(direct)|utmcmd=(none); __qca=P0-214282750-1375757580942
> Via: 1.1 otacon (squid), 1.1 solidus (squid)
> X-Forwarded-For: 127.0.0.1
> Cache-Control: max-age=259200
> Connection: keep-alive
> 
> otacon is the hostname of child squid.
> solidus is the hostname of parent squid.
> 
> 
> The error only occurs for LAN connections. Things work fine for any
> WAN connection (with squid 3.1.23, 3.2.13, or 3.3.8).
> 
> Parent config:
> http://pastebin.com/tsBCPJ5r
> 
> Child config:
> http://pastebin.com/dz0u7g7b
> 
> 
> Thanks for the help. :)
> 



Re: [squid-users] Forwarding loop error after upgrading to 3.2.13 or 3.3.8

2013-08-06 Thread Amos Jeffries

On 6/08/2013 3:43 p.m., yula athla wrote:

I have a parent-child Squid installation.

LAN->squid1->squid2->Internet.
  ^
  ^
WAN

squid2 works as a transparent proxy for WAN connections.

I was using 3.2.13 for squid2 (child) and 3.1.23 (parent). Everything
works correctly.

But, if I upgrade the parent to 3.2.13 or 3.3.8, I always get this error:


2013/08/06 09:54:29 kid1| WARNING: Forwarding loop detected for:
GET /favicon.ico HTTP/1.1
Host: juventus.com
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0
Accept: text/html,application/xhtml+
xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Cookie: __utma=1.525570529.1375757581.1375757581.1375757581.1;
__utmb=1.1.10.1375757581; __utmc=1;
__utmz=1.1375757581.1.1.utmcsr=(direct)|utm
ccn=(direct)|utmcmd=(none); __qca=P0-214282750-1375757580942
Via: 1.1 otacon (squid), 1.1 solidus (squid)
X-Forwarded-For: 127.0.0.1
Cache-Control: max-age=259200
Connection: keep-alive

otacon is the hostname of child squid.
solidus is the hostname of parent squid.


Whichever of your Squids is logging this is *receiving* request traffic 
from solidus. As solidus is your parent proxy, *neither* of your Squids 
should be receiving traffic from it.


I suspect this is a break in how your transparent interception is 
operating, or not operating as the case may be.



The error only occurs for LAN connections. Things work fine for any
WAN connection (with squid 3.1.23, 3.2.13, or 3.3.8).

Parent config:
http://pastebin.com/tsBCPJ5r

Child config:
http://pastebin.com/dz0u7g7b


Thanks for the help. :)


Amos


[squid-users] Forwarding loop error after upgrading to 3.2.13 or 3.3.8

2013-08-05 Thread yula athla
I have a parent-child Squid installation.

LAN->squid1->squid2->Internet.
 ^
 ^
   WAN

squid2 works as a transparent proxy for WAN connections.

I was using 3.2.13 for squid2 (child) and 3.1.23 (parent). Everything
works correctly.

But, if I upgrade the parent to 3.2.13 or 3.3.8, I always get this error:


2013/08/06 09:54:29 kid1| WARNING: Forwarding loop detected for:
GET /favicon.ico HTTP/1.1
Host: juventus.com
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:18.0) Gecko/20100101 Firefox/18.0
Accept: text/html,application/xhtml+
xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
DNT: 1
Cookie: __utma=1.525570529.1375757581.1375757581.1375757581.1;
__utmb=1.1.10.1375757581; __utmc=1;
__utmz=1.1375757581.1.1.utmcsr=(direct)|utm
ccn=(direct)|utmcmd=(none); __qca=P0-214282750-1375757580942
Via: 1.1 otacon (squid), 1.1 solidus (squid)
X-Forwarded-For: 127.0.0.1
Cache-Control: max-age=259200
Connection: keep-alive

otacon is the hostname of child squid.
solidus is the hostname of parent squid.


The error only occurs for LAN connections. Things work fine for any
WAN connection (with squid 3.1.23, 3.2.13, or 3.3.8).

Parent config:
http://pastebin.com/tsBCPJ5r

Child config:
http://pastebin.com/dz0u7g7b


Thanks for the help. :)


Re: [squid-users] Forwarding loop detected

2010-06-29 Thread Edoardo COSTA SANSEVERINO

On 06/29/2010 01:07 PM, Amos Jeffries wrote:

Edoardo COSTA SANSEVERINO wrote:

Hi all,

I'm getting the following error and I just can't figure out what I'm 
doing wrong. It worked for a while, but now I get the following error:


Browser error
-
ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: http://test.example.com/

The following error was encountered:

* Access Denied.

  Access control configuration prevents your request from being 
allowed at this time. Please contact your service provider if you 
feel this is incorrect.


Your cache administrator is webmaster.
Generated Tue, 29 Jun 2010 08:01:45 GMT by localhost (squid/3.0.STABLE8)


Squid Error
---
2010/06/29 07:41:22.244| The request GET http://test.example.com/ is 
ALLOWED, because it matched 'sites_server_web'

2010/06/29 07:41:22.244| WARNING: Forwarding loop detected for:
GET / HTTP/1.0
Host: test.example.com
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.3) 
Gecko/20100423 Ubuntu/10.04 (lucid) Firefox/3.6.3

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Referer: http://test.example.com/
Cookie: 
__utma=156214138.2072416337.1256440668.1263421087.1270454401.17; 
SESS404422c7e13985ed9850bca1343102d6=e6b996d3bf323193fec6e785a3356d1c; SESS4986f0d90a6abbc6006cc25a814fe1a8=1c1956864db4e7636f3e8b185b6dd6cc 


Pragma: no-cache
Via: 1.1 localhost (squid/3.0.STABLE8)
X-Forwarded-For: 192.168.1.10
Cache-Control: no-cache, max-age=259200
Connection: keep-alive


2010/06/29 07:41:22.245| The reply for GET http://test.example.com/ 
is ALLOWED, because it matched 'sites_server_web'



My current setup is as follows.  I made the page request on the 
laptop to [VMs1].



setup
-


[VMs1]--[Server/Squid/DNS/FW 1]--{ Internet }---[Server/Squid/DNS/FW 
2]-+--[VMs2]

|
   
+--[LAN]--[Laptop]




Diagram got a bit mangled. I'm guessing the Laptop was on network VMs1?




The following squid config is for [Server 1]

squid.conf
--
https_port 91.185.133.180:443 accel 
cert=/etc/ssl/mail.example.com.crt key=/etc/ssl/mail.example.com.pem 
defaultsite=mail.example.com vhost protocol=https

http_port 91.185.133.180:80 accel defaultsite=test.example.com vhost

cache_peer 192.168.122.11 parent 443 0 no-query no-digest 
originserver login=PASS ssl sslversion=3 sslflags=DONT_VERIFY_PEER 
front-end-https=on name=server_mail
cache_peer 192.168.122.12 parent 80 0 no-query originserver 
login=PASS name=server_web


acl sites_server_mail dstdomain mail.example.com
http_access allow sites_server_mail
cache_peer_access server_mail allow sites_server_mail
cache_peer_access server_mail deny all

acl sites_server_web dstdomain test.example.com test.foobar.eu 
test1.example.com

http_access allow sites_server_web
cache_peer_access server_web allow sites_server_web
cache_peer_access server_web deny all

forwarded_for on

cache_store_log none
debug_options ALL,2


The following config is for [Server 2]

squid.conf
--
https_port 192.168.1.3:443 accel 
cert=/etc/ssl/certs/deb03.example.com.crt 
key=/etc/ssl/private/deb03.example.com.pem 
defaultsite=deb03.example.com vhost protocol=https

http_port 192.168.1.1:80 accel defaultsite=deb02.example.com vhost
http_port 192.168.1.1:80 accel defaultsite=oldwww.example.com vhost

cache_peer 192.168.122.3 parent 443 0 no-query originserver 
login=PASS ssl sslversion=3 sslflags=DONT_VERIFY_PEER 
front-end-https=on name=srv03

cache_peer 192.168.122.2 parent 80 0 no-query originserver name=srv02
cache_peer 192.168.122.11 parent 80 0 no-query originserver name=srv01

acl https proto https
acl sites_srv01 dstdomain oldwww.example.com
acl sites_srv03 dstdomain deb03.example.com
acl sites_srv02 dstdomain deb02.example.com second.example.com

http_access allow sites_srv01
http_access allow sites_srv03
http_access allow sites_srv02
cache_peer_access srv01 allow sites_srv01
cache_peer_access srv03 allow sites_srv03
cache_peer_access srv02 allow sites_srv02

forwarded_for on

### Transparent proxy
http_port 192.168.1.1:3128 transparent
acl lan_network src 192.168.1.0/24
acl localnet src 127.0.0.1/255.255.255.255
http_access allow lan_network
http_access allow localnet

cache_dir ufs /var/spool/squid3 1500 16 256
###

#cache_store_log none
debug_options ALL,2


I simply can't see where the loop is. Could someone explain this to 
me or point me to the right documentation? I had a look around but 
found no relevant answer.


There are two things which may be happening:

 1) Your NAT interception rules may be catching proxy #2 outbound 
requests and looping it back into #2.
  ** FIX: Make su

Re: [squid-users] Forwarding loop detected

2010-06-29 Thread Amos Jeffries

Edoardo COSTA SANSEVERINO wrote:

Hi all,

I'm getting the following error and I just can't figure out what I'm 
doing wrong. It worked for a while, but now I get the following error:


Browser error
-
ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: http://test.example.com/

The following error was encountered:

* Access Denied.

  Access control configuration prevents your request from being 
allowed at this time. Please contact your service provider if you feel 
this is incorrect.


Your cache administrator is webmaster.
Generated Tue, 29 Jun 2010 08:01:45 GMT by localhost (squid/3.0.STABLE8)


Squid Error
---
2010/06/29 07:41:22.244| The request GET http://test.example.com/ is 
ALLOWED, because it matched 'sites_server_web'

2010/06/29 07:41:22.244| WARNING: Forwarding loop detected for:
GET / HTTP/1.0
Host: test.example.com
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.3) 
Gecko/20100423 Ubuntu/10.04 (lucid) Firefox/3.6.3

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Referer: http://test.example.com/
Cookie: __utma=156214138.2072416337.1256440668.1263421087.1270454401.17; 
SESS404422c7e13985ed9850bca1343102d6=e6b996d3bf323193fec6e785a3356d1c; 
SESS4986f0d90a6abbc6006cc25a814fe1a8=1c1956864db4e7636f3e8b185b6dd6cc

Pragma: no-cache
Via: 1.1 localhost (squid/3.0.STABLE8)
X-Forwarded-For: 192.168.1.10
Cache-Control: no-cache, max-age=259200
Connection: keep-alive


2010/06/29 07:41:22.245| The reply for GET http://test.example.com/ is 
ALLOWED, because it matched 'sites_server_web'



My current setup is as follows.  I made the page request on the laptop 
to [VMs1].



setup
-


[VMs1]--[Server/Squid/DNS/FW 1]--{ Internet }---[Server/Squid/DNS/FW 
2]-+--[VMs2]

|
   
+--[LAN]--[Laptop]




Diagram got a bit mangled. I'm guessing the Laptop was on network VMs1?




The following squid config is for [Server 1]

squid.conf
--
https_port 91.185.133.180:443 accel cert=/etc/ssl/mail.example.com.crt 
key=/etc/ssl/mail.example.com.pem defaultsite=mail.example.com vhost 
protocol=https

http_port 91.185.133.180:80 accel defaultsite=test.example.com vhost

cache_peer 192.168.122.11 parent 443 0 no-query no-digest originserver 
login=PASS ssl sslversion=3 sslflags=DONT_VERIFY_PEER front-end-https=on 
name=server_mail
cache_peer 192.168.122.12 parent 80 0 no-query originserver login=PASS 
name=server_web


acl sites_server_mail dstdomain mail.example.com
http_access allow sites_server_mail
cache_peer_access server_mail allow sites_server_mail
cache_peer_access server_mail deny all

acl sites_server_web dstdomain test.example.com test.foobar.eu 
test1.example.com

http_access allow sites_server_web
cache_peer_access server_web allow sites_server_web
cache_peer_access server_web deny all

forwarded_for on

cache_store_log none
debug_options ALL,2


The following config is for [Server 2]

squid.conf
--
https_port 192.168.1.3:443 accel 
cert=/etc/ssl/certs/deb03.example.com.crt 
key=/etc/ssl/private/deb03.example.com.pem defaultsite=deb03.example.com 
vhost protocol=https

http_port 192.168.1.1:80 accel defaultsite=deb02.example.com vhost
http_port 192.168.1.1:80 accel defaultsite=oldwww.example.com vhost

cache_peer 192.168.122.3 parent 443 0 no-query originserver login=PASS 
ssl sslversion=3 sslflags=DONT_VERIFY_PEER front-end-https=on name=srv03

cache_peer 192.168.122.2 parent 80 0 no-query originserver name=srv02
cache_peer 192.168.122.11 parent 80 0 no-query originserver name=srv01

acl https proto https
acl sites_srv01 dstdomain oldwww.example.com
acl sites_srv03 dstdomain deb03.example.com
acl sites_srv02 dstdomain deb02.example.com second.example.com

http_access allow sites_srv01
http_access allow sites_srv03
http_access allow sites_srv02
cache_peer_access srv01 allow sites_srv01
cache_peer_access srv03 allow sites_srv03
cache_peer_access srv02 allow sites_srv02

forwarded_for on

### Transparent proxy
http_port 192.168.1.1:3128 transparent
acl lan_network src 192.168.1.0/24
acl localnet src 127.0.0.1/255.255.255.255
http_access allow lan_network
http_access allow localnet

cache_dir ufs /var/spool/squid3 1500 16 256
###

#cache_store_log none
debug_options ALL,2


I simply can't see where the loop is. Could someone explain this to me 
or point me to the right documentation? I had a look around but found 
no relevant answer.


There are two things which may be happening:

 1) Your NAT interception rules may be catching proxy #2 outbound 
requests and looping it back into #2.
  ** FIX: Make sure that all the proxy machine IPv4 are listed in 
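The fix being described (cut off above) is commonly implemented by exempting the proxy machines' own addresses from the interception rules, so that the proxy's outbound requests are not NATed back into itself. A sketch only; the addresses and port here are assumptions, not taken from this thread:

```shell
# On the intercepting box: skip interception for traffic originating
# from the proxy machines themselves, then redirect everything else
# into Squid's intercept port.
iptables -t nat -A PREROUTING -p tcp --dport 80 \
         -s 192.168.1.2 -j ACCEPT        # proxy #1 (assumed address)
iptables -t nat -A PREROUTING -p tcp --dport 80 \
         -s 192.168.1.3 -j ACCEPT        # proxy #2 (assumed address)
iptables -t nat -A PREROUTING -p tcp --dport 80 \
         -j REDIRECT --to-ports 3128
```

The ACCEPT rules must come before the REDIRECT rule, since iptables processes NAT rules in order and stops at the first match.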

[squid-users] Forwarding loop detected

2010-06-29 Thread Edoardo COSTA SANSEVERINO

Hi all,

I'm getting the following error and I just can't figure out what I'm 
doing wrong. It worked for a while, but now I get the following error:


Browser error
-
ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: http://test.example.com/

The following error was encountered:

* Access Denied.

  Access control configuration prevents your request from being 
allowed at this time. Please contact your service provider if you feel 
this is incorrect.


Your cache administrator is webmaster.
Generated Tue, 29 Jun 2010 08:01:45 GMT by localhost (squid/3.0.STABLE8)


Squid Error
---
2010/06/29 07:41:22.244| The request GET http://test.example.com/ is 
ALLOWED, because it matched 'sites_server_web'

2010/06/29 07:41:22.244| WARNING: Forwarding loop detected for:
GET / HTTP/1.0
Host: test.example.com
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.3) 
Gecko/20100423 Ubuntu/10.04 (lucid) Firefox/3.6.3

Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Referer: http://test.example.com/
Cookie: __utma=156214138.2072416337.1256440668.1263421087.1270454401.17; 
SESS404422c7e13985ed9850bca1343102d6=e6b996d3bf323193fec6e785a3356d1c; 
SESS4986f0d90a6abbc6006cc25a814fe1a8=1c1956864db4e7636f3e8b185b6dd6cc

Pragma: no-cache
Via: 1.1 localhost (squid/3.0.STABLE8)
X-Forwarded-For: 192.168.1.10
Cache-Control: no-cache, max-age=259200
Connection: keep-alive


2010/06/29 07:41:22.245| The reply for GET http://test.example.com/ is 
ALLOWED, because it matched 'sites_server_web'



My current setup is as follows.  I made the page request on the laptop 
to [VMs1].



setup
-


[VMs1]--[Server/Squid/DNS/FW 1]--{ Internet }---[Server/Squid/DNS/FW 
2]-+--[VMs2]


|

   +--[LAN]--[Laptop]



The following squid config is for [Server 1]

squid.conf
--
https_port 91.185.133.180:443 accel cert=/etc/ssl/mail.example.com.crt 
key=/etc/ssl/mail.example.com.pem defaultsite=mail.example.com vhost 
protocol=https

http_port 91.185.133.180:80 accel defaultsite=test.example.com vhost

cache_peer 192.168.122.11 parent 443 0 no-query no-digest originserver 
login=PASS ssl sslversion=3 sslflags=DONT_VERIFY_PEER front-end-https=on 
name=server_mail
cache_peer 192.168.122.12 parent 80 0 no-query originserver login=PASS 
name=server_web


acl sites_server_mail dstdomain mail.example.com
http_access allow sites_server_mail
cache_peer_access server_mail allow sites_server_mail
cache_peer_access server_mail deny all

acl sites_server_web dstdomain test.example.com test.foobar.eu 
test1.example.com

http_access allow sites_server_web
cache_peer_access server_web allow sites_server_web
cache_peer_access server_web deny all

forwarded_for on

cache_store_log none
debug_options ALL,2


The following config is for [Server 2]

squid.conf
--
https_port 192.168.1.3:443 accel 
cert=/etc/ssl/certs/deb03.example.com.crt 
key=/etc/ssl/private/deb03.example.com.pem defaultsite=deb03.example.com 
vhost protocol=https

http_port 192.168.1.1:80 accel defaultsite=deb02.example.com vhost
http_port 192.168.1.1:80 accel defaultsite=oldwww.example.com vhost

cache_peer 192.168.122.3 parent 443 0 no-query originserver login=PASS 
ssl sslversion=3 sslflags=DONT_VERIFY_PEER front-end-https=on name=srv03

cache_peer 192.168.122.2 parent 80 0 no-query originserver name=srv02
cache_peer 192.168.122.11 parent 80 0 no-query originserver name=srv01

acl https proto https
acl sites_srv01 dstdomain oldwww.example.com
acl sites_srv03 dstdomain deb03.example.com
acl sites_srv02 dstdomain deb02.example.com second.example.com

http_access allow sites_srv01
http_access allow sites_srv03
http_access allow sites_srv02
cache_peer_access srv01 allow sites_srv01
cache_peer_access srv03 allow sites_srv03
cache_peer_access srv02 allow sites_srv02

forwarded_for on

### Transparent proxy
http_port 192.168.1.1:3128 transparent
acl lan_network src 192.168.1.0/24
acl localnet src 127.0.0.1/255.255.255.255
http_access allow lan_network
http_access allow localnet

cache_dir ufs /var/spool/squid3 1500 16 256
###

#cache_store_log none
debug_options ALL,2


I simply can't see where the loop is. Could someone explain this to me 
or point me to the right documentation? I had a look around but found 
no relevant answer.


Many thanks!
 -Ed


Re: [squid-users] Forwarding loop detected issue

2009-02-06 Thread Amos Jeffries

Ricardo Nuno wrote:
| Ah, okay, here is what I think is happening: 
| Squid1 does the ntlm auth, and converts it to BasicAuth for DG. 
| So Squid2 gets the BasicAuth form. which means at Squid2 the other 
| dummy_auth is needed to catch and log basic login details. 


Yes! That has it. I just used the dummy_auth for the wiki and it works.
Now it's perfect, with no loop. Just one more question, Amos: the delay
pools and ACLs that I will start working on now, I should put them all
on Squid2 (Cache), right?


Thank you very much for your time on this.

Regards,
Ricardo


Hmm, I don't think it matters, but my choice would be to put them as close 
to the clients as possible (Squid1), to save on resource wastage.


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.5


Re: [squid-users] Forwarding loop detected issue

2009-02-06 Thread Ricardo Nuno
| Ah, okay, here is what I think is happening: 
| Squid1 does the ntlm auth, and converts it to BasicAuth for DG. 
| So Squid2 gets the BasicAuth form. which means at Squid2 the other 
| dummy_auth is needed to catch and log basic login details. 

Yes! That has it. I just used the dummy_auth for the wiki and it works.
Now it's perfect, with no loop. Just one more question, Amos: the delay
pools and ACLs that I will start working on now, should I put them all
on Squid2 (Cache)?

Thank you very much for your time on this.

Regards,
Ricardo


Re: [squid-users] Forwarding loop detected issue

2009-02-06 Thread Amos Jeffries

Ricardo Nuno wrote:

Hello Amos,

| I would have thought Squid->DG->Internet would be sufficient to meet those 
| needs. With the front squid doing cache+auth of stuff that gets past the 
| DG filtering. (and DG doing less work on cacheable things its already 
| scanned once). 


I tried that too. But it does not work.

| 
| NP: Squid2 in your setup must NOT do any peering. Remember this is the 
| EXIT. All access is direct to the Internet. Its one and only client is 
| DG. 


Yes. This solved the loop issue, along with putting the cache_peer directive on
Squid1 with "never_direct allow all".

| Don't include any unique stuff into both configs. 
| If you need usernames logged at Squid2 at all use the fakeauth helper and 
| LoggingOnly setup on that squid: 
| http://wiki.squid-cache.org/ConfigExamples/Authenticate/LoggingOnly 


Now here lies my new problem. I do need to log UserName+IP in the access.log
of Squid2 (Cache). Now that the loop is fixed, it stopped recording the UserName
and only records the IP, like this:

1233913862.159  6 192.168.20.140 TCP_MISS/304 250 GET 
http://m80.clix.pt/styles/m80_txt.css - DIRECT/195.23.102.200 -

I tried to use fakeauth as you suggested, but when I do, auth stops working.
IE keeps asking for my credentials and keeps denying them.
I followed the docs on the Squid wiki, but I get this in the log:

2009/02/06 10:03:02| authenticateDecodeAuth: Unsupported or unconfigured 
proxy-auth scheme, 'Basic c2JhdGFsaGE6bm9wYXNzd29yZA=='

This is what I added on Squid2(Cache):

auth_param ntlm program /usr/lib/squid/fakeauth_auth -d -v
auth_param ntlm children 10
auth_param ntlm realm Proxy Server
auth_param ntlm credentialsttl 1 hours
auth_param ntlm casesensitive off

acl logauth proxy_auth REQUIRED
http_access deny !logauth all

I think I'm not using fakeauth the right way, or something.
Alternatively, I could use the access.log from Squid1 (NTLM) for my reports,
because there I get UserName+IP, but I think that log will contain more
false positives, like a lot of DENIED entries. Or am I wrong, and should
I just use it?


Ah, okay, here is what I think is happening:
 Squid1 does the ntlm auth, and converts it to BasicAuth for DG.
 So Squid2 gets the BasicAuth form, which means at Squid2 the other 
dummy_auth is needed to catch and log basic login details.
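Concretely, the "Unsupported or unconfigured proxy-auth scheme, 'Basic'" error above happens because Squid2 only has an ntlm scheme configured while the credentials arriving from Squid1/DG are Basic. A sketch of a logging-only Basic setup on Squid2, where the helper path and script are assumptions for illustration rather than a shipped binary:

```
# Hypothetical always-accept basic helper, e.g. /usr/local/bin/basic_ok.sh:
#   #!/bin/sh
#   while read line; do echo OK; done
#
# squid.conf on Squid2 (cache) -- record usernames without real checking
auth_param basic program /usr/local/bin/basic_ok.sh
auth_param basic children 5
auth_param basic realm Proxy Server
auth_param basic credentialsttl 2 hours

acl logauth proxy_auth REQUIRED
http_access allow logauth
```

The point is only that the scheme name in auth_param must be "basic" here, matching what the browser side actually sends at this hop.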



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE6 or 3.0.STABLE13
  Current Beta Squid 3.1.0.5


Re: [squid-users] Forwarding loop detected issue

2009-02-06 Thread Ricardo Nuno

Hello Amos,

| I would have thought Squid->DG->Internet would be sufficient to meet those 
| needs. With the front squid doing cache+auth of stuff that gets past the 
| DG filtering. (and DG doing less work on cacheable things its already 
| scanned once). 

I tried that too. But it does not work.

| 
| NP: Squid2 in your setup must NOT do any peering. Remember this is the 
| EXIT. All access is direct to the Internet. Its one and only client is 
| DG. 

Yes. This solved the loop issue, along with putting the cache_peer directive on
Squid1 with "never_direct allow all".

| Don't include any unique stuff into both configs. 
| If you need usernames logged at Squid2 at all use the fakeauth helper and 
| LoggingOnly setup on that squid: 
| http://wiki.squid-cache.org/ConfigExamples/Authenticate/LoggingOnly 

Now here lies my new problem. I do need to log UserName+IP in the access.log
of Squid2 (Cache). Now that the loop is fixed, it stopped recording the UserName
and only records the IP, like this:

1233913862.159  6 192.168.20.140 TCP_MISS/304 250 GET 
http://m80.clix.pt/styles/m80_txt.css - DIRECT/195.23.102.200 -

I tried to use fakeauth as you suggested, but when I do, auth stops working.
IE keeps asking for my credentials and keeps denying them.
I followed the docs on the Squid wiki, but I get this in the log:

2009/02/06 10:03:02| authenticateDecodeAuth: Unsupported or unconfigured 
proxy-auth scheme, 'Basic c2JhdGFsaGE6bm9wYXNzd29yZA=='

This is what I added on Squid2(Cache):

auth_param ntlm program /usr/lib/squid/fakeauth_auth -d -v
auth_param ntlm children 10
auth_param ntlm realm Proxy Server
auth_param ntlm credentialsttl 1 hours
auth_param ntlm casesensitive off

acl logauth proxy_auth REQUIRED
http_access deny !logauth all

I think I'm not using fakeauth the right way, or something.
Alternatively, I could use the access.log from Squid1 (NTLM) for my reports,
because there I get UserName+IP, but I think that log will contain more
false positives, like a lot of DENIED entries. Or am I wrong, and should
I just use it?

Thanks for all your help,
-- RIcardo




Re: [squid-users] Forwarding loop detected issue

2009-02-04 Thread Amos Jeffries
>
> Hi Amos,
>
> Thanks for your reply. I'll try to explain better what I'm trying to do
> here.
>
> | You don't appear to have a:
> |   Squid1->DG->Squid2 setup
> |
> | you do appear to have a:
> |   Squid1 -> Internet or DG -> Squid1 -> Internet setup.
> |
> | Is there any particular reason you need to have two squid?
> | The current feedback config appears to be needlessly complicated for any
> | use I can think of right now for having two instances of squid running.
>
> In the scenario DG(port 8081) --> Squid(port 3128)
> Clients are using the proxy on proxy_ip:8081
>
> Since DansGuardian can't handle NTLM auth, if I don't use 2 Squid instances
> then
> it will show in the DG access log only the IP of the client and not the
> username.
>
> DG access log will look like this (only IP is logged):
> 2009.2.4 15:12:01 - 192.168.20.11
> http://adimgs.sapo.pt/2009/odisseias/massagem.jpg *SCANNED*  GET 1956
>
> and on the Squid access log it will always show localhost, since the
> connection comes from DG:
> 1233760323.286  8 127.0.0.1 TCP_MISS/200 1597 GET
> http://h.s.sl.pt/pub/botao.html?rand=&tile=36871 - DIRECT/213.13.146.180
> text/html
>
> This would prevent me from doing reports on user usage and, I think,
> from using delay pools.

I would have thought Squid->DG->Internet would be sufficient to meet those
needs. With the front squid doing cache+auth of stuff that gets past the
DG filtering. (and DG doing less work on cacheable things its already
scanned once).
Oh well. Lets get rid of your loop anyways.

>
> In the scenario Squid1(port 3128 for ntml_auth) -> DG(port 8081) -->
> Squid2(port 8080 for cache)
> Clients are using the proxy on proxy_ip:3128
>
> DG access log will look like this (now user and IP are logged):
> 2009.2.4 16:01:12 rnuno 192.168.20.11
> http://imgs.sapo.pt/images/footer/pt.gif *SCANNED*  GET 804
>
> and on the Squid access log:
> 1233763558.911  0 192.168.20.11 TCP_DENIED/407 2169 GET
> http://cache02.stormap.sapo.pt/vidstore02/thumbnais/66/91/02/15666_eDQus.jpg
> - NONE/- text/html
> 1233763558.917 21 127.0.0.1 TCP_MISS/200 2860 GET
> http://cache01.stormap.sapo.pt/vidstore02/thumbnais/05/64/67/ma_swing.jpg
> - DIRECT/212.55.154.131 image/jpeg
>
> So basically this setup is working in a way that allows me to do my
> reports and use delay pools,
> but the error keeps appearing in my log. I thought that I was doing
> something wrong on the cache_peer line.
>
> 2009/02/04 16:09:15| WARNING: Forwarding loop detected for:
> Client: 127.0.0.1 http_port: 127.0.0.1:8080
> GET
> http://cache03.stormap.sapo.pt/vidstore03/thumbnais/57/ed/03/731347_L4An1.jpg
> HTTP/1.0
> Accept: */*
> Referer: http://videos.sapo.pt/
> Accept-Language: en-US
> UA-CPU: x86
> Accept-Encoding: identity,gzip,deflate
> User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR
> 1.1.4322; .NET CLR 2.0.50727)
> Host: cache03.stormap.sapo.pt
> Cookie: _swa_v=158287575757761020; _swa_uv=3752565023748371500
> Via: 1.1 squid-ntml:8080 (squid/2.7.STABLE3)
> X-Forwarded-For: 192.168.20.11
> Proxy-Authorization: Basic cm51bm86bm9wYXNzd29yZA==
> Cache-Control: max-age=259200
> X-Forwarded-For: 192.168.20.11
>
> I made some changes according to your advice but I still get the error. Do
> you have any suggestion
> on how to fix it, or maybe another way to do what I want?
>
> Below are the conf files I'm using now.
>
> Thank you once more.
>
> regards,
> -- Ricardo
>
>
>
> My changes in dansguardian.conf:
> filterip = 127.0.0.1
> filterport = 8081
> proxyip = 127.0.0.1
> proxyport = 8080
> usernameidmethodproxyauth = on

Great. DG goes (DG):8081 -> (Squid2):8080

>
> # SQUID.CONF
> #
> -
> unique_hostname squid-cache
> http_port 8080
>

This is Squid2 then?

> hierarchy_stoplist cgi-bin ?
> acl QUERY urlpath_regex cgi-bin \?
> cache deny QUERY
> acl apache rep_header Server ^Apache
> broken_vary_encoding allow apache
>
> cache_mem 1024 MB
> maximum_object_size 8096 KB
>
> cache_dir ufs /cache/squid 2 16 256
> access_log /var/log/squid/access.log squid
>
> cache_peer 127.0.0.1 parent 8081 0 no-digest no-netdb-exchange
> name=squid-cache no-query login=*:nopassword
>
> acl localhost src 127.0.0.1
> #cache_peer_access squid-cache deny localhost
>

NP: Squid2 in your setup must NOT do any peering. Remember this is the
EXIT. All access is direct to the Internet. Its one and only client is
DG.

> include /etc/squid/squid-ntml.conf

Don't include any unique stuff into both configs.
If you need usernames logged at Squid2 at all use the fakeauth helper and
LoggingOnly setup on that squid:
 http://wiki.squid-cache.org/ConfigExamples/Authenticate/LoggingOnly

>
> #Suggested default:
> refresh_pattern ^ftp:   144020% 10080
> refresh_pattern ^gopher:14400%  1440
> refresh_pattern .   0   20% 4320
>
> #Recommended minimum configuration:
> acl all src 0.0.0.0/0.0.0.0
> acl manager proto cache_object
> acl to_loc

Re: [squid-users] Forwarding loop detected issue

2009-02-04 Thread Ricardo Nuno

Hi Amos,

Thanks for your reply. I'll try to explain better what I'm trying to do here.

| You don't appear to have a:
|   Squid1->DG->Squid2 setup
|
| you do appear to have a:
|   Squid1 -> Internet or DG -> Squid1 -> Internet setup.
|
| Is there any particular reason you need to have two squid?
| The current feedback config appears to be needlessly complicated for any
| use I can think of right now for having two instances of squid running.

In the scenario DG(port 8081) --> Squid(port 3128)
Clients are using the proxy on proxy_ip:8081

Since DansGuardian can't handle NTLM auth, if I don't use 2 Squid instances then
it will show in the DG access log only the IP of the client and not the username.

DG access log will look like this (only IP is logged):
2009.2.4 15:12:01 - 192.168.20.11 
http://adimgs.sapo.pt/2009/odisseias/massagem.jpg *SCANNED*  GET 1956

and on the Squid access log it will always show localhost, since the 
connection comes from DG:
1233760323.286  8 127.0.0.1 TCP_MISS/200 1597 GET 
http://h.s.sl.pt/pub/botao.html?rand=&tile=36871 - DIRECT/213.13.146.180 
text/html

This would prevent me from doing reports on user usage and, I think, from using
delay pools.

In the scenario Squid1(port 3128 for ntml_auth) -> DG(port 8081) --> 
Squid2(port 8080 for cache)
Clients are using the proxy on proxy_ip:3128

DG access log will look like this (now user and IP are logged):
2009.2.4 16:01:12 rnuno 192.168.20.11 http://imgs.sapo.pt/images/footer/pt.gif 
*SCANNED*  GET 804

and on the Squid access log:
1233763558.911  0 192.168.20.11 TCP_DENIED/407 2169 GET 
http://cache02.stormap.sapo.pt/vidstore02/thumbnais/66/91/02/15666_eDQus.jpg - 
NONE/- text/html
1233763558.917 21 127.0.0.1 TCP_MISS/200 2860 GET 
http://cache01.stormap.sapo.pt/vidstore02/thumbnais/05/64/67/ma_swing.jpg - 
DIRECT/212.55.154.131 image/jpeg

So basically this setup is working in a way that allows me to do my reports and 
use delay pools,
but the error keeps appearing in my log. I thought that I was doing something 
wrong on the cache_peer line.

2009/02/04 16:09:15| WARNING: Forwarding loop detected for:
Client: 127.0.0.1 http_port: 127.0.0.1:8080
GET 
http://cache03.stormap.sapo.pt/vidstore03/thumbnais/57/ed/03/731347_L4An1.jpg 
HTTP/1.0
Accept: */*
Referer: http://videos.sapo.pt/
Accept-Language: en-US
UA-CPU: x86
Accept-Encoding: identity,gzip,deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 
1.1.4322; .NET CLR 2.0.50727)
Host: cache03.stormap.sapo.pt
Cookie: _swa_v=158287575757761020; _swa_uv=3752565023748371500
Via: 1.1 squid-ntml:8080 (squid/2.7.STABLE3)
X-Forwarded-For: 192.168.20.11
Proxy-Authorization: Basic cm51bm86bm9wYXNzd29yZA==
Cache-Control: max-age=259200
X-Forwarded-For: 192.168.20.11

I made some changes according to your advice but I still get the error. Do you 
have any suggestion
on how to fix it, or maybe another way to do what I want?

Below are the conf files I'm using now.

Thank you once more.

regards,
-- Ricardo



My changes in dansguardian.conf:
filterip = 127.0.0.1
filterport = 8081
proxyip = 127.0.0.1
proxyport = 8080
usernameidmethodproxyauth = on

# SQUID.CONF
# -
unique_hostname squid-cache
http_port 8080

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache

cache_mem 1024 MB
maximum_object_size 8096 KB

cache_dir ufs /cache/squid 2 16 256
access_log /var/log/squid/access.log squid

cache_peer 127.0.0.1 parent 8081 0 no-digest no-netdb-exchange name=squid-cache 
no-query login=*:nopassword

acl localhost src 127.0.0.1
#cache_peer_access squid-cache deny localhost

include /etc/squid/squid-ntml.conf

#Suggested default:
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern .   0   20% 4320

#Recommended minimum configuration:
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443  # https
acl SSL_ports port 563  # snews
acl SSL_ports port 873  # rsync
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl purge method PURGE
acl CONNECT method CONNECT

acl NTLMUsers proxy_auth REQUIRED
acl rede_interna src 192.168.20.0/24
acl h_trabalho time MTWHF 08:00-18:00
acl downloads url_regex -i .exe .mp3 .vqf .zip .rar .avi .mpeg .mpe .mpg .qt 
.ram .rm .i

Re: [squid-users] Forwarding loop detected issue

2009-02-03 Thread Amos Jeffries

Ricardo Nuno wrote:

Hi all,

I'm new to squid so bear with me. I just set up squid according to these 
instructions:
http://www.howtoforge.com/dansguardian-with-multi-group-filtering-and-squid-with-ntlm-auth-on-debian-etch-p2



Oh dear.


The setup is working, but my logs are filled with these errors for every 
connection:

2009/02/03 17:20:15| WARNING: Forwarding loop detected for:
Client: 127.0.0.1 http_port: 127.0.0.1:3128
GET internal://lis.moonlight.lan/squid-internal-periodic/store_digest HTTP/1.0
Accept: application/cache-digest
Accept: text/html
Via: 0.0 lis.moonlight.lan:3128 (squid/2.7.STABLE3)
X-Forwarded-For: unknown
Host: 127.0.0.1:8081
Authorization: Basic Kjpub3Bhc3N3b3Jk
Cache-Control: max-age=259200
Connection: Close

I know that this error is because of my cache_peer line. I've been searching the 
web for a
solution to this issue and tried to separate the configs of the 2 squid 
instances, but when
I did, the setup stopped working.


See the 'include' directive, which allows a section of squid.conf to be 
shared between two Squid instances, each with their own squid.conf.




Will this error hurt the performance of Squid, and how can I fix it without breaking the 
squid1+DG+squid2 setup? 


You don't appear to have a:
  Squid1->DG->Squid2 setup

you do appear to have a:
  Squid1 -> Internet or DG -> Squid1 -> Internet setup.

Is there any particular reason you need to have two squid?
The current feedback config appears to be needlessly complicated for any 
use I can think of right now for having two instances of squid running.




regards,
--Ricardo

Squid Cache: Version 2.7.STABLE3
DansGuardian 2.8.0.6

My dansguardian.conf changes:

filterip =
filterport = 8081
proxyip = 127.0.0.1
proxyport = 3128
usernameidmethodproxyauth = on
forwardedfor = on


Below is my squid.conf:

http_port 127.0.0.1:3128 transparent


So what does your NAT table contain?
'transparent' does not fit with dansguardian being explicitly configured 
to pass back to the proxy on that port.


NP: if you also follow the transparent intercept recommendations, passing 
stuff directly to DansGuardian, you end up opening a backdoor channel, 
turning your box into a two-stage open proxy with partial anonymization.
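A sketch of the port split this implies (the commented second line is only needed if interception is genuinely wanted, and must be matched by an iptables REDIRECT rule; the 192.168.20.1 address is an assumption based on the LAN range used elsewhere in this thread):

```
# squid.conf -- keep the roles on separate ports (illustrative)
http_port 127.0.0.1:3128                    # plain forward-proxy port the clients/DG talk to
# http_port 192.168.20.1:3129 transparent   # interception only, if actually used
```

Since DG is explicitly configured to forward to this port, the 'transparent' flag on 3128 is what should go.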




http_port 8080

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache

cache_mem 1024 MB
maximum_object_size 8096 KB

cache_dir ufs /cache/squid 2 16 256
access_log /var/log/squid/access.log squid

cache_peer 127.0.0.1 parent 8081 0 no-query login=*:nopassword


You are missing "no-digest no-netdb-exchange name=uniqPeer"

And also:
  acl localhost src 127.0.0.1
  cache_peer_access uniqPeer deny localhost

maybe also:
  acl interceptPort myport 3128
  cache_peer_access uniqPeer deny interceptPort
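Put together, the peer section those suggestions describe might read as follows (the peer name is illustrative, matching the placeholder used above):

```
# squid.conf -- cache_peer towards DG on 8081, with loop guards (sketch)
cache_peer 127.0.0.1 parent 8081 0 no-query no-digest no-netdb-exchange name=uniqPeer login=*:nopassword

acl localhost src 127.0.0.1
cache_peer_access uniqPeer deny localhost     # don't bounce localhost-originated requests back to the peer

acl interceptPort myport 3128
cache_peer_access uniqPeer deny interceptPort # nothing arriving on the intercept port goes to the peer
```

Both cache_peer_access denies exist to break the 127.0.0.1 -> peer -> 127.0.0.1 cycle visible in the log above.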



auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 15
auth_param ntlm keep_alive on

auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Proxy Server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern .   0   20% 4320

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443  # https
acl SSL_ports port 563  # snews
acl SSL_ports port 873  # rsync
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl purge method PURGE
acl CONNECT method CONNECT

acl NTLMUsers proxy_auth REQUIRED
acl rede_interna src 192.168.20.0/24
acl h_trabalho time MTWHF 08:00-18:00
acl downloads url_regex -i .exe .mp3 .vqf .zip .rar .avi .mpeg .mpe .mpg .qt 
.ram .rm .iso .raw .wav .mov .iso

http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_access allow localhost
http_access allow NTLMUsers

http_access deny all
http_reply_access allow all
icp_access allow all

forwarded_for off


Turning off one of the features which detect breakage loops and request 
tracing.




cache_effective_user proxy



cache_effective_group proxy


Breaking winbind privileges.
http://wiki.squid-cache.org/ConfigExamples/WindowsAuthenticationNTLM

[squid-users] Forwarding loop detected issue

2009-02-03 Thread Ricardo Nuno

Hi all,

I'm new to squid so bear with me. I just set up squid according to these 
instructions:
http://www.howtoforge.com/dansguardian-with-multi-group-filtering-and-squid-with-ntlm-auth-on-debian-etch-p2

The setup is working, but my logs are filled with these errors for every 
connection:

2009/02/03 17:20:15| WARNING: Forwarding loop detected for:
Client: 127.0.0.1 http_port: 127.0.0.1:3128
GET internal://lis.moonlight.lan/squid-internal-periodic/store_digest HTTP/1.0
Accept: application/cache-digest
Accept: text/html
Via: 0.0 lis.moonlight.lan:3128 (squid/2.7.STABLE3)
X-Forwarded-For: unknown
Host: 127.0.0.1:8081
Authorization: Basic Kjpub3Bhc3N3b3Jk
Cache-Control: max-age=259200
Connection: Close

I know that this error is because of my cache_peer line. I've been searching the 
web for a
solution to this issue and tried to separate the configs of the 2 squid 
instances, but when
I did, the setup stopped working.

Will this error hurt the performance of Squid, and how can I fix it without 
breaking the 
squid1+DG+squid2 setup? 

regards,
--Ricardo

Squid Cache: Version 2.7.STABLE3
DansGuardian 2.8.0.6

My dansguardian.conf changes:

filterip =
filterport = 8081
proxyip = 127.0.0.1
proxyport = 3128
usernameidmethodproxyauth = on
forwardedfor = on


Below is my squid.conf:

http_port 127.0.0.1:3128 transparent
http_port 8080

hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache

cache_mem 1024 MB
maximum_object_size 8096 KB

cache_dir ufs /cache/squid 2 16 256
access_log /var/log/squid/access.log squid

cache_peer 127.0.0.1 parent 8081 0 no-query login=*:nopassword

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 15
auth_param ntlm keep_alive on

auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 5
auth_param basic realm Proxy Server
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off

refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern .   0   20% 4320

acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443  # https
acl SSL_ports port 563  # snews
acl SSL_ports port 873  # rsync
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl purge method PURGE
acl CONNECT method CONNECT

acl NTLMUsers proxy_auth REQUIRED
acl rede_interna src 192.168.20.0/24
acl h_trabalho time MTWHF 08:00-18:00
acl downloads url_regex -i .exe .mp3 .vqf .zip .rar .avi .mpeg .mpe .mpg .qt 
.ram .rm .iso .raw .wav .mov .iso

http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

http_access allow localhost
http_access allow NTLMUsers

http_access deny all
http_reply_access allow all
icp_access allow all

forwarded_for off

cache_effective_user proxy
cache_effective_group proxy

coredump_dir /var/spool/squid


Re: [squid-users] Forwarding loop detected for .. help

2008-10-09 Thread Henrik Nordstrom
On Thu, 2008-10-09 at 10:09 +0200, Gregory Machin wrote:
> Hi
> What causes this?

Most likely a broken dyndns client configured to use the proxy, combined
with the same port being used both for forward proxy and transparent
interception.
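A minimal sketch of that separation for a squid 3.0-era config (port numbers invented for illustration):

```
# squid.conf -- never share one port between forward proxying and interception
http_port 3128                 # explicit forward proxy for configured clients (e.g. the dyndns box)
http_port 3129 transparent     # intercepted port-80 traffic only, sent here by an iptables REDIRECT
```

With the roles split, a misconfigured client hitting the wrong port fails cleanly instead of looping.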


> 2008/10/05 05:27:47| WARNING: Forwarding loop detected for:
> GET 
> /nic/update?&hostname=za1fwl01.dnsalias.com&myip=196.22.217.98&wildcard=NOCHG&mx=NOCHG&backmx=NOCHG
> HTTP/1.0
> User-Agent: Fortinet_DDNSC/1.200310271130
> Host: 66.*.*.133:3128
> Via: 1.0 cache.mycache.co.za (squid/3.0.STABLE6), 1.0
> cache.mycache.co.za (squid/3.0.STABLE6) [... the same hop repeated
> dozens of times ...]

> How do I prevent it?

Use miss_access to deny forwarding requests to the proxy itself.

acl to_myself dst ip.of.proxy 127.0.0.1 [and any other ips the proxy
listens on]

miss_access deny to_myself

Regards
Henrik




[squid-users] Forwarding loop detected for .. help

2008-10-09 Thread Gregory Machin
Hi
What causes this?

2008/10/05 05:27:47| WARNING: Forwarding loop detected for:
GET 
/nic/update?&hostname=za1fwl01.dnsalias.com&myip=196.22.217.98&wildcard=NOCHG&mx=NOCHG&backmx=NOCHG
HTTP/1.0
Authorization: Basic c3ludGhlc2V1OmRvd251bmRlcg==
User-Agent: Fortinet_DDNSC/1.200310271130
Host: 66.*.*.133:3128
Via: 1.0 cache.mycache.co.za (squid/3.0.STABLE6), 1.0
cache.mycache.co.za (squid/3.0.STABLE6), [... the same hop repeated
roughly 50 more times ...], 1.0 cache.mycache.co.za (squid/3.0.STABLE6)
X-Forwarded-For: 66.8.89.82, 66.*.*.150 [... repeated roughly 60 more
times ...]
Cache-Control: max-age=0
Connection: keep-alive


How do I prevent it?


Re: [squid-users] Forwarding loop detected.

2007-06-09 Thread Henrik Nordstrom
On Thu 2007-06-07 at 11:22 -0800, Chris Robertson wrote:

> Interesting.  Using the originserver option to cache_peer seems to 
> prevent some of the obvious avenues.

No, it's using one of the accelerator options on http_port which changes
the security profile, requiring the use of a cache_peer.

Regards
Henrik




Re: [squid-users] Forwarding loop detected.

2007-06-07 Thread Chris Robertson

Suhaib Ahmad wrote:

Hello,

I've squid2.6 STABLE running as web-accelerator, on 'image' (having
ip:67.107.145.109) machine with parent configured as 192.168.7.1.
'image' machine is also the nameserver having 'hosts' file entry:

127.0.0.1   localhost.localdomain   localhost

Squid sometimes stops working, throwing a 'Forwarding loop
detected' warning in cache.log. Can anyone suggest a remedy?
Thanks.

 squid.conf 
http_port 80 transparent


http://wiki.squid-cache.org/SquidFaq/ReverseProxy#head-7fa129a6528d9a5c914f8dd5671668173e39e341


cache_peer 192.168.7.1 parent 81 0 no-query originserver weight=1
http_access allow all


Asking for abuse.

Interesting.  Using the originserver option to cache_peer seems to 
prevent some of the obvious avenues.  But you should still at least 
prevent CONNECT requests to ports other than 443, and any requests to 
ports other than those labeled "Safe" in the default squid.conf.
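Those guards look roughly like the default-config rules below (port lists trimmed here for brevity; the final allow should be scoped to the accelerated sites, not left as 'allow all'):

```
# squid.conf -- minimal port hygiene for an accelerator (illustrative subset)
acl SSL_ports port 443
acl Safe_ports port 80 443
acl CONNECT method CONNECT

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
# ...followed by allows scoped to the accelerated domains,
# instead of 'http_access allow all'
```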



acl all src 0.0.0.0/0.0.0.0
icp_access allow all




SNIP

Yup.  Looks like a forwarding loop.  Set up your accelerator properly, 
and I imagine this will be resolved.


Regards,
Suhaib


Chris


[squid-users] Forwarding loop detected.

2007-06-06 Thread Suhaib Ahmad

Hello,

I've squid2.6 STABLE running as web-accelerator, on 'image' (having
ip:67.107.145.109) machine with parent configured as 192.168.7.1.
'image' machine is also the nameserver having 'hosts' file entry:

127.0.0.1   localhost.localdomain   localhost

Squid sometimes stops working, throwing a 'Forwarding loop
detected' warning in cache.log. Can anyone suggest a remedy?
Thanks.

 squid.conf 
http_port 80 transparent
cache_peer 192.168.7.1 parent 81 0 no-query originserver weight=1
http_access allow all
acl all src 0.0.0.0/0.0.0.0
icp_access allow all

 cache.log 
2007/06/05 20:42:35| WARNING: Forwarding loop detected for:
Client: 67.107.145.109 http_port: 67.107.145.109:80
GET 
http://image.bridgemailsystem.com/pms/graphics/6.05.07directresponse2r1(650x90).gif
HTTP/1.0
If-Modified-Since: Tue, 05 Jun 2007 15:15:59 GMT
If-None-Match: "19577-1181056559000"
Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=
User-Agent: www.clamav.net
Host: image.bridgemailsystem.com
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
Via: 1.1 localhost.localdomain:80 (squid/2.6.STABLE12), 1.0
localhost.localdomain:80 (squid/2.6.STABLE12), 1.0
localhost.localdomain:80 (squid/2.6.STABLE12), 1.0
localhost.localdomain:80 (squid/2.6.STABLE12), 1.0
localhost.localdomain:80 (squid/2.6.STABLE12), 1.0
localhost.localdomain:80 (squid/2.6.STABLE12), 1.0
localhost.localdomain:80 (squid/2.6.STABLE12), 1.0
localhost.localdomain:80 (squid/2.6.STABLE12), 1.0
localhost.localdomain:80 (squid/2.6.STABLE12), 1.0
localhost.localdomain:80 (squid/2.6.STABLE12)

X-Forwarded-For: 24.164.28.34, 67.107.145.109, 67.107.145.109,
67.107.145.109, 67.107.145.109, 67.107.145.109, 67.107.145.109,
67.107.145.109,
67.107.145.109, 67.107.145.109

Cache-Control: max-age=259200
Connection: keep-alive
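The repeated address in the X-Forwarded-For trail above is the loop's fingerprint. A small Python sketch (purely illustrative, not how Squid works internally) that flags such repeats:

```python
# Illustrative helper (not part of Squid): a forwarding loop leaves the
# same proxy address repeated in the X-Forwarded-For hop list.
from collections import Counter

def xff_loop_suspects(xff_header, threshold=2):
    """Return {address: count} for addresses seen `threshold`+ times."""
    hops = [h.strip() for h in xff_header.split(",") if h.strip()]
    return {addr: n for addr, n in Counter(hops).items() if n >= threshold}

chain = ("24.164.28.34, 67.107.145.109, 67.107.145.109, "
         "67.107.145.109, 67.107.145.109")
print(xff_loop_suspects(chain))  # {'67.107.145.109': 4}
```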


Regards,
Suhaib


Re: [squid-users] Forwarding loop

2007-01-12 Thread John Halfpenny

Presumably, I used the OpenSuse 10.2 package of Squid. How can I check?



 --- On Thu 01/11, Henrik Nordstrom < [EMAIL PROTECTED] > wrote:
From: Henrik Nordstrom [mailto: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org
Date: Fri, 12 Jan 2007 05:12:18 +0100
Subject: Re: [squid-users] Forwarding loop



On Thu 2007-01-11 at 10:44 -0500, John Halfpenny wrote:

> 2007/01/11 15:34:57| WARNING: Forwarding loop detected for:
> Client: 127.0.0.1 http_port: 127.0.0.1:3128
> GET http://localhost/squid-internal-dynamic/netdb HTTP/1.0
> Host: localhost:8081
> Via: 1.0 SquidA:8080 (squid/2.6.STABLE5), 1.0 SquidC:3128 (squid/2.6.STABLE5)
> X-Forwarded-For: unknown, 127.0.0.1
> Cache-Control: max-age=259200

Hmmm... are all of these /squid-internal-dynamic/netdb?

Regards
Henrik

Attachment: signature.asc (0.31KB)

___
Join Excite! - http://www.excite.com
The most personalized portal on the Web!




Re: [squid-users] Forwarding loop

2007-01-11 Thread Henrik Nordstrom
On Thu 2007-01-11 at 10:44 -0500, John Halfpenny wrote:

> 2007/01/11 15:34:57| WARNING: Forwarding loop detected for:
> Client: 127.0.0.1 http_port: 127.0.0.1:3128
> GET http://localhost/squid-internal-dynamic/netdb HTTP/1.0
> Host: localhost:8081
> Via: 1.0 SquidA:8080 (squid/2.6.STABLE5), 1.0 SquidC:3128 (squid/2.6.STABLE5)
> X-Forwarded-For: unknown, 127.0.0.1
> Cache-Control: max-age=259200


Hmmm... are all of these /squid-internal-dynamic/netdb?


Regards
Henrik


signature.asc
Description: This is a digitally signed message part


Re: [squid-users] Forwarding loop

2007-01-11 Thread John Halfpenny

Hi Henrik,



Checked and they are different. I'm getting this in the log



2007/01/11 15:34:57| WARNING: Forwarding loop detected for:

Client: 127.0.0.1 http_port: 127.0.0.1:3128

GET http://localhost/squid-internal-dynamic/netdb HTTP/1.0

Host: localhost:8081

Via: 1.0 SquidA:8080 (squid/2.6.STABLE5), 1.0 SquidC:3128 (squid/2.6.STABLE5)

X-Forwarded-For: unknown, 127.0.0.1

Cache-Control: max-age=259200

Connection: Close



2007/01/11 15:35:02| WARNING: Forwarding loop detected for:

Client: 127.0.0.1 http_port: 127.0.0.1:3129

GET http://localhost/squid-internal-dynamic/netdb HTTP/1.0

Host: localhost:9091

Via: 1.0 SquidB:9090 (squid/2.6.STABLE5), 1.0 SquidC:3128 (squid/2.6.STABLE5)

X-Forwarded-For: unknown, 127.0.0.1

Cache-Control: max-age=259200

Connection: Close



Something odd is going on, as it mentions SquidC:3128 in both reports. The 
bottom report should connect to C on port 3129, or so I would have thought... the 
two dansguardian instances reflect the correct port numbers in their respective 
config files.



Any thoughts?



John



 --- On Thu 01/11, Henrik Nordstrom < [EMAIL PROTECTED] > wrote:
From: Henrik Nordstrom [mailto: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: squid-users@squid-cache.org
Date: Thu, 11 Jan 2007 15:22:05 +0100
Subject: Re: [squid-users] Forwarding loop



On Thu 2007-01-11 at 05:40 -0500, John Halfpenny wrote:

> The system is reporting routing loops on instance C at port 3128 via
> 8080 and 9090. Why is this?

Triple-check your visible_hostname settings. C MUST be different from A
and B.

Regards
Henrik

Attachment: signature.asc (0.31KB)





Re: [squid-users] Forwarding loop

2007-01-11 Thread Henrik Nordstrom
tor 2007-01-11 klockan 05:40 -0500 skrev John Halfpenny:

> The system is reporting routing loops on instance C at port 3128 via 8080 and 
> 9090. Why is this?

Triple-check your visible_hostname settings. C MUST be different from A
and B.

Regards
Henrik






[squid-users] Forwarding loop

2007-01-11 Thread John Halfpenny

Hello Everyone,



I seem to have a routing loop on my server. It is running three instances of 
Squid (shown below) on different ports with different visible & unique 
hostnames.



A - Squid instance 1 (listening on port 8080)

B - Squid instance 2 (listening on port 9090)

C - Squid instance 3 (listening on ports 3128 and 3129)

x - Dansguardian 1   (listening on port 8081)

y - Dansguardian 2   (listening on port 9091)



-- 8080 -- A -- 8081 -- \

 x -- 3128 -- C

 y -- 3129 -- C

-- 9090 -- B -- 9091 -- /



The system is reporting routing loops on instance C at port 3128 via 8080 and 
9090. Why is this? Do I need a separate instance D on which to host my second 
dansguardian instance?



Apologies for the shonky diagram and thanks for any help :-)



John





Re: [squid-users] forwarding loop in interception caching

2006-11-19 Thread EPSharma

Hi squid users!

I have configured a hierarchical cache setup with one parent and several
children. All the child proxies run in transparent mode using WCCP
version 1. The cache is running slower than before and not functioning
properly. When I check a child cache, it reports this error in cache.log:

2006/11/19 19:25:32| WARNING: Forwarding loop detected for:
GET /download/7/C/2/7C239FA9-FB4C-4768-B34F-31C6A17947D8/au_all.cab HTTP/1.0
Accept: */*
Accept-Encoding: gzip, deflate

I am running 2.6 STABLE4 on the parent and 2.5 STABLE9 on the children,
and use iptables to redirect port 80 to 3128.

I have read the FAQ and set the option redirect_rewrites_host_header to
off, but the problem remains.

Any help will be highly appreciated,

Thank you in advance,

Regards,
EpSharma



Re: [squid-users] forwarding loop in interception caching

2006-11-04 Thread Henrik Nordstrom
Please keep replies on the list.


On Sat 2006-11-04 at 10:22 +0200, genco yilmaz wrote:

>   Actually I was aware of it :) but it didn't occur to me that it
> could cause such a thing.  (a bit of a lack of experience)

Redirectors do not cause loops if used correctly, but not being aware
of them makes the loop easy to misunderstand.

> I know that visible_hostname is unique in all of the proxy servers
> and we don't have any cache_peer lines.

Ok.

> For interception in iptables we have;
> 
> iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT
> --to-port 8080

Ok.

> I am not sure but it seems that loop is due to the messages sending
> back and forth between redirector and squid but I cannot find how it
> is happening.

Requests are not sent between the redirector and Squid. Squid only asks
the redirector about the URL and the redirector returns a new URL that
Squid should use instead.

A loop occurs if the URL after redirection points back to Squid.

A loop also occurs in interception mode if the request initiated by
Squid is intercepted again.
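The usual safeguard against that second case (a sketch; the "proxy" user, the interface, and the ports are assumptions to adapt) is to exempt traffic generated by the Squid process itself before the redirect rule, so that if its packets ever traverse the same box they are not intercepted again:

```sh
# Never redirect traffic originated by the squid process itself
# (assumes Squid runs as system user "proxy").
iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner proxy -j ACCEPT
# Intercept everything else arriving from the clients.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
```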

So far there is nothing obvious about why you are seeing loops, but what does
access.log say?

Also which Squid version are you using? Interception mode is somewhat
broken in 2.6.STABLE1 causing "failed to select source" errors.

Regards
Henrik




Re: [squid-users] forwarding loop in interception caching

2006-11-03 Thread Henrik Nordstrom
On Wed 2006-11-01 at 16:28 +0200, genco yilmaz wrote:

>   After looking into my configuration I found that the header is
> caused by our redirector process. Then I have added this;
> redirect_rewrites_host_header off

Ok. I wasn't aware you were using a redirector.

Do you have any cache_peer lines in squid.conf?

What does your interception rule look like?

Regards
Henrik




Re: [squid-users] forwarding loop in interception caching

2006-11-01 Thread genco yilmaz

On 11/1/06, genco yilmaz <[EMAIL PROTECTED]> wrote:

On 11/1/06, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> On Tue 2006-10-31 at 18:40 +0200, genco yilmaz wrote:
>
> > redirects all the http requests into 8080. port in which squid is
> > listening. I dont understand why intercepted requests are reflected
> > back into squid and I get the following forwarding loop?
>
> It's not. It's a bad request.
>
> According to the request headers the request was for
> http://127.0.0.1:1457/button.php?u=david3s
>
> Regards
> Henrik
>

Thanks for your reply. This machine carries 20Mbit/s of HTTP traffic,
and I see lots of loop messages in cache.log. If these requests were
rare I would ignore them, but there are many of them. Do you have any
idea why these bad requests are generated?
   I am trying to eliminate them because I suspect they cause extra
load on the server, which already has over 3000 concurrent connections.

Regards.




Hi,
 After looking into my configuration I found that the header is
caused by our redirector process. I then added this:
redirect_rewrites_host_header off

and the "Host:" field now shows the original URL, but squid still warns
me about this forwarding loop.
   I have read the transparent-configuration documents for squid and
iptables, but I can't see any special configuration needed when using a
redirector.  I keep reading and reading,
but still no result.  Do you have any suggestions?

Regards.


Re: [squid-users] forwarding loop in interception caching

2006-10-31 Thread genco yilmaz

On 11/1/06, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:

On Tue 2006-10-31 at 18:40 +0200, genco yilmaz wrote:

> redirects all the http requests into 8080. port in which squid is
> listening. I dont understand why intercepted requests are reflected
> back into squid and I get the following forwarding loop?

It's not. It's a bad request.

According to the request headers the request was for
http://127.0.0.1:1457/button.php?u=david3s

Regards
Henrik



Thanks for your reply. This machine carries 20Mbit/s of HTTP traffic,
and I see lots of loop messages in cache.log. If these requests were
rare I would ignore them, but there are many of them. Do you have any
idea why these bad requests are generated?
  I am trying to eliminate them because I suspect they cause extra
load on the server, which already has over 3000 concurrent connections.

Regards.


--
Linux Forumu
http://www.linuxforumu.net


Re: [squid-users] forwarding loop in interception caching

2006-10-31 Thread Henrik Nordstrom
On Tue 2006-10-31 at 18:40 +0200, genco yilmaz wrote:

> redirects all the http requests into 8080. port in which squid is
> listening. I dont understand why intercepted requests are reflected
> back into squid and I get the following forwarding loop?

It's not. It's a bad request.

According to the request headers the request was for
http://127.0.0.1:1457/button.php?u=david3s

Regards
Henrik




[squid-users] forwarding loop in interception caching

2006-10-31 Thread genco yilmaz

Hi,
I have looked at the squid FAQ, and searched and debugged many times
before asking this question.
I am getting the following forwarding loop warning in my squid log
file. I have read Henrik Nordstrom's reply to the same problem on the
mailing list, saying:
"Are you doing interception caching? If so, you must make sure that
the HTTP requests initiated by Squid is not reflected back on himself
by the interception"

A load balancer redirects HTTP requests to my squid proxy, and an
iptables rule like the following

iptables -t nat -A PREROUTING -i eth0  -p tcp --dport 80 -j REDIRECT
--to-port 8080

redirects all HTTP requests to port 8080, on which squid is listening.
I don't understand why intercepted requests are reflected back into
squid, producing the forwarding loop below.
   Don't requests initiated by squid go through the OUTPUT chain of the
filter table? If not, where am I going wrong conceptually?

Thanks a lot.

Here is the warning:

2006/10/31 18:18:38| WARNING: Forwarding loop detected for:
GET /button.php?u=david3s HTTP/1.0
Accept: */*
Referer: http://www.xxx.com/xyz/oytoplist.html
Accept-Language: en
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1;
.NET CLR 2.0.50727)
Host: 127.0.0.1:1457
Via: 1.1 proxy1-24:8080 (squid/2.5.STABLE6)
X-Forwarded-For: 192.168.1.10
Cache-Control: max-age=259200
Connection: keep-alive


Re: [squid-users] Forwarding loop?

2006-09-14 Thread Henrik Nordstrom
On Thu, 2006-09-14 at 11:46 +0200, Ralf Hildebrandt wrote:
> I solved that by explicitly telling my clients to route the requests
> for the icons via the same proxy chain, and the problem is gone 

Yes.. clients must know how to request http://<visible_hostname>/ where
visible_hostname is that of the proxy generating the FTP listing..

So you must make sure visible_hostname is something the clients will be
able to reach somehow (directly, or via proxies).

Regards
Henrik



Re: [squid-users] Forwarding loop?

2006-09-14 Thread Henrik Nordstrom
On Thu, 2006-09-14 at 10:26 +0200, Ralf Hildebrandt wrote:

> The proxy-chain looks like this:
> intranet -> proxy-cbf-1.charite.de -> DansGuardian -> 
> proxy-cbf-1-nocache.charite.de -> Internet
> 
> is there any way of making the INNERMOST Squid generate the FTP listing?

Yes, just configure it to not send ftp to DansGuardian...

Regarding the icons.. having visible_hostname equal on the two should
work (unique_hostname must be unique). So should enabling the
global_internal_static directive.. (default on).

Regards
Henrik



Re: [squid-users] Forwarding loop?

2006-09-14 Thread Ralf Hildebrandt
* Ralf Hildebrandt <[EMAIL PROTECTED]>:

> The problem also (to some extent) occurs with Squid-generated FTP listings.
> Have a look:
> http://www.stahl.bau.tu-bs.de/~hildeb/broken_icons.png
> 
> You can see that the page was generated by
> proxy-cbf-1-nocache.charite.de
> 
> The proxy-chain looks like this:
> intranet -> proxy-cbf-1.charite.de -> DansGuardian -> 
> proxy-cbf-1-nocache.charite.de -> Internet
> 
> is there any way of making the INNERMOST Squid generate the FTP listing?

I solved that by explicitly telling my clients to route the requests
for the icons via the same proxy chain, and the problem is gone 

-- 
Ralf Hildebrandt (i.A. des IT-Zentrums) [EMAIL PROTECTED]
Charite - Universitätsmedizin BerlinTel.  +49 (0)30-450 570-155
Gemeinsame Einrichtung von FU- und HU-BerlinFax.  +49 (0)30-450 570-962
IT-Zentrum Standort CBF send no mail to [EMAIL PROTECTED]


Re: [squid-users] Forwarding loop?

2006-09-14 Thread Ralf Hildebrandt
* Henrik Nordstrom <[EMAIL PROTECTED]>:

> > How can I prevent the internal stuff from being forwarded to the
> > parent_proxy?
> 
> If it gets forwarded at all then Squid didn't recognise the URL as
> belonging to him.. probably you did not use the correct hostname in the
> requested URL, it needs to use visible_hostname (or none at all).

The problem also (to some extent) occurs with Squid-generated FTP listings.
Have a look:
http://www.stahl.bau.tu-bs.de/~hildeb/broken_icons.png

You can see that the page was generated by
proxy-cbf-1-nocache.charite.de

The proxy-chain looks like this:
intranet -> proxy-cbf-1.charite.de -> DansGuardian -> 
proxy-cbf-1-nocache.charite.de -> Internet

is there any way of making the INNERMOST Squid generate the FTP listing?

-- 
Ralf Hildebrandt (i.A. des IT-Zentrums) [EMAIL PROTECTED]
Charite - Universitätsmedizin BerlinTel.  +49 (0)30-450 570-155
Gemeinsame Einrichtung von FU- und HU-BerlinFax.  +49 (0)30-450 570-962
IT-Zentrum Standort CBF send no mail to [EMAIL PROTECTED]


Re: [squid-users] Forwarding loop?

2006-09-10 Thread Henrik Nordstrom
On Sat 2006-09-02 at 13:45 +0200, Ralf Hildebrandt wrote:

> How can I prevent the internal stuff from being forwarded to the
> parent_proxy?

If it gets forwarded at all then Squid didn't recognise the URL as
belonging to him.. probably you did not use the correct hostname in the
requested URL, it needs to use visible_hostname (or none at all).

Regards
Henrik




Re: [squid-users] Forwarding loop?

2006-09-02 Thread Ralf Hildebrandt
* Visolve Squid <[EMAIL PROTECTED]>:

> >How can I prevent the internal stuff from being forwarded to the
> >parent_proxy?
 
> A forwarding loop is when a request passes through one proxy more than 
> once. You can get a forwarding loop if
> 
>* a cache forwards requests to itself. This might happen with
>  interception caching (or server acceleration) configurations.
>* a pair or group of caches forward requests to each other. This can
>  happen when Squid uses ICP, Cache Digests, or the ICMP RTT
>  database to select a next-hop cache.

I know, back to my question:

How can I prevent the internal stuff from being forwarded to the
parent_proxy?

-- 
Ralf Hildebrandt (i.A. des IT-Zentrums) [EMAIL PROTECTED]
Charite - Universitätsmedizin BerlinTel.  +49 (0)30-450 570-155
Gemeinsame Einrichtung von FU- und HU-BerlinFax.  +49 (0)30-450 570-962
IT-Zentrum Standort CBF send no mail to [EMAIL PROTECTED]


Re: [squid-users] Forwarding loop?

2006-09-02 Thread Visolve Squid

Ralf Hildebrandt wrote:


We're using a

intranet -> squid -> Dansguardian -> squid -> Internet
setup to filter the traffic for viruses

This must be the cause for this warning:

Aug 27 23:18:46 proxy-cvk-2 squid[27921]: WARNING: Forwarding loop detected for:
 Client: 127.0.0.1 http_port: 127.0.0.1:
 GET http://127.0.0.1/squid-internal-periodic/store_digest HTTP/1.0
 Accept: application/cache-digest
 Accept: text/html
 Host: 127.0.0.1:3129
 Via: 0.0 wlan-proxy.charite.de:3128 (squid/2.6.STABLE3), 1.0 proxy-cvk-2-nocache.charite.de: (squid/2.6.STABLE3)
 X-Forwarded-For: unknown, unknown, 127.0.0.1
 Cache-Control: max-age=259200
 Connection: keep-alive
 X-Forwarded-For: unknown, unknown, 127.0.0.1
Aug 27 23:18:46 proxy-cvk-2 squid[27916]: temporary disabling (Not Found) digest from 127.0.0.1

How can I prevent the internal stuff from being forwarded to the
parent_proxy?


Hello Hildebrandt,

A forwarding loop is when a request passes through one proxy more than 
once. You can get a forwarding loop if


   * a cache forwards requests to itself. This might happen with
 interception caching (or server acceleration) configurations.
   * a pair or group of caches forward requests to each other. This can
 happen when Squid uses ICP, Cache Digests, or the ICMP RTT
 database to select a next-hop cache.
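Both cases boil down to a proxy meeting a request it has already handled. Squid's loop detection relies on the Via header: before forwarding, it checks whether its own (unique) hostname already appears among the recorded hops. A rough, simplified Python sketch of that idea (our illustration, not Squid's actual code):

```python
# Simplified illustration of Via-based loop detection (not Squid's code).
def is_forwarding_loop(via_header, my_hostname):
    """True if my_hostname already appears as a hop in the Via header."""
    for hop in via_header.split(","):
        parts = hop.split()  # e.g. "1.0 SquidC:3128 (squid/2.6.STABLE5)"
        if len(parts) >= 2 and parts[1].split(":")[0] == my_hostname:
            return True
    return False

via = "1.0 SquidA:8080 (squid/2.6.STABLE5), 1.0 SquidC:3128 (squid/2.6.STABLE5)"
print(is_forwarding_loop(via, "SquidC"))  # True
```

This is also why clustered caches sharing one visible name must carry distinct unique hostnames: with identical names, every peer hop looks like a self-hop.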

Thanks,
Visolve Squid Team
www.visolve.com/squid/


[squid-users] Forwarding loop?

2006-08-29 Thread Ralf Hildebrandt
We're using a

intranet -> squid -> Dansguardian -> squid -> Internet
setup to filter the traffic for viruses

This must be the cause for this warning:

Aug 27 23:18:46 proxy-cvk-2 squid[27921]: WARNING: Forwarding loop detected for:
 Client: 127.0.0.1 http_port: 127.0.0.1:
 GET http://127.0.0.1/squid-internal-periodic/store_digest HTTP/1.0
 Accept: application/cache-digest
 Accept: text/html
 Host: 127.0.0.1:3129
 Via: 0.0 wlan-proxy.charite.de:3128 (squid/2.6.STABLE3), 1.0 proxy-cvk-2-nocache.charite.de: (squid/2.6.STABLE3)
 X-Forwarded-For: unknown, unknown, 127.0.0.1
 Cache-Control: max-age=259200
 Connection: keep-alive
 X-Forwarded-For: unknown, unknown, 127.0.0.1
Aug 27 23:18:46 proxy-cvk-2 squid[27916]: temporary disabling (Not Found) digest from 127.0.0.1

How can I prevent the internal stuff from being forwarded to the
parent_proxy?

-- 
Ralf Hildebrandt (i.A. des IT-Zentrums) [EMAIL PROTECTED]
Charite - Universitätsmedizin BerlinTel.  +49 (0)30-450 570-155
Gemeinsame Einrichtung von FU- und HU-BerlinFax.  +49 (0)30-450 570-962
IT-Zentrum Standort CBF send no mail to [EMAIL PROTECTED]


Re: [squid-users] Forwarding loop after rebooting.

2006-04-24 Thread Mark Stevens
Many thanks for the replies.

I have blocked http_access to all except  child squids to prevent exploitation.

I'm still a tad confused as to why this problem only happens when the
master proxy is down for a short period.

Maybe the negative hits were causing it to redirect to itself, and
then requests were denied when the child squids expected the proxy to
act as a proxy and not just an accelerator.

An interesting 'gotcha' considering the setup has been running fine
for about 8 months.


Thanks again!


On 24/04/06, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> On Sun 2006-04-23 at 23:48 +0100, Mark Stevens wrote:
>
>
> > 2006/04/23 23:24:23| clientAccessCheck: proxy request denied in accel_only 
> > mode
>
> This is important...  your Squid is used as a peer proxy, but your
> configuration does not allow this Squid to be used as a proxy (only
> accelerator).
>
> > Access log extract:
> >
> > 10.1.1.3 - - [23/Apr/2006:23:24:23 +0100] "GET
> > http://myurl.mydomain.com/myfolder1/ HTTP/1.0" 403 1401
> > TCP_DENIED:NONE
> > 10.1.1.3 - - [23/Apr/2006:23:24:23 +0100] "GET
> > http://myurl.mydomain.com/myfolder1/ HTTP/1.0" 403 1427
> > TCP_MISS:FIRST_UP_PARENT
>
> Looks to me like your Squid uses itself as parent.
>
> What cache_peer statements do you have? Do any of these points back to
> yourself either directly or indirectly via cache_peer statements at that
> peer?
>
>
> Related note: If you have multiple Squids clustered by the same visible
> name, make sure each have a unique unique_hostname set.
>
> Regards
> Henrik
>
>


Re: [squid-users] Forwarding loop after rebooting.

2006-04-23 Thread Henrik Nordstrom
On Sun 2006-04-23 at 23:48 +0100, Mark Stevens wrote:


> 2006/04/23 23:24:23| clientAccessCheck: proxy request denied in accel_only 
> mode

This is important...  your Squid is used as a peer proxy, but your
configuration does not allow this Squid to be used as a proxy (only
accelerator).

> Access log extract:
> 
> 10.1.1.3 - - [23/Apr/2006:23:24:23 +0100] "GET
> http://myurl.mydomain.com/myfolder1/ HTTP/1.0" 403 1401
> TCP_DENIED:NONE
> 10.1.1.3 - - [23/Apr/2006:23:24:23 +0100] "GET
> http://myurl.mydomain.com/myfolder1/ HTTP/1.0" 403 1427
> TCP_MISS:FIRST_UP_PARENT

Looks to me like your Squid uses itself as parent.

What cache_peer statements do you have? Do any of these point back to
yourself, either directly or indirectly via cache_peer statements at that
peer?


Related note: If you have multiple Squids clustered by the same visible
name, make sure each has a unique unique_hostname set.
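For example (all hostnames here are made up), two clustered caches sharing one public name could be configured as:

```conf
# squid.conf on cache 1
visible_hostname proxy.example.com
unique_hostname  proxy1.example.com

# squid.conf on cache 2
visible_hostname proxy.example.com
unique_hostname  proxy2.example.com
```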

Regards
Henrik




Re: [squid-users] Forwarding loop after rebooting.

2006-04-23 Thread Mark Stevens
Hello again,

I've managed to replicate the error in a development environment.

My dev setup is two squids accelerating a master squid, which in turn
accelerates a webserver.

The two child squids are behind a load balancer.

To reproduce the problem, I shut down the master squid and generate
HTTP load on the child squids via the load balancer; after about
5 minutes I start the master squid back up. Here is an example of the
response to a valid query that worked prior to the replication
test.


HTTP Request generated by wget:
Connecting to myurl.mydomain.com[172.23.161.100]:80... connected.
HTTP request sent, awaiting response...
 1 HTTP/1.0 403 Forbidden
 2 Server: squid/2.5.STABLE12
 3 Mime-Version: 1.0
 4 Date: Sun, 23 Apr 2006 22:24:23 GMT
 5 Content-Type: text/html
 6 Content-Length: 1101
 7 Expires: Sun, 23 Apr 2006 22:24:23 GMT
 8 X-Squid-Error: ERR_ACCESS_DENIED 0
 9 X-Cache: MISS from master.mydomain.net
10 X-Cache: MISS from master.mydomain.net
11 X-Cache: MISS from sibling1.object1.com
12 Connection: close
22:18:40 ERROR 403: Forbidden.


Extract from cache.log:
2006/04/23 23:24:23| The request GET
http://myurl.mydomain.com:80/myfolder1/ is ALLOWED, because it matched
'all'
2006/04/23 23:24:23| clientAccessCheck: proxy request denied in accel_only mode
2006/04/23 23:24:23| The request GET
http://myurl.mydomain.com/myfolder1/ is DENIED, because it matched
'all'
2006/04/23 23:24:23| storeEntryValidLength: 233 bytes too big;
'8E293D7F9154EF3C2032A87976FAFCA1'
2006/04/23 23:24:23| clientReadRequest: FD 215: no data to process
((11) Resource temporarily unavailable)
2006/04/23 23:24:23| The reply for GET
http://myurl.mydomain.com/myfolder1/ is ALLOWED, because it matched
'all'

Access log extract:

10.1.1.3 - - [23/Apr/2006:23:24:23 +0100] "GET
http://myurl.mydomain.com/myfolder1/ HTTP/1.0" 403 1401
TCP_DENIED:NONE
10.1.1.3 - - [23/Apr/2006:23:24:23 +0100] "GET
http://myurl.mydomain.com/myfolder1/ HTTP/1.0" 403 1427
TCP_MISS:FIRST_UP_PARENT


I have managed to remove the forwarding loop error by instructing
squid not to accept requests via itself as recommended, but the
content error still exists.

My config doesn't contain a negative ttl entry, so I assume it is the
default 5 minutes.

Any ideas?

TIA.

Mark.



On 18/03/06, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> On Sat 2006-03-18 at 19:23 +, Mark Stevens wrote:
>
> > I will perform further testing against the redirect rules, however
> > what I am finding strange is that the problem only happens after
> > downtime, to resolve the problem I used an alternative redirect_rules
> > file with the same squid.conf file, and the looping errors go away,
>
> How your redirector processes its rules or not is not a Squid
> issue/concern. Squid relies on the redirector of your choice to do its
> job.
>
> Maybe your redirector is relying on some DNS lookups or something else
> not yet available at the time you start Squid in the system bootup
> procedure? Have seen people bitten by such issues in the past.
>
> Regards
> Henrik
>
>


Re: [squid-users] Forwarding loop after rebooting.

2006-03-18 Thread Henrik Nordstrom
On Sat 2006-03-18 at 19:23 +, Mark Stevens wrote:

> I will perform further testing against the redirect rules, however
> what I am finding strange is that the problem only happens after
> downtime, to resolve the problem I used an alternative redirect_rules
> file with the same squid.conf file, and the looping errors go away,

How your redirector processes its rules or not is not a Squid
issue/concern. Squid relies on the redirector of your choice to do its
job.

Maybe your redirector is relying on some DNS lookups or something else
not yet available at the time you start Squid in the system bootup
procedure? Have seen people bitten by such issues in the past.

Regards
Henrik




Re: [squid-users] Forwarding loop after rebooting.

2006-03-18 Thread Mark Stevens
Thanks for the replies,

Mark, the FAQ link you posted was my first port of call :). I
understand I am experiencing a loop, and thanks to the documentation I
have a better understanding of what loops are; I'm just unsure why this
only happens when the machine is rebooted.

Henrik,

I will perform further testing against the redirect rules. What I
find strange, however, is that the problem only happens after downtime:
to resolve it I used an alternative redirect_rules file with the same
squid.conf, and the looping errors went away.

The first time this happened we ran the 'alternative' redirect rules
for a few days. Then my colleague re-introduced the redirect_rules
file that had failed on reboot and restarted squid, shutting down
cleanly and waiting for all processes to die off. The site
functioned without error, with no looping errors, until the next
reboot, when the exact same problem occurred. So, in short, we have a
redirector that works without error until the machine falls out of the
farm for a short period of time.

Thanks again.

On 18/03/06, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> On Sat 2006-03-18 at 13:47 +, Mark Stevens wrote:
>
> > This has happened previously when the server rebooted, it is likely
> > that the master squid service is getting hammered by all slaves  as
> > soon as it is brought back into service, could the fact that it's
> > under such heavy load as soon as it starts up be causing a problem in
> > Squid?
>
> No.
>
> It's by 99.9% a configuration error.
>
> Forwarding loops occurs when the configuration in how Squid should route
> the requests makes Squid send the request to itself.
>
> Hmm.. you mentioned you are using a redirector to route the requests. If
> so then make sure you have not enabled redirector_bypass (defaults off).
> Also verify that the redirector is actually working.
>
> Regards
> Henrik
>
>
>


Re: [squid-users] Forwarding loop after rebooting.

2006-03-18 Thread Mark Elsen
On 3/18/06, Mark Stevens <[EMAIL PROTECTED]> wrote:
> Sorry if this a double post.
>
> Squid version:squid-2.5.STABLE10
> O/S: 5.8 Generic_117350-12 sun4u sparc SUNW,Ultra-80
>
> Hi,
>
> I'm a sysadmin who has inherited a small cluster of squid servers, the
> setup is as follows.
>
> 4 x Squid Slave Accelerators that accel a master squid.
>
> 1 x Master Squid running a custom made redirect script written in perl
> that accel a Webserver .
>
> 1 x Backend Webserver.
>
> Each slave is running 4 versions of squid accelerating separate sites.
>
> The master runs 4 instances of squid.
>
> The farm is constantly under a fair load - roughly half a million hits a day.
>
> The setup works fine, however, recently when the master server was
> taken down for repair, and brought back up again with the same
> configuration, it failed to  serve content for the
>
> busiest instance, and  every request returned is with a TCP_DENIED 403
> error. The following error was reported in the cache.log
>
> 2006/03/18 06:04:52| WARNING: Forwarding loop detected for:
>
...

   http://www.squid-cache.org/Doc/FAQ/FAQ-11.html#ss11.31

   M.


Re: [squid-users] Forwarding loop after rebooting.

2006-03-18 Thread Henrik Nordstrom
On Sat 2006-03-18 at 13:47 +, Mark Stevens wrote:

> This has happened previously when the server rebooted, it is likely
> that the master squid service is getting hammered by all slaves  as
> soon as it is brought back into service, could the fact that it's
> under such heavy load as soon as it starts up be causing a problem in
> Squid?

No.

It is 99.9% certain to be a configuration error.

Forwarding loops occur when the configuration of how Squid should route
requests makes Squid send a request to itself.

Hmm.. you mentioned you are using a redirector to route the requests. If
so then make sure you have not enabled redirector_bypass (defaults off).
Also verify that the redirector is actually working.

Regards
Henrik



signature.asc
Description: Detta är en digitalt signerad	meddelandedel


[squid-users] Forwarding loop after rebooting.

2006-03-18 Thread Mark Stevens
Sorry if this is a double post.

Squid version:squid-2.5.STABLE10
O/S: 5.8 Generic_117350-12 sun4u sparc SUNW,Ultra-80

Hi,

I'm a sysadmin who has inherited a small cluster of squid servers; the
setup is as follows.

4 x Squid slave accelerators that accelerate a master squid.

1 x master squid, running a custom-made redirect script written in Perl,
that accelerates a webserver.

1 x backend webserver.

Each slave runs 4 instances of squid accelerating separate sites.

The master runs 4 instances of squid.

The farm is constantly under a fair load - roughly half a million hits a day.

The setup works fine. Recently, however, when the master server was
taken down for repair and brought back up with the same configuration,
it failed to serve content for the busiest instance, and every request
was returned with a TCP_DENIED 403 error. The following error was
reported in cache.log:

2006/03/18 06:04:52| WARNING: Forwarding loop detected for:
GET /folder1/subfolder/subfolder/ HTTP/1.0
If-Modified-Since: Sat, 14 Jan 2006 01:44:45 GMT
Host: 192.168.0.10
Accept: */*
From: googlebot(at)googlebot.com
User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1;
+http://www.google.com/bot.html)
Accept-Encoding: gzip
Via: 1.1 slave1.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver)

This has happened previously when the server rebooted. It is likely
that the master squid service is getting hammered by all slaves as soon
as it is brought back into service; could the fact that it's under such
heavy load as soon as it starts up be causing a problem in Squid?



I have altered the output to respect the privacy of the client.
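The depth of a loop like the one above can be read straight off the Via header: each comma-separated element is one proxy hop, and a host repeated many times is the proxy the request keeps looping through. A quick sketch with an abbreviated sample of the header:

```shell
# Count hops recorded in a Via header (sample abbreviated from the
# log above); each comma-separated element is one proxy hop.
via='1.1 slave1.mydomain.com:80 (webserver/webserver), 1.0 master.mydomain.com:80 (webserver/webserver), 1.0 master.mydomain.com:80 (webserver/webserver)'
hops=$(printf '%s\n' "$via" | awk -F',' '{print NF}')
echo "hops: $hops"   # prints: hops: 3
```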


[squid-users] Forwarding loop after rebooting.

2006-03-18 Thread Mark Stevens
Hi group, my first post so please be gentle :)

 I'm a sysadmin who has inherited a small cluster of squid servers; the
setup is as follows.

 4 x Squid slave accelerators that accelerate a master Squid.

 1 x master Squid running a custom-made redirect script written in
Perl, accelerating a web server.

 1 x backend web server.

 Each slave is running 4 instances of squid, accelerating separate sites.

 The master runs 4 instances of squid.

 The farm is constantly under a fair load - roughly half a million hits a day.

 The setup works fine; however, recently when the master server was
taken down for repair and brought back up again with the same
configuration, it failed to serve content for the busiest instance, and
every request returned a TCP_DENIED 403 error. The following error was
reported in cache.log:

 2006/03/18 06:04:52| WARNING: Forwarding loop detected for:
 GET /folder1/subfolder/subfolder/ HTTP/1.0
 If-Modified-Since: Sat, 14 Jan 2006 01:44:45 GMT
 Host: 192.168.0.10
 Accept: */*
 From: googlebot(at)googlebot.com
 User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1;
+http://www.google.com/bot.html)
 Accept-Encoding: gzip
 Via: 1.1 slave1.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver), 1.0
master.mydomain.com:80 (webserver/webserver)

This has happened previously when the server rebooted. It is likely
that the master squid service is getting hammered by all slaves as soon
as it is brought back into service; could the fact that it's under such
heavy load as soon as it starts up be causing a problem in Squid?

  Squid version: squid-2.5.STABLE10
  O/S: SunOS 5.8 Generic_117350-12 sun4u sparc SUNW,Ultra-80

  I have altered the output to respect the privacy of the client.


Re: [squid-users] forwarding loop in hierarchy

2005-08-05 Thread Henrik Nordstrom



On Mon, 4 Jul 2005, Matteo Villari wrote:

Hi. I'm trying to configure a hierarchy of accelerators but I ran into a
forwarding loop. It happens when I turn on httpd_accel_uses_host_header
on a leaf. Here is the squid.conf of the leaf (with ip 192.168.11.208):


httpd_accel_uses_host_header makes Squid use the Host header as the host
name when reconstructing the URL.


Without it, Squid uses the httpd_accel_host value.
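That URL reconstruction can be sketched in a few lines of shell (simplified; real Squid also handles ports and other edge cases):

```shell
# Mimic how Squid 2.5 rebuilds the URL for an accelerated request:
# with httpd_accel_uses_host_header on it trusts the client's Host
# header; with it off it substitutes the configured httpd_accel_host.
reconstruct() {
  uses_host_header=$1; host_header=$2; accel_host=$3; path=$4
  if [ "$uses_host_header" = on ] && [ -n "$host_header" ]; then
    echo "http://$host_header$path"
  else
    echo "http://$accel_host$path"
  fi
}
# Honouring the header yields the leaf proxy's own address --
# exactly the forwarding loop reported below:
url_on=$(reconstruct on '192.168.11.208:8180' '192.168.11.224' /jetspeed)
url_off=$(reconstruct off '192.168.11.208:8180' '192.168.11.224' /jetspeed)
echo "$url_on"    # http://192.168.11.208:8180/jetspeed
echo "$url_off"   # http://192.168.11.224/jetspeed
```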


http_port 8180
htcp_port 0
cache_peer 192.168.11.233 parent 8180 3130
httpd_accel_single_host on


This combination strikes me as somewhat odd..


never_direct allow regione


Or maybe it does make sense?

Requests matching regione will be sent to the parent.

Other requests will go either to the parent or direct, depending on
what Squid thinks is best at the moment.



httpd_accel_host 192.168.11.224
httpd_accel_port 8180
httpd_accel_single_host on
httpd_accel_with_proxy on
httpd_accel_uses_host_header on


There is no need for httpd_accel_* directives on the parent. Requests
arriving here will be proxy requests, not web server requests.


When I try to get http://192.168.11.208:8180/jetspeed I expect the main
page, but all I get is an access denied error. The reason is a
forwarding loop, as seen in the cache.log of the leaf cache:


2005/07/04 17:08:41| The request GET http://192.168.11.208:8180/jetspeed is 
ALLOWED, because it matched 'all'

2005/07/04 17:08:41| WARNING: Forwarding loop detected for:
GET /jetspeed HTTP/1.0
User-Agent: Opera/7.54 (Windows NT 5.1; U)  [it]
Host: 192.168.11.208:8180
Accept: text/html, application/xml;q=0.9, application/xhtml+xml, image/png, 
image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1

Accept-Language: it, en
Accept-Charset: windows-1252, utf-8, utf-16, iso-8859-1;q=0.6, *;q=0.1
Accept-Encoding: deflate, gzip, x-gzip, identity, *;q=0
Referer: http://192.168.11.208:8180/jetspeed
Pragma: no-cache
Via: 1.1 calamaro_due:3128 (squid/2.5.STABLE10-20050607), 1.0 
calamaro_uno:3128 (squid/2.5.STABLE10-20050607)

X-Forwarded-For: 192.168.11.243, 192.168.11.208
Cache-Control: no-cache, max-age=86400
Connection: keep-alive


Makes sense. Your leaf proxy reconstructed the URL as
http://192.168.11.208:8180/jetspeed, which is itself, and your
forwarding rules do not give it any specific instructions about where
this should be requested from.


Try this:


* Set "never_direct allow all" on both proxies, forbidding Squid from
forwarding a request anywhere other than where the config explicitly
tells it.


* On the leaf proxy, use cache_peer to point at the inner proxy. Also
set httpd_accel_host to your main site name (this will be used for
HTTP/1.0 clients that do not send a Host header).


* On the inner proxy, use cache_peer to point at the web server.
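Put together, the three steps above come out roughly like this in Squid 2.5 accelerator syntax. This is only a sketch: the site name www.example.com is a placeholder, and the peer options are illustrative.

```
# --- leaf proxy (192.168.11.208) ---
http_port 8180
cache_peer 192.168.11.233 parent 8180 3130   # the inner proxy
never_direct allow all                       # never go anywhere but the peer
httpd_accel_host www.example.com             # fallback for HTTP/1.0 clients
httpd_accel_port 8180
httpd_accel_uses_host_header on

# --- inner proxy (192.168.11.233) ---
http_port 8180
cache_peer 192.168.11.224 parent 8180 0 no-query   # the web server itself
never_direct allow all
```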

Regards
Henrik



Re: [squid-users] forwarding loop in hierarchy

2005-07-18 Thread Matteo Villari

Matteo Villari wrote:

Hi. I'm trying to configure a hierarchy of accelerators but I ran into a
forwarding loop. It happens when I turn on httpd_accel_uses_host_header
on a leaf. Here is the squid.conf of the child (with ip 192.168.11.208):


http_port 8180
htcp_port 0
cache_peer 192.168.11.233 parent 8180 3130
#acl QUERY urlpath_r
#no_cache deny QUERY
cache_mem 64 MB
maximum_object_size_in_memory 256 KB
cache_dir aufs /usr/local/squid/cache 1024 1 256
debug_options ALL,1 33,2 28,9
auth_param basic children 5
auth_param basic realm Squid proxy
auth_param basic credentialsttl 2
auth_param basic casesensitive off
refresh_pattern . 15 100% 1440
acl all src 0.0.0.0/0.
acl manager proto cach
acl localhost src 127.
acl to_localhost dst 1
acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 21
acl Safe_ports port 44
acl Safe_ports port 70
acl Safe_ports port 21
acl Safe_ports port 10
acl Safe_ports port 28
acl Safe_ports port 48
acl Safe_ports port 59
acl Safe_ports port 77
acl CONNECT method CONNECT
acl purge method PURGE
http_access allow manager localhost
http_access allow all
http_reply_access allow all
icp_access allow all
cache_effective_user villari
cache_effective_group villari
visible_hostname Villari2
unique_hostname calamaro_due
httpd_accel_host 192.168.11.224
httpd_accel_port 8180
httpd_accel_single_host on
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
cachemgr_passwd x all
always_direct allow manager localhost
acl regione dst 192.168.11.224
never_direct allow regione
snmp_port 0
strip_query_terms off
vary_ignore_expire on

Here is the parent configuration (with ip 192.168.11.233)

http_port 3128
http_port 8180
http_port 8080
icp_port 3130
htcp_port 0
maximum_object_size 40960 KB
maximum_object_size_in_memory 1024 KB
cache_dir aufs /usr/local/squid/cache 1024 1 256
log_ip_on_direct off
log_mime_hdrs on
debug_options ALL,1 33,2 28,9
log_fqdn on
pinger_program /bin/ping
redirect_program /usr/local/squid/bin/squidGuard
acl session urlpath_regex jsessionid
redirector_access allow session
redirector_access deny !session
auth_param basic casesensitive off
refresh_pattern -i jp(e)g 1440 100% 1440 override-expire 
override-lastmod ignore-reload

refresh_pattern -i psml 15 100% 1440 override-expire override-lastmod
refresh_pattern -i css 1440 100% 1440 override-expire override-lastmod 
ignore-reload

refresh_pattern . 0 20% 4320
half_closed_clients off
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl purge method PURGE
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
http_access allow all
http_reply_access allow all
icp_access allow all
cache_mgr villari
cache_effective_user villari
cache_effective_group villari
unique_hostname calamaro_uno
httpd_accel_host 192.168.11.224
httpd_accel_port 8180
httpd_accel_single_host on
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
cachemgr_passwd x all
query_icmp on
strip_query_terms off
relaxed_header_parser warn

When I try to get http://192.168.11.208:8180/jetspeed I expect the main
page, but all I get is an access denied error. The reason is a
forwarding loop, as seen in the cache.log of the child cache:


2005/07/04 17:08:41| The request GET 
http://192.168.11.208:8180/jetspeed is ALLOWED, because it matched 'all'

2005/07/04 17:08:41| WARNING: Forwarding loop detected for:
GET /jetspeed HTTP/1.0
User-Agent: Opera/7.54 (Windows NT 5.1; U)  [it]
Host: 192.168.11.208:8180
Accept: text/html, application/xml;q=0.9, application/xhtml+xml, 
image/png, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1

Accept-Language: it, en
Accept-Charset: windows-1252, utf-8, utf-16, iso-8859-1;q=0.6, *;q=0.1
Accept-Encoding: deflate, gzip, x-gzip, identity, *;q=0
Referer: http://192.168.11.208:8180/jetspeed
Pragma: no-cache
Via: 1.1 calamaro_due:3128 (squid/2.5.STABLE10-20050607), 1.0 
calamaro_uno:3128 (squid/2.5.STABLE10-20050607)

X-Forwarded-For: 192.168.11.243, 192.168.11.208
Cache-Control: no-cache, max-age=86400
Connection: keep-alive

2005/07/04 17:08:41| aclCheckFast: list: 0x82290f0
2005/07/04 17:08:41| aclMatchAclList: checking all
2005/07/04 17:08:41| aclMatchAcl: checking 'acl all src 0.0.0.0/0.0.0.0'
2005/07/04 17:08:41| aclMatchIp: '192.168.11.243' found
2005/07/04 17:08:41| aclMatchAclList: returning 1
2005/07/04 17:08:41| aclCheckFast: list: 0x8228f88
2005/07/04 17:08:41| aclMatchAclList: checking all
2005/07/04 17:08:41| aclMatchAcl: checking 'acl all src 0.0.0.0/0.0.0.0'
2005/07/04 17:08:41| aclMatchIp: '192.168.11.243' found
2005/07/04 17:08:41| aclMatchAclList:

[squid-users] forwarding loop in hierarchy

2005-07-04 Thread Matteo Villari
Hi. I'm trying to configure a hierarchy of accelerators but I ran into a
forwarding loop. It happens when I turn on httpd_accel_uses_host_header
on a leaf. Here is the squid.conf of the leaf (with ip 192.168.11.208):


http_port 8180
htcp_port 0
cache_peer 192.168.11.233 parent 8180 3130
#acl QUERY urlpath_r
#no_cache deny QUERY
cache_mem 64 MB
maximum_object_size_in_memory 256 KB
cache_dir aufs /usr/local/squid/cache 1024 1 256
debug_options ALL,1 33,2 28,9
auth_param basic children 5
auth_param basic realm Squid proxy
auth_param basic credentialsttl 2
auth_param basic casesensitive off
refresh_pattern . 15 100% 1440
acl all src 0.0.0.0/0.
acl manager proto cach
acl localhost src 127.
acl to_localhost dst 1
acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 21
acl Safe_ports port 44
acl Safe_ports port 70
acl Safe_ports port 21
acl Safe_ports port 10
acl Safe_ports port 28
acl Safe_ports port 48
acl Safe_ports port 59
acl Safe_ports port 77
acl CONNECT method CONNECT
acl purge method PURGE
http_access allow manager localhost
http_access allow all
http_reply_access allow all
icp_access allow all
cache_effective_user villari
cache_effective_group villari
visible_hostname Villari2
unique_hostname calamaro_due
httpd_accel_host 192.168.11.224
httpd_accel_port 8180
httpd_accel_single_host on
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
cachemgr_passwd x all
always_direct allow manager localhost
acl regione dst 192.168.11.224
never_direct allow regione
snmp_port 0
strip_query_terms off
vary_ignore_expire on

Here is the parent configuration (with ip 192.168.11.233)

http_port 3128
http_port 8180
http_port 8080
icp_port 3130
htcp_port 0
maximum_object_size 40960 KB
maximum_object_size_in_memory 1024 KB
cache_dir aufs /usr/local/squid/cache 1024 1 256
log_ip_on_direct off
log_mime_hdrs on
debug_options ALL,1 33,2 28,9
log_fqdn on
pinger_program /bin/ping
redirect_program /usr/local/squid/bin/squidGuard
acl session urlpath_regex jsessionid
redirector_access allow session
redirector_access deny !session
auth_param basic casesensitive off
refresh_pattern -i jp(e)g 1440 100% 1440 override-expire 
override-lastmod ignore-reload

refresh_pattern -i psml 15 100% 1440 override-expire override-lastmod
refresh_pattern -i css 1440 100% 1440 override-expire override-lastmod 
ignore-reload

refresh_pattern . 0 20% 4320
half_closed_clients off
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl purge method PURGE
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
http_access allow all
http_reply_access allow all
icp_access allow all
cache_mgr villari
cache_effective_user villari
cache_effective_group villari
unique_hostname calamaro_uno
httpd_accel_host 192.168.11.224
httpd_accel_port 8180
httpd_accel_single_host on
httpd_accel_with_proxy on
httpd_accel_uses_host_header on
cachemgr_passwd x all
query_icmp on
strip_query_terms off
relaxed_header_parser warn

When I try to get http://192.168.11.208:8180/jetspeed I expect the main
page, but all I get is an access denied error. The reason is a
forwarding loop, as seen in the cache.log of the leaf cache:


2005/07/04 17:08:41| The request GET http://192.168.11.208:8180/jetspeed 
is ALLOWED, because it matched 'all'

2005/07/04 17:08:41| WARNING: Forwarding loop detected for:
GET /jetspeed HTTP/1.0
User-Agent: Opera/7.54 (Windows NT 5.1; U)  [it]
Host: 192.168.11.208:8180
Accept: text/html, application/xml;q=0.9, application/xhtml+xml, 
image/png, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1

Accept-Language: it, en
Accept-Charset: windows-1252, utf-8, utf-16, iso-8859-1;q=0.6, *;q=0.1
Accept-Encoding: deflate, gzip, x-gzip, identity, *;q=0
Referer: http://192.168.11.208:8180/jetspeed
Pragma: no-cache
Via: 1.1 calamaro_due:3128 (squid/2.5.STABLE10-20050607), 1.0 
calamaro_uno:3128 (squid/2.5.STABLE10-20050607)

X-Forwarded-For: 192.168.11.243, 192.168.11.208
Cache-Control: no-cache, max-age=86400
Connection: keep-alive

2005/07/04 17:08:41| aclCheckFast: list: 0x82290f0
2005/07/04 17:08:41| aclMatchAclList: checking all
2005/07/04 17:08:41| aclMatchAcl: checking 'acl all src 0.0.0.0/0.0.0.0'
2005/07/04 17:08:41| aclMatchIp: '192.168.11.243' found
2005/07/04 17:08:41| aclMatchAclList: returning 1
2005/07/04 17:08:41| aclCheckFast: list: 0x8228f88
2005/07/04 17:08:41| aclMatchAclList: checking all
2005/07/04 17:08:41| aclMatchAcl: checking 'acl all src 0.0.0.0/0.0.0.0'
2005/07/04 17:08:41| aclMatchIp: '192.168.11.243' found
2005/07/04 17:08:41| aclMatchAclList: returning 1
2005/07/04 17:08:4

Re: [squid-users] forwarding loop using squidguard

2005-05-31 Thread Henrik Nordstrom

On Tue, 31 May 2005, Matteo Villari wrote:

The problem is that when squid passes a URL containing this field to
squidGuard, it generates a forwarding loop warning and shows an access
denied error page, depending (I think) on error 111 connection refused.

Please help me... thanks a lot, Matteo Villari

httpd_accel_host 192.168.11.224
httpd_accel_port 8180
httpd_accel_single_host on
httpd_accel_with_proxy off
httpd_accel_uses_host_header on


Problem confirmed. Please file a bug report with the above info and the 
following note so I remember what to fix:


request->flags is lost on redirected requests.

Regards
Henrik


Re: [squid-users] forwarding loop using squidguard

2005-05-31 Thread Matteo Villari

Matteo Villari wrote:


Hi all.
I'm trying to use squidGuard to strip out the jsessionid field from
some URLs. I've configured squidGuard to strip the part of a URL
containing

;jsessionid=.tomcat1

The problem is that when squid passes a URL containing this field to
squidGuard, it generates a forwarding loop warning and shows an access
denied error page, depending (I think) on error 111 connection refused.

Please help me... thanks a lot, Matteo Villari

That is my squid.conf file

http_port 80
http_port 8180
icp_port 0
htcp_port 0
log_ip_on_direct off
mime_table /usr/local/squid/etc/mime.conf
log_mime_hdrs on
useragent_log /usr/local/squid/logs/useragent.log
debug_options ALL,1 33,2 28,9
log_fqdn on
pinger_program /bin/ping
redirect_program /usr/local/squidguard/bin/squidGuard
redirect_rewrites_host_header off
acl session url_regex jsessionid
redirector_access allow session
auth_param basic casesensitive off
refresh_pattern . 0 20% 4320
half_closed_clients off
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # https, snews
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl purge method PURGE
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
http_access allow all
http_reply_access allow all
icp_access allow all
cache_effective_user villari
cache_effective_group villari
visible_hostname Villari
httpd_accel_host 192.168.11.224
httpd_accel_port 8180
httpd_accel_single_host on
httpd_accel_with_proxy off
httpd_accel_uses_host_header on
cachemgr_passwd matteo info stats/object
query_icmp on
always_direct allow !session
offline_mode off
strip_query_terms off
coredump_dir /usr/local/squid/cache
relaxed_header_parser warn

and that are entries in my log files:

access.log

1117538216.703  1 192.168.11.233 TCP_DENIED/403 1482 GET 
http://192.168.11.233:8180/jetspeed/media-type/html/user/anon/page/HOME_ArchivioEventiHomePage.psml 
- NONE/- text/html [User-Agent: Opera/7.54 (Windows NT 5.1; U)  
%5bit%5d\r\nHost: 192.168.11.233:8180\r\nAccept: text/html, 
application/xml;q=0.9, application/xhtml+xml, image/png, image/jpeg, 
image/gif, image/x-xbitmap, */*;q=0.1\r\nAccept-Language: it, 
en\r\nAccept-Charset: windows-1252, utf-8, utf-16, iso-8859-1;q=0.6, 
*;q=0.1\r\nAccept-Encoding: deflate, gzip, x-gzip, identity, 
*;q=0\r\nReferer: http://192.168.11.233/jetspeed\r\nVia: 1.1 
Villari:80 (squid/2.5.STABLE9-20050503)\r\nX-Forwarded-For: 
192.168.11.243\r\nCache-Control: max-age=259200\r\nConnection: 
keep-alive\r\n] [HTTP/1.0 403 Forbidden\r\nServer: 
squid/2.5.STABLE9-20050503\r\nMime-Version: 1.0\r\nDate: Tue, 31 May 
2005 11:16:56 GMT\r\nContent-Type: text/html\r\nContent-Length: 
1189\r\nExpires: Tue, 31 May 2005 11:16:56 GMT\r\nX-Squid-Error: 
ERR_ACCESS_DENIED 0\r\n\r]
1117538216.704611 192.168.11.243 TCP_MISS/403 1510 GET 
http://192.168.11.233:8180/jetspeed/media-type/html/user/anon/page/HOME_ArchivioEventiHomePage.psml;jsessionid=6723643B0FA2C4AA2D9A22C433B5ACCA.tomcat1 
- DIRECT/192.168.11.233 text/html [User-Agent: Opera/7.54 (Windows NT 
5.1; U)  %5bit%5d\r\nHost: 192.168.11.233:8180\r\nAccept: text/html, 
application/xml;q=0.9, application/xhtml+xml, image/png, image/jpeg, 
image/gif, image/x-xbitmap, */*;q=0.1\r\nAccept-Language: it, 
en\r\nAccept-Charset: windows-1252, utf-8, utf-16, iso-8859-1;q=0.6, 
*;q=0.1\r\nAccept-Encoding: deflate, gzip, x-gzip, identity, 
*;q=0\r\nReferer: http://192.168.11.233/jetspeed\r\nConnection: 
Keep-Alive, TE\r\nTE: deflate, gzip, chunked, identity, trailers\r\n] 
[HTTP/1.0 403 Forbidden\r\nServer: 
squid/2.5.STABLE9-20050503\r\nMime-Version: 1.0\r\nDate: Tue, 31 May 
2005 11:16:56 GMT\r\nContent-Type: text/html\r\nContent-Length: 
1189\r\nExpires: Tue, 31 May 2005 11:16:56 GMT\r\nX-Squid-Error: 
ERR_ACCESS_DENIED 0\r\nX-Cache: MISS from Villari\r\nConnection: 
keep-alive\r\n\r]


cache.log


2005/05/31 13:16:56| The request GET 
http://192.168.11.233:8180/jetspeed/media-type/html/user/anon/page/HOME_ArchivioEventiHomePage.psml;jsessionid=6723643B0FA2C4AA2D9A22C433B5ACCA.tomcat1 
is ALLOWED, because it matched 'all'

2005/05/31 13:16:56| aclCheck: checking 'redirector_access allow session'
2005/05/31 13:16:56| aclMatchAclList: checking session
2005/05/31 13:16:56| aclMatchAcl: checking 'acl session url_regex 
jsessionid'
2005/05/31 13:16:56| aclMatchRegex: checking 
'http://192.168.11.233:8180/jetspeed/media-type/html/user/anon/page/HOME_ArchivioEventiHomePage.psml;jsessionid=6723643B0FA2C4AA2D9A22C433B5ACCA.tomcat1' 


2005/05/31 13:16:56| aclMatchRegex: looking for 'jsessionid'
2005/05/31 13:16:56| aclMatchAclList: returning 1
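For illustration, the rewrite squidGuard is being asked to do can be mimicked with a one-line filter speaking Squid's redirector protocol (one "URL client/fqdn ident method" line in, one rewritten line out). The pattern here is inferred from the URLs in the log above and is only a sketch:

```shell
# Strip a ';jsessionid=...' suffix from the URL field of a redirector
# protocol line, leaving the rest of the line untouched.
strip_jsessionid() { sed 's/;jsessionid=[^ ?]*//'; }
line='http://192.168.11.233:8180/jetspeed/x.psml;jsessionid=6723643B.tomcat1 192.168.11.243/- - GET'
out=$(printf '%s\n' "$line" | strip_jsessionid)
echo "$out"
```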

[squid-users] Forwarding loop messages

2005-05-10 Thread Brett Simpson
I'm using Squid to forward requests to Dansguardian as a parent cache
peer. Dansguardian then forwards the request back to the same Squid so I
can get back out to the internet. This works when I use an
"always_direct allow localhost" to avoid a routing loop between Squid
and Dansguardian.

However, for every site I visit I get a "WARNING: Forwarding loop
detected" in my cache logs. It's functional, though.

If I use always_direct, shouldn't it bypass the cache altogether for the
specific acl?

Is there a way I can tell squid not to log these messages for this
specific acl?

Thanks,
Brett
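The chain described above can be sketched in squid.conf roughly as follows. The DansGuardian port (8080) and loopback addressing are assumptions, not taken from the post:

```
# Client -> Squid:3128 -> DansGuardian:8080 -> same Squid -> origin.
acl localhost src 127.0.0.1/255.255.255.255
cache_peer 127.0.0.1 parent 8080 0 no-query   # hand requests to DansGuardian
never_direct allow !localhost                 # clients must go via the filter
always_direct allow localhost                 # requests back from DansGuardian
                                              # go direct, breaking the loop
```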




Re: [squid-users] forwarding loop

2003-11-11 Thread Henrik Nordstrom
On Tue, 11 Nov 2003, Emilio Casbas wrote:

> 2003/11/11 10:11:18| WARNING: Forwarding loop detected for:
> GET / HTTP/1.0
> Host: host_ip
> User-Agent: check_http/1.24 (nagios-plugins 1.3.0)
> Via: 1.0 www.mysite.com:80 (squid/2.5.STABLE4)
> X-Forwarded-For: x.x.x.x
> Cache-Control: max-age=259200
> Connection: keep-alive

This indicates unique_hostname was not set correctly, and is entirely 
different from the entry below.

> 2003/11/11 10:22:14| WARNING: Forwarding loop detected for:
> GET / HTTP/1.0
> Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
> Accept-Language: en-us
> User-Agent: Mozilla/4.0 (compatible; EMonitor 6.1 Windows NT)
> Host: x.x.x.x
> Via: 1.0 server2:80 (squid/2.5.STABLE4), 1.0 server1:80 (squid/2.5.STABLE4)
> X-Forwarded-For: x.x.x.x, x.x.x.x
> Cache-Control: max-age=259200
> Connection: keep-alive

This request is a forwarding loop. The request path was

  client -> server2 -> server1 -> [the server giving this error]


Such loops can occasionally happen in sibling relations, but Squid
should recover automatically.

If you want to prevent it from ever happening, use cache_peer_access to
deny the use of the sibling when the request was received from a
sibling. Alternatively you can use always_direct to do the same (but
that approach is not compatible with Squid-3; I recommend the
cache_peer_access approach).
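The cache_peer_access variant might look like this on server1 (the sibling's address is hypothetical):

```
# Do not offer the sibling a request that already came from the sibling;
# this prevents the client -> server2 -> server1 -> server2 ... loop.
acl from_sibling src 192.0.2.2                 # server2 (example address)
cache_peer 192.0.2.2 sibling 80 3130
cache_peer_access 192.0.2.2 deny from_sibling
```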


Regards
Henrik



[squid-users] forwarding loop

2003-11-11 Thread Emilio Casbas
Hi.

We have two machines in a server acceleration configuration. We want
them to share the same visible hostname while avoiding the forwarding
loop warnings in the cache log.

The configuration is:

Server Acceleration 1:
visible_hostname www.mysite.com
unique_hostname server1

Server Acceleration 2:
visible_hostname www.mysite.com
unique_hostname server2

But in cache.log:

2003/11/11 10:11:18| WARNING: Forwarding loop detected for:
GET / HTTP/1.0
Host: host_ip
User-Agent: check_http/1.24 (nagios-plugins 1.3.0)
Via: 1.0 www.mysite.com:80 (squid/2.5.STABLE4)
X-Forwarded-For: x.x.x.x
Cache-Control: max-age=259200
Connection: keep-alive
2003/11/11 10:22:14| WARNING: Forwarding loop detected for:
GET / HTTP/1.0
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, */*
Accept-Language: en-us
User-Agent: Mozilla/4.0 (compatible; EMonitor 6.1 Windows NT)
Host: x.x.x.x
Via: 1.0 server2:80 (squid/2.5.STABLE4), 1.0 server1:80 (squid/2.5.STABLE4)
X-Forwarded-For: x.x.x.x, x.x.x.x
Cache-Control: max-age=259200
Connection: keep-alive
Thanks!
Emilio.







