RE: [squid-users] WCCP and ICP

2010-03-29 Thread Michael Bowe
> -Original Message-
> From: Bradley, Stephen W. Mr. [mailto:bradl...@muohio.edu]

> How do I get the two servers to talk to each other to improve cache
> hits on the stream?
> (I plan on putting this into a bigger group of servers.)

Couple of things to be aware of :

My understanding is that Cisco WCCP hashes the destination IP to work out
which Squid to send the request to. Thus there should be no overlap of
objects between the two (or more) Squids, and no need to run ICP.

However, if you are regularly adding/removing caches, or have changed the
hashing to be based on source IP, then you probably should run ICP on your
caches.

Cache1 would have something like this
cache_peer server2.domain.com sibling 3128 3130 proxy-only

Cache2 would have something like this
cache_peer server1.domain.com sibling 3128 3130 proxy-only
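
For the ICP side to work, both boxes also need ICP enabled and the sibling
allowed to query. A minimal sketch, with placeholder addresses:

icp_port 3130
acl siblings src 192.0.2.11 192.0.2.12
icp_access allow siblings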


Michael.



RE: [squid-users] Cache-digest configuration problem

2010-03-10 Thread Michael Bowe
> -Original Message-
> From: Henrik Nordström [mailto:hen...@henriknordstrom.net]

> On Tue 2010-03-09 at 20:13 +0200, Giannis Fotopoulos wrote:

> > First of all, in order to use cache-digests, must I also use ICP?
> 
> No.

Ah this is interesting, I've never understood this fully.

The wiki has a big page on digests, but the ICP section on that page is
empty.

If you have cache digests enabled and also have ICP enabled, what happens?
Does the digest get consulted first, and if the object is not found in the
digest, is ICP then used? Or is it some other combination?

I had this config on my squid cluster and I could see plenty of CD_ entries
in the access log, but also plenty of UDP_ entries. I then tried altering
the cache_peer line to set the ICP port to 0: the CD_ entries continued and
all the UDP_ entries stopped. I'm just wondering about the pros/cons of this
change.
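
In other words, something roughly like this (hostname is a placeholder; I
believe no-query is the usual companion to a 0 ICP port so squid relies on
the digest alone, though I may be wrong on that):

cache_peer server2.domain.com sibling 3128 0 proxy-only no-query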

Michael.



RE: [squid-users] Regarding wccp

2010-03-04 Thread Michael Bowe
> -Original Message-
> From: Henrik Nordstrom [mailto:hen...@henriknordstrom.net]
> Sent: Friday, 5 March 2010 7:08 AM
> To: Michael Bowe
> Cc: squid-users@squid-cache.org
> Subject: RE: [squid-users] Regarding wccp
> 
> On Thu 2010-03-04 at 12:25 +1100, Michael Bowe wrote:
> 
> > I think you have the hash stuff wrong, isn't service 80 meant to be
> > src_ip_hash and service 90 meant to be dst_ip_hash?
> 
> no, 80 is usually the normal www service interception, which is a
> dst_ip_hash.
> 
> but it doesn't matter very much as long as you have the combination of
> both src_ip_hash and dst_ip_hash.

As hinted at on the wiki, with TPROXY I reckon there is a gotcha you have to 
watch out for when you have more than one squid.

80 dst_ip_hash
90 src_ip_hash
Ties a particular web server to a particular cache

80 src_ip_hash
90 dst_ip_hash
Ties a particular client to a particular cache

The problem with the 1st way is this :

Say a client wants to access http://some-large-site, their PC resolves the 
address and gets x.x.x.1

The GET request goes off to the network; the Cisco sees it and hashes the dst_ip.

Hash for this IP points to cache-A

Router sends the request to cache-A. This cache takes the GET and does another 
DNS lookup of that host. This time it resolves to x.x.x.2

Cache sends request off to the internet

The reply comes back from x.x.x.2 and arrives at the Cisco. The Cisco hashes the
src_ip and this happens to map to cache-B.

Reply arrives at cache-B and it doesn’t know anything about it. Trouble!

If you only have 1 TPROXY cache, either way works OK. If you have more than one 
cache I reckon you need to use the 2nd way?
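
For what it's worth, the 2nd way in squid.conf terms would be something like
this (same service numbers, priority and syntax as the configs quoted
elsewhere in this thread, just with the hash flags arranged so the client is
what gets hashed in both directions; this matches what our busy clusters run,
though as I noted previously there is conflicting advice on the wiki about
which way round these should be):

wccp2_service_info 80 protocol=tcp flags=src_ip_hash priority=240 ports=80
wccp2_service_info 90 protocol=tcp flags=dst_ip_hash,ports_source priority=240 ports=80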

Michael.





RE: [squid-users] Regarding wccp

2010-03-03 Thread Michael Bowe
> -Original Message-
> From: Michael Bowe [mailto:mb...@pipeline.com.au]

> > From: senthilkumaar2021 [mailto:senthilkumaar2...@gmail.com]

> > wccp2_service_info 80 protocol=tcp flags=dst_ip_hash priority=240
> > ports=80
> > wccp2_service_info 90 protocol=tcp flags=src_ip_hash,ports_source
> > priority=240 ports=80
> >
> > (for router ip  replaced the gateway ip of the squid machine)
> 
> I think you have the hash stuff wrong, isn't service 80 meant to be
> src_ip_hash and service 90 meant to be dst_ip_hash?

Interesting, there seems to be conflicting advice in the wiki

This page here 
http://wiki.squid-cache.org/ConfigExamples/FullyTransparentWithTPROXY
shows it (from Steve Wilton):
  80 src_ip_hash
  90 dst_ip_hash,ports_source

And then further down, the same page shows it as:
  80 dst_ip_hash
  90 src_ip_hash,ports_source

And this page
http://wiki.squid-cache.org/Features/Tproxy4
shows it (from Steve Wilton):
  80 dst_ip_hash
  90 src_ip_hash,ports_source

On my busy TPROXY4 clusters we have it:
  80 src_ip_hash
  90 dst_ip_hash,ports_source

Hmm, which way is actually correct?

Michael.




RE: [squid-users] Regarding wccp

2010-03-03 Thread Michael Bowe
Hi Senthil

> -Original Message-
> From: senthilkumaar2021 [mailto:senthilkumaar2...@gmail.com]
> Sent: Wednesday, 3 March 2010 6:24 PM
> To: squid-users@squid-cache.org
> Subject: [squid-users] Regarding wccp
> 
> Hi All,
> I need to configure squid +Tproxy+ wccp
> I followed the document as if in the squid cache
> 
> wccp2_router $ROUTERIP
> wccp2_forwarding_method gre
> wccp2_return_method gre
> wccp2_service dynamic 80
> wccp2_service dynamic 90
> wccp2_service_info 80 protocol=tcp flags=dst_ip_hash priority=240
> ports=80
> wccp2_service_info 90 protocol=tcp flags=src_ip_hash,ports_source
> priority=240 ports=80
> 
> (for router ip  replaced the gateway ip of the squid machine)

I think you have the hash stuff wrong, isn't service 80 meant to be
src_ip_hash and service 90 meant to be dst_ip_hash?

And what about the http_port statement, what settings have you used there?
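
For reference, given your iptables rules below send intercepted traffic to
port 3129, I'd expect something along these lines (port numbers assumed from
those rules):

http_port 3128
http_port 3129 tproxy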

Also, maybe you could show us the output from cache.log after starting squid,
as this contains some info about whether TPROXY has started up OK.


> I have used following ip tables
> 
> iptables -t mangle -N DIVERT
> iptables -t mangle -A DIVERT -j MARK --set-mark 1
> iptables -t mangle -A DIVERT -j ACCEPT
> iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
> iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
> --tproxy-mark 0x1/0x1 --on-port 3129
> ip rule add fwmark 1 lookup 100
> ip route add local 0.0.0.0/0 dev lo table 100
> echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter
> echo 1 > /proc/sys/net/ipv4/ip_forward
> 
> set net.ipv4.forwarding = 1

Seems roughly right


> I created tunnel using the router identifier ip address.
> 
> I have made all the configuration in router such as enabling the 80 and
> 90 service

You will need to show us the tunnel config fragments
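
To show what I mean, the Linux side usually has a fragment something like
this (interface name and addresses are placeholders, going from memory of the
wiki examples):

modprobe ip_gre
ip tunnel add wccp0 mode gre remote <router-identifier-ip> local <squid-ip> dev eth0
ip link set wccp0 up
echo 0 > /proc/sys/net/ipv4/conf/wccp0/rp_filter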


> when i apply redirect in and out for service 80 and 90 .
> 
> I am not able to get any packets redirected to 90 service only 80
> service gets redirected .

You will need to show us the Cisco config fragments
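
Roughly the sort of fragments I mean (interface names are placeholders; which
service is redirected on which interface depends on your topology):

ip wccp 80
ip wccp 90
interface <client-facing>
 ip wccp 80 redirect in
interface <internet-facing>
 ip wccp 90 redirect in
interface <squid-facing>
 ip wccp redirect exclude in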

Michael.



RE: [squid-users] Squid Clustering

2010-02-09 Thread Michael Bowe
> -Original Message-
> From: John Villa [mailto:john.joe.vi...@gmail.com]

> Basically I have two nodes and I am trying to make it so
> that if I hit one twice (to store the cache) and then I hit the other
> node the icp lookups will work to deliver content. Here is what I have
> minus all the acl rules and what not. Let me know if you have any
> recommendations.
> node 1: cache_peer staging-ss2 sibling 4228 4220
> node 2: cache_peer staging-ss1 sibling 4228 4220

I would suggest you set it to be 

 node 1: cache_peer staging-ss2 sibling 4228 4220 proxy-only
 node 2: cache_peer staging-ss1 sibling 4228 4220 proxy-only

As this will prevent duplication of objects on the two servers' disks.

Michael.



RE: [squid-users] Ongoing Running out of filedescriptors

2010-02-09 Thread Michael Bowe
I run some busy installed-from-source Squid v3.1 servers on Lenny.

To get more filedescriptors I did this :

* configure with --with-filedescriptors=65536

* Modify the init.d startup script ( which I stole from the packaged
  squid version ) so that it includes "ulimit -n 65535"

To check if your tweaks worked, look in cache.log after starting squid.
In my case it reports "With 65535 file descriptors available"
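
The init.d fragment is nothing fancy; a sketch of the relevant couple of
lines (paths assumed, adjust to suit your install):

# raise the per-process limit before launching squid
ulimit -n 65535
/usr/local/squid/sbin/squid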

Hope that helps!

Michael.

> -Original Message-
> From: Landy Landy [mailto:landysacco...@yahoo.com]
> Sent: Wednesday, 10 February 2010 10:29 AM
> To: Squid-Users
> Subject: [squid-users] Ongoing Running out of filedescriptors
> 
> I don't know what to do with my current squid, I even upgraded to
> 3.0.STABLE21, but the problem persists every three days:
> 
> /usr/local/squid/sbin/squid -v
> Squid Cache: Version 3.0.STABLE21
> configure options: '--prefix=/usr/local/squid' '--sysconfdir=/etc/squid'
> '--enable-delay-pools' '--enable-kill-parent-hack' '--disable-htcp'
> '--enable-default-err-language=Spanish' '--enable-linux-netfilter'
> '--disable-ident-lookups' '--localstatedir=/var/log/squid3.1'
> '--enable-stacktraces' '--with-default-user=proxy' '--with-large-files'
> '--enable-icap-client' '--enable-async-io' '--enable-storeio=aufs'
> '--enable-removal-policies=heap,lru' '--with-maxfd=32768'
> 
> I built with the --with-maxfd=32768 option but, when squid is started, it
> says it is working with only 1024 file descriptors.
> 
> I even added the following to the squid.conf:
> 
> max_open_disk_fds 0
> 
> But it hasn't resolved anything. I'm using squid on Debian Lenny. I
> don't know what to do. Here's part of cache.log:



RE: [squid-users] kernel 2.6.32

2010-02-09 Thread Michael Bowe
I tried this exact same combination recently and failed.

I could ping/traceroute, but TCP traffic wasn't working properly. Did a fair
bit of poking around but couldn't work out why this was the case.

2.6.31 kernels work fine though.


> -Original Message-
> From: Ariel [mailto:lauchafernan...@gmail.com]
> Sent: Wednesday, 10 February 2010 12:59 AM
> To: squid-users@squid-cache.org
> Subject: [squid-users] kernel 2.6.32
> 
>  List, can someone guide me on whether kernel 2.6.32 can be compiled with
> tproxy?
> Which version of iptables should I use?
> 
> Debian Lenny 64-bit
> kernel 2.6.32
> iptables ??
> 
> Thanks




RE: [squid-users] Squid 3.1.0.16: FATAL: http(s)_port: TPROXY support in the system does not work

2010-02-04 Thread Michael Bowe

> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]


> If you don't mind doing a few builds and working with me to track it
> down
> I'll get back to you shortly with a patch to do some better debugging.

Ok, no worries.

Michael.



RE: [squid-users] Squid 3.1.0.16: FATAL: http(s)_port: TPROXY support in the system does not work

2010-02-03 Thread Michael Bowe
> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]

> Same ./configure options?

Yes

./configure \
  --enable-linux-netfilter \
  --enable-async-io --with-pthreads --enable-storeio=ufs,aufs \
  --enable-removal-policies=lru,heap \
  --with-large-files \
  --disable-auth \
  --with-filedescriptors=65536 \
  --enable-cache-digests

> Any relevant difference in the config.log produced during build?

I had a good dig around but couldn't see any differences


I found this in a strace :

[pid 11780] socket(PF_INET6, SOCK_STREAM, IPPROTO_TCP) = 4
[pid 11780] setsockopt(4, SOL_IP, 0x13 /* IP_??? */, [1], 4) = 0
[pid 11780] bind(4, {sa_family=AF_INET6, sin6_port=htons(0),
inet_pton(AF_INET6, "::2", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0},
28) = -1 EADDRNOTAVAIL (Cannot assign requested address)
[pid 11780] close(4) = 0
[pid 11780] write(2, "2010/02/04 10:21:55| FATAL: http("..., 86) = 86
2010/02/04 10:21:55| FATAL: http(s)_port: TPROXY support in the system does not work.

So it sounds like something related to IPv6?

Michael.



[squid-users] Squid 3.1.0.16: FATAL: http(s)_port: TPROXY support in the system does not work

2010-02-02 Thread Michael Bowe
Hi

I just tried upgrading our 3.1.0.15 TPROXY servers to 3.1.0.16

I receive this error when trying to start squid :

  2010/02/03 14:42:08| FATAL: http(s)_port: TPROXY support in the system
does not work.
  FATAL: Bungled squid.conf line 45: http_port 8081 tproxy
  Squid Cache (Version 3.1.0.16): Terminated abnormally.
  CPU Usage: 0.008 seconds = 0.008 user + 0.000 sys
  Maximum Resident Size: 0 KB
  Page faults with physical i/o: 0

If I go back and "make install" the previous 3.1.0.15 version it starts fine

My system is Debian 5.0.4, with TPROXY custom compiled 2.6.31 kernel

Michael.



RE: [squid-users] Best Configuration for sibling peer

2009-12-16 Thread Michael Bowe
> -Original Message-
> From: Kris [mailto:christ...@wanxp.com]

Hi Kris

> 2. TCP Connection Failed

Are you running iptables?
If so, is the conntrack table overflowing?
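
A quick way to check (the proc paths vary a little between kernel versions):

cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max
dmesg | grep -i conntrack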

> my peer config
> # cache_peer 10.10.10.10 sibling 3128 3130 no-netdb-exchange no-digest
> no-delay round-robin proxy-only
> cache_peer 10.10.10.11 sibling 3128 3130 no-netdb-exchange no-digest
> no-delay round-robin proxy-only
> cache_peer 10.10.10.12 sibling 3128 3130 no-netdb-exchange no-digest
> no-delay round-robin proxy-only
> cache_peer 10.10.10.13 sibling 3128 3130 no-netdb-exchange no-digest
> no-delay round-robin proxy-only
> 
> any suggestion what best configuration for sibling peer ?

I'm not sure the above is going to give you good results.

Enabling digests would save a lot of ICP traffic and lookups.

As pointed out by Chris, the "round-robin" option is used with parent selection
in the absence of ICP. But in your case you are using sibling peers with ICP.

If you are trying to prevent overlapping disk objects between the siblings
then I reckon your syntax should just be something like this :

cache_peer 10.10.10.1x sibling 3128 3130 proxy-only

Michael.



RE: [squid-users] Hardware/software suggestions (TPROXY)

2009-12-16 Thread Michael Bowe
> -Original Message-
> From: Angelo Höngens [mailto:a.hong...@netmatch.nl]

Hi Angelo

Thanks for responding 


> I didn't even know ISP's still used proxies in this century :D I assume
> you use them to save on bandwidth and improve the experience for the
> end-user?

Haha yeah well for example at one regional site transit is costing us $400+
per Mbps, so caching is still worthwhile for us.


> I'm still not sure about virtualizing or not.. It both has advantages
> and disadvantages..

Yes, I would really need to build up a big server and compare CARP vs VMware.
It doesn't sound like anyone else has had much experience with this topic. The
servers I have been testing with under VMware seem to go OK, but it's hard to
tell just how much overhead is incurred. Maybe running, say, 4 squid VMs on an
8-core server is a reasonably good way to share the load around though.

> I'm in a Dell only shop, and you can buy the R610 with any 1 or 2
> cpu's,
> it's a choice. (And probably cheaper than HP).

Nice to know. I was using their website and it didn’t have an option for 1
CPU. I guess you just speak to the sales rep and get the quote more fully
customised. Thanks for the tip!


Michael.



RE: [squid-users] Hardware/software suggestions (TPROXY)

2009-12-16 Thread Michael Bowe
> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]

Hi Amos


> If you get 4+ core hardware, a CARP model with one instance receiving
> all requests and balancing across the other cores for actual storage
> handling with 1+ disk per core scales extremely well.

I'll go read up on CARP

I wonder whether CARP + TPROXY would play together nicely though?
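
As I understand the model, the frontend instance carries the tproxy http_port
and hashes requests across local backends with something like this (ports and
names purely illustrative, not a tested config):

cache_peer 127.0.0.1 parent 4001 0 carp no-query proxy-only name=backend1
cache_peer 127.0.0.1 parent 4002 0 carp no-query proxy-only name=backend2
cache_peer 127.0.0.1 parent 4003 0 carp no-query proxy-only name=backend3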


> I'd be interested in hearing what req/sec you manage to achieve. We
> only
> have reported performances for up to 3.0 so far.

OK no worries, I will send through some info once we get our gear up and
running.


> > There are choices for the disk controller. Eg HP lets you choose
> between
> > 256M, 512M, 1G RAM on the supplied P410i RAID card. We wouldn't be
> running
> > any RAID, but would extra RAM on the card still be helpful with
> speeding up
> > disk access for squid?
> 
> If the drives use it. Squid is not very helpful for things like that
> yet.

OK, rather than buying more RAID RAM it sounds like it's probably better to
just put that money towards an additional spindle.


Michael.



[squid-users] Hardware/software suggestions (TPROXY)

2009-12-15 Thread Michael Bowe
Hi, I need some hardware/software suggestions for TPROXY servers. 

We're an ISP and have been trialling Squid for a while on an assorted
collection of spare hardware. (We've got some Dell 2950 running VMware and
iSCSI, also Dell 2970 with VMware and SAS HDD, also HP DL360-G5 non-VMware
with SAS HDD). All these servers have been working pretty well, but we now
need to work out a budget to buy proper dedicated gear. 

We have some POPs which have about 5000 cable modems. Cisco routers running
WCCP feed groups of Squid 3.1 / TPROXY servers. At the moment we have 3 to 4
sibling no-cache servers at each POP. 

We currently deal with HTTP traffic of about 150Mbps but would like to
dimension the new gear to support double this.

What are people's opinions on what sort of hardware to use?

Maybe groups of mid-size servers would be best? I was thinking along the lines
of:
Eg HP DL360/380-G6, 1 x 2.4GHz quad core, 32GB RAM, 4 x 15K 146GB HDD

Or should we be looking at just using one larger server:
Eg HP DL380-G6, 1 x 2.4GHz (or faster) quad core, 64GB RAM, 8 x 15K 146GB HDD

When buying HDDs for these servers, you can choose between 2.5" and 3.5"
drives. Should there be much difference in performance between the two? I see
the 3.5" versions of the 15Ks are quite a bit cheaper and are also available
in sizes > 146GB.

There are choices for the disk controller. Eg HP lets you choose between
256MB, 512MB, or 1GB of RAM on the supplied P410i RAID card. We wouldn't be
running any RAID, but would extra RAM on the card still be helpful with
speeding up disk access for squid?

Dell R610/R710 seems pretty similar to the HP DL360/380. But with the Dell
you have to buy dual quad core, which seems a bit wasteful for squid?
(unless you were going to try and dice the server up to run multiple squids
under VMware). Probably best to save a few $$ and stick with 1 x CPU and put
the extra cash towards more RAM or disk? Or do you think VMware is an OK way
to make use of all the CPUs? On our trial servers the VMware ESXi seems to
work OK, but in the back of my mind I worry about the extra overhead it
introduces.

Thanks for any feedback you might have!

Michael.



[squid-users] localhost and RFC1918 addresses in TPROXY access.log

2009-11-23 Thread Michael Bowe
Hi

We run a number of squid 3.1.0.14 TPROXY caches in an ISP environment.

In our access log we are seeing a fair few client IP addresses of 127.0.0.1
and also RFC1918 address ranges.

The caches do not have any local users. We do not have any RFC1918 clients
accessing the caches; all customers have real IP addresses.

Is something broken here?


Examples :

1259017091.941 413 172.16.212.240 TCP_MISS/200 3172 GET http://s2.bikewalls.com/pictures/Road_Race_125_Misano_SanMarino_2008_11_100x75.jpg - DIRECT/67.19.13.114 image/jpeg

1259017091.941 413 172.16.212.240 TCP_MISS/200 3335 GET http://s2.bikewalls.com/pictures/Road_Race_125_Misano_SanMarino_2008_12_100x75.jpg - DIRECT/67.19.13.114 image/jpeg

1259017091.948 0 127.0.0.1 TCP_IMS_HIT/304 360 GET http://resources.news.com.au/cs/heraldsun/images/header-and-footer/nav-drop.gif - NONE/- image/gif

1259017091.953 412 172.16.212.240 TCP_MISS/304 314 GET http://s2.bikewalls.com/pictures/KTM_125_EXC_2009_01_100x75.jpg - DIRECT/67.19.13.114 -

1259017091.954 413 172.16.212.240 TCP_MISS/200 3769 GET http://s2.bikewalls.com/pictures/KTM_400_EXC_01_100.jpg - DIRECT/67.19.13.114 image/jpeg

1259017091.958 0 127.0.0.1 TCP_IMS_HIT/304 360 GET http://resources.news.com.au/cs/heraldsun/images/header-and-footer/nav-carsguide.gif - NONE/- image/gif

1259017091.960 668 10.130.165.68 TCP_MISS/200 618 GET http://pubs.globalsecurity.org/adlog.php? - DIRECT/130.94.28.117 image/gif

1259017091.967 1964 10.128.145.91 TCP_MISS/200 74379 GET http://wormatlas.psc.edu/male/musclemale/images/MaleMusFIG28.jpg - DIRECT/128.182.66.72 image/jpeg

1259017091.968 0 127.0.0.1 TCP_IMS_HIT/304 359 GET http://resources.news.com.au/cs/heraldsun/images/header-and-footer/nav-careerone.gif - NONE/- image/gif


Let me know if you want me to post configure strings / squid.conf

Thanks,
Michael.



RE: [squid-users] Re: ubuntu apt-get update 404

2009-11-16 Thread Michael Bowe
> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]

> I wonder. Is that actually 3.1.0.14 direct to origin? or perhapse going
> through some older sub-cache?

I see this at several of our sites

Each site is a cluster of 3 or 4 v3.1.0.14 sibling-caches. No parent caches.
 
> Are the two of you able to provide me with "tcpdump -s0" traces of the
> data between apt and squid please? particularly for the transparent
> mode
> problems.

Yes I will try and capture this for you.
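
Probably something along these lines (interface and addresses are placeholders):

tcpdump -s0 -i eth0 -w apt-debug.pcap host <client-ip> and port 80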

Michael.



RE: [squid-users] Re: ubuntu apt-get update 404

2009-11-13 Thread Michael Bowe
> -Original Message-
> From: Matthew Morgan [mailto:atcs.matt...@gmail.com]
> Sent: Saturday, 14 November 2009 7:59 AM
> To: Squid Users
> Subject: Re: [squid-users] Re: ubuntu apt-get update 404
> 

> Apparently I only get the dropped .bz2 extensions when using squid
> transparently, which is how our network is set up.  If I manually
> specify http_proxy on my workstation to point to squid directly, I
> don't
> have any problems with apt-get update.  Has anyone ever heard of this?
> Here's my updated squid config (this is 3.0-STABLE20, btw).

I've been having perhaps related problems with Debian servers behind Squid
3.1.0.14 TPROXY

I am not getting 404's but am intermittently seeing "invalid reply header"
errors. eg :

Failed to fetch http://backports.org/debian/dists/etch-backports/main/binary-amd64/Packages.gz
  The HTTP server sent an invalid reply header

Err http://security.debian.org lenny/updates Release.gpg
  The HTTP server sent an invalid reply header [IP: 150.203.164.38 80]

W: Failed to fetch http://security.debian.org/dists/lenny/updates/Release.gpg
  The HTTP server sent an invalid reply header [IP: 150.203.164.38 80]

As you say, if I specify HTTP_PROXY= to go directly to the cache rather than
transparently, then all works fine.

Michael.





RE: [squid-users] Squid 3.1, Tproxy 4.1, WCCP, cache_peer sibling

2009-09-27 Thread Michael Bowe
> -Original Message-
> From: Amos Jeffries [mailto:squ...@treenet.co.nz]

> This is the first I've heard of the problem.  Thank you for pointing it
> out along with the fix.
> http://www.squid-cache.org/Versions/v3/HEAD/changesets/squid-3-
> 10004.patch

Thanks Amos,

I've patched our servers and they are working well.

Michael.




[squid-users] Squid 3.1, Tproxy 4.1, WCCP, cache_peer sibling

2009-09-25 Thread Michael Bowe
I have a site with several squid servers set up as shown here:
http://wiki.squid-cache.org/Features/Tproxy4

All the Tproxy functionality is working fine.

Now I would like to enable cache_peer sibling proxy-only to avoid
duplication of objects across the servers' hard drives.

The servers sit in a dedicated subnet/vlan (router has "ip wccp redirect
exclude in" on this subinterface ).

If I enable cache_peer, I see that the ICP part works fine, but should
server A try to fetch a HIT from server B, the connection fails because the
source IP is set to the client's address rather than server A's.
I end up with this type of thing in the cache.log

2009/09/19 17:53:09| Detected DEAD Sibling: cache03.
2009/09/19 17:53:09| Detected REVIVED Sibling: cache03.
2009/09/19 17:53:11| TCP connection to cache03./8080 failed
2009/09/19 17:53:11| Detected DEAD Sibling: cache03.
2009/09/19 17:53:11| Detected REVIVED Sibling: cache03.
2009/09/19 17:53:16| TCP connection to cache03./8080 failed
2009/09/19 17:53:16| Detected DEAD Sibling: cache03.
2009/09/19 17:53:16| Detected REVIVED Sibling: cache03.

I guess we need to be able to disable the Tproxy functionality when talking
to local cache_peers?

I see that Adrian Chadd made a patch for Squid v2
http://code.google.com/p/lusca-cache/issues/detail?id=48

I was wondering if there were any plans for such a feature to be added to
Squid v3.1?


Michael.