[squid-users] Is my Squid heavily loaded?

2011-03-14 Thread Saurabh Agarwal
Hi All

I am trying to load test Squid using this simple test. From a single client 
machine I want to simultaneously download 200 different HTTP files of 10MB each 
in a loop, over and over again. I see that within 5 minutes the squid process size 
goes beyond 250MB. These 10MB files are all cacheable and return a TCP_HIT from 
the second request onwards. There are other processes running and I want to limit 
Squid's memory usage to 120MB. The hard disk partition allocated to Squid is 10GB 
and is made using device-mapper. I am using 3 cache_dir as mentioned below. How 
can I control Squid's memory usage in this case? Below is the relevant portion of my 
squid.conf.


access_log /squid/logs/access.log  squid
cache_log /squid/logs/cache.log

cache_mem 8 MB
cache_dir aufs /squid/var/cache/small 1500 9 256 max-size=1
cache_dir aufs /squid/var/cache/medium 2500 6 256 max-size=2000
cache_dir aufs /squid/var/cache/large 6000 3 256 max-size=1
maximum_object_size 100 MB
log_mime_hdrs off
max_open_disk_fds 400
maximum_object_size_in_memory 8 KB

cache_store_log none
pid_filename /squid/logs/squid.pid
debug_options ALL,1
---

Regards,
Saurabh


Re: [squid-users] Is my Squid heavily loaded?

2011-03-14 Thread Amos Jeffries

On 15/03/11 00:02, Saurabh Agarwal wrote:

Hi All

I am trying to load test squid using this simple test. From a single
client machine I want to simultaneously download 200 different HTTP
files of 10MB each in a loop over and over again. I see that within 5
minutes squid process size goes beyond 250MB. These 10MB files are
all cachable and return a TCP_HIT for the second time onwards. There
are other processes running and I want to limit squid memory usage to
120MB. Hard disk partition allocated to Squid is of 10GB and is made
using device-mapper. I am using 3 cache_dir as mentioned below. How
can I control Squid memory usage in this case? Below is my portion of
my squid.conf.


200 files @ 10MB - up to 2GB of data possibly in memory simultaneously.

It is easy to see why the squid process size goes beyond 250MB.


You have cache_mem of 8 MB, which means Squid will push these objects to 
disk after the first use. From then on, what you are testing is the rate 
at which Squid can load them from disk onto the network. It is quite 
literally a read from disk into a buffer, then a function call which 
immediately writes directly from that buffer to the network. This is done 
in small chunks of whatever the system disk I/O page size is (default 4KB, 
but it could be more).
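As a loose analogy only (this is not Squid code), the same chunked pumping can be seen with dd and a 4KB block size: a "10MB object" is here shrunk to 10000 bytes, and /tmp paths are invented for illustration.

```shell
# Build a small stand-in "object": 10000 'x' bytes.
printf 'x%.0s' $(seq 1 10000) > /tmp/obj.bin
# Copy it in 4KB chunks, the way the disk-to-network pump moves
# one I/O page at a time between buffer and destination.
dd if=/tmp/obj.bin of=/tmp/out.bin bs=4k 2>/dev/null
# Verify the copy arrived intact.
cmp -s /tmp/obj.bin /tmp/out.bin && echo "copied intact"
```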


 The real speed bottleneck in Squid is the HTTP processing, which does 
a lot of CPU-intensive small steps of parsing and data copying. When 
there are a lot of new requests arriving, it sucks CPU time away from 
that speedy read-write byte pumping loop.


Your test is a classic check for Disk speed limits in Squid.

The other tests you need for checking performance are:
 * numerous requests for a few medium-sized objects (which can all fit in 
memory together, with headers ~10% or less of the total object). This tests 
the best-case memory-hit speed.
 * numerous requests for very small objects (one-packet responses or so). 
This tests the worst-case HTTP parser limits.
 * parallel requests for numerous varied objects (too many to fit in 
memory). This tests somewhat normal traffic speed expectations.
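A rough dry-run sketch of generating such a parallel load with curl; the proxy address, origin host, and file names below are invented placeholders, and the commands are printed rather than executed so they can be reviewed first.

```shell
# Dry-run load generator: N parallel fetches of distinct objects via a proxy.
# PROXY/ORIGIN/file names are hypothetical placeholders, not from the thread.
PROXY="127.0.0.1:3128"
ORIGIN="http://origin.example.test"
N=200

cmds=$(for i in $(seq 1 $N); do
  # -o /dev/null discards the body; -x routes the request through the proxy.
  echo "curl -s -o /dev/null -x $PROXY $ORIGIN/file-$i.bin &"
done)

# Show the first few commands and the total count instead of executing them.
printf '%s\n' "$cmds" | head -3
count=$(printf '%s\n' "$cmds" | grep -c .)
echo "generated $count fetch commands"
```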


There is a tool called Web Polygraph which does some good traffic 
measurements.




 access_log /squid/logs/access.log squid
 cache_log /squid/logs/cache.log

 cache_mem 8 MB
 cache_dir aufs /squid/var/cache/small 1500 9 256 max-size=1
 cache_dir aufs /squid/var/cache/medium 2500 6 256 max-size=2000
 cache_dir aufs /squid/var/cache/large 6000 3 256 max-size=1
 maximum_object_size 100 MB
 log_mime_hdrs off
 max_open_disk_fds 400
 maximum_object_size_in_memory 8 KB

 cache_store_log none
 pid_filename /squid/logs/squid.pid
 debug_options ALL,1
 ---

Regards, Saurabh


Um, your use of cache_dir is a bit odd.
 Use *one* ufs/aufs/diskd cache_dir entry per disk spindle. Otherwise your 
speed is lowered by disk I/O collisions between the cache_dirs (your 
test objects are all the same size and so will not reveal this behaviour).
 Also, leave some disk space for the cache log and journal overheads. 
Otherwise your Squid will crash with "unable to write to file" errors 
when the cache starts to get nearly full.
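A minimal single-spindle layout along those lines might look like this sketch; the size here is illustrative only, leaving headroom on the 10GB partition mentioned in the thread, and the L1/L2 directory counts are the common defaults rather than a recommendation:

```
# one aufs cache_dir for the single disk, sized below partition capacity
cache_dir aufs /squid/var/cache 8000 16 256
maximum_object_size 100 MB
```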


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.11
  Beta testers wanted for 3.2.0.5


[squid-users] Problem with squid 3.0 WCCP with Cisco ASA 5510

2011-03-14 Thread mrito
hi List,

I'm trying to set up a Cisco ASA 5510 with Squid 3.0 WCCP and have already
followed the procedures from several sources on the website, but client
browsing still does not work. I can ping the public DNS of the website we're
trying to access from a client PC, but the problem is they cannot connect
when using the browser.

We've created a GRE tunnel on the Squid box (running Linux):
# iptunnel add gre2 mode gre remote 172.16.9.11 local 172.16.9.14 dev eth0
# ifconfig gre2 127.0.0.2 up

(where 172.16.9.11 is the internal interface of our ASA and 172.16.9.14 is
the IP of our squid proxy server)

Then we've set up iptables to redirect port 80 to our proxy on port 8080:

# iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT
--to-port 8080

Our Squid 2.7.STABLE3 config file contains:

http_port 172.16.9.14:8080 transparent
wccp2_router 172.16.9.11


We can tell that WCCP connects because in the ASA we have:

ALTVPN# sh wccp

Global WCCP information:
Router information:
Router Identifier:   172.16.18.1
Protocol Version:2.0

Service Identifier: web-cache
Number of Cache Engines: 0
Number of routers:   0
Total Packets Redirected:5595
Redirect access-list:-none-
Total Connections Denied Redirect:   0
Total Packets Unassigned:41
Group access-list:   -none-
Total Messages Denied to Group:  0
Total Authentication failures:   0
Total Bypassed Packets Received: 0

However, clients are getting timeouts when trying to browse the internet.
In the ASA logs, I'm seeing:

Denied ICMP type=3, code=3 from PROXY on interface inside
No matching connection for ICMP error message: icmp src inside:PROXY dst
identity: (type 3, code 3) on inside interface.

Please see also below running config we have on our Cisco ASA 5510 Router:
dns-guard
!
interface Ethernet0/0
 nameif internet
 security-level 0
 ip address 122.3.237.69 255.255.255.240
 ospf cost 10
!
interface Ethernet0/1
 nameif LAN
 security-level 100
 ip address 172.16.9.11 255.255.255.0
 ospf cost 10
!
interface Ethernet0/2
 nameif DMZ
 security-level 50
 ip address 172.16.10.10 255.255.255.0
 ospf cost 10
!
interface Ethernet0/3
 description Connection to Proxy Server
 nameif LAN-TEST
 security-level 0
 ip address 172.16.18.1 255.255.255.0
!
interface Management0/0
 shutdown
 nameif management
 security-level 100
 no ip address
 ospf cost 10
 management-only



ALTVPN# sh route

Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
       i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, ia - IS-IS inter area
       * - candidate default, U - per-user static route, o - ODR
       P - periodic downloaded static route

Gateway of last resort is 122.3.237.65 to network 0.0.0.0

C    172.16.9.0 255.255.255.0 is directly connected, LAN
C    122.3.237.64 255.255.255.240 is directly connected, internet
S*   0.0.0.0 0.0.0.0 [1/0] via 122.3.237.65, internet




ALTVPN# sh access-list
access-list cached ACL log flows: total 0, denied 0 (deny-flow-max 4096)
alert-interval 300
access-list internet_access_in; 2 elements
access-list internet_access_in line 1 extended permit tcp any eq www host 122.3.237.68 eq www (hitcnt=0) 0x30ad4bcb
access-list internet_access_in line 2 extended permit ip any any (hitcnt=0) 0xe5c8f559
access-list LAN_nat0_outbound; 3 elements
access-list LAN_nat0_outbound line 1 extended permit ip 172.16.9.0 255.255.255.0 192.168.1.0 255.255.255.0 (hitcnt=0) 0x903b7638
access-list LAN_nat0_outbound line 2 extended permit ip any 172.16.9.0 255.255.255.0 (hitcnt=0) 0x267f03e2
access-list LAN_nat0_outbound line 3 extended permit ip interface LAN 192.168.1.0 255.255.255.0 (hitcnt=0) 0x547bc155
access-list OO_temp_internet_map2; 1 elements (dynamic)
access-list OO_temp_internet_map2 line 1 extended permit ip host 122.3.237.69 host 124.105.250.93 (hitcnt=1) 0x749b5a74
access-list internet_1_cryptomap; 1 elements
access-list internet_1_cryptomap line 1 extended permit ip 172.16.9.0 255.255.255.0 192.168.1.0 255.255.255.0 (hitcnt=88) 0x1bb16a29
access-list internet_2_cryptomap; 1 elements
access-list internet_2_cryptomap line 1 extended permit ip 172.16.9.0 255.255.255.0 192.168.1.0 255.255.255.0 (hitcnt=0) 0x3574b840
access-list internet_3_cryptomap; 1 elements
access-list internet_3_cryptomap line 1 extended permit ip 172.16.9.0 255.255.255.0 192.168.1.0 255.255.255.0 (hitcnt=0) 0x10902697
access-list TEST-VOIP; 45 elements
access-list TEST-VOIP line 1 extended permit ip any host 122.3.237.71 (hitcnt=4155) 0x99a80ab9
access-list TEST-VOIP line 2 remark ftp to access outside
access-list TEST-VOIP line 3 

RE: [squid-users] Is my Squid heavily loaded?

2011-03-14 Thread Saurabh Agarwal
Thanks Amos. I will try doing those different-size tests.

Some more observations on my machine. If I transfer those 200 HTTP files for 
the first time not in parallel but sequentially, one by one using wget, and 
after that use my other script to get these 200 files in parallel from Squid, 
then memory usage is all right: Squid memory usage remains under 100MB. I think 
the first-time transfer causes even more disk usage: saving the files to disk 
and then reading all of them in parallel from disk. Also, I think there should 
be lots of socket buffer space being used by Squid for each client and server 
socket.

Regarding cache_dir usage, what do you mean by "one cache_dir entry per 
spindle"? I have only one disk and one device-mapped partition with an ext3 
file system.

Regards,
Saurabh

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, March 14, 2011 5:26 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Is my Squid heavily loaded?

On 15/03/11 00:02, Saurabh Agarwal wrote:
 Hi All

 I am trying to load test squid using this simple test. From a single
 client machine I want to simultaneously download 200 different HTTP
 files of 10MB each in a loop over and over again. I see that within 5
 minutes squid process size goes beyond 250MB. These 10MB files are
 all cachable and return a TCP_HIT for the second time onwards. There
 are other processes running and I want to limit squid memory usage to
 120MB. Hard disk partition allocated to Squid is of 10GB and is made
 using device-mapper. I am using 3 cache_dir as mentioned below. How
 can I control Squid memory usage in this case? Below is my portion of
 my squid.conf.

200 files @ 10MB - up to 2GB of data possibly in memory simultaneously.

It is easy to see why squid process size goes beyond 250MB easily.


You have cache_mem of 8 MB, which means Squid will push these objects to 
disk after the first use. From then on, what you are testing is the rate 
at which Squid can load them from disk onto the network. It is quite 
literally a read from disk into a buffer, then a function call which 
immediately writes directly from that buffer to the network. This is done 
in small chunks of whatever the system disk I/O page size is (default 4KB, 
but it could be more).

  The real speed bottleneck in Squid is the HTTP processing, which does 
a lot of CPU-intensive small steps of parsing and data copying. When 
there are a lot of new requests arriving, it sucks CPU time away from 
that speedy read-write byte pumping loop.

Your test is a classic check for Disk speed limits in Squid.

The other tests you need for checking performance are:
  * numerous requests for a few medium-sized objects (which can all fit in 
memory together, with headers ~10% or less of the total object). This tests 
the best-case memory-hit speed.
  * numerous requests for very small objects (one-packet responses or so). 
This tests the worst-case HTTP parser limits.
  * parallel requests for numerous varied objects (too many to fit in 
memory). This tests somewhat normal traffic speed expectations.

There is a tool called WebPolygraph which does some good traffic 
measurements.


  access_log /squid/logs/access.log squid
  cache_log /squid/logs/cache.log

  cache_mem 8 MB
  cache_dir aufs /squid/var/cache/small 1500 9 256 max-size=1
  cache_dir aufs /squid/var/cache/medium 2500 6 256 max-size=2000
  cache_dir aufs /squid/var/cache/large 6000 3 256 max-size=1
  maximum_object_size 100 MB
  log_mime_hdrs off
  max_open_disk_fds 400
  maximum_object_size_in_memory 8 KB

  cache_store_log none
  pid_filename /squid/logs/squid.pid
  debug_options ALL,1
  ---

 Regards, Saurabh

Um, your use of cache_dir is a bit odd.
  Use *one* ufs/aufs/diskd cache_dir entry per disk spindle. Otherwise your 
speed is lowered by disk I/O collisions between the cache_dirs (your 
test objects are all the same size and so will not reveal this behaviour).
  Also, leave some disk space for the cache log and journal overheads. 
Otherwise your Squid will crash with "unable to write to file" errors 
when the cache starts to get nearly full.

Amos
-- 
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.11
   Beta testers wanted for 3.2.0.5


[squid-users] Client Certificate Authentication

2011-03-14 Thread Jaime Nebrera

  Dear all,

  This is my first email to the list in a looong time, so please forgive 
me if I'm saying something stupid.


  I want to authenticate users using a digital certificate they already 
own, for a forwarding proxy.


  That is, the browsers will use Squid to navigate the internet (not a 
reverse proxy), apply some ACLs (white/black lists validating the user 
against an LDAP server) and some antivirus filtering (ICAP or similar).


  Reading the available information on the Internet, I'm not sure whether 
this is possible or not.


  As a reverse proxy there is no problem, but for a forwarding proxy I 
have seen some replies and don't know for sure whether it's possible or not.


  I have also seen SSLBump, which seems related to this topic.

  BTW, I would like the proxy to use the user's certificate when 
authenticating against other (external) servers.


  This sounds a lot as a Man In The Middle attack but ...

  Browsers will be configured to use a specific proxy (not transparent) 
and could be either Internet Explorer or Firefox.


  Very thankful in advance. Regards

--
Jaime Nebrera - jnebr...@eneotecnologia.com
Consultor TI - ENEO Tecnologia SL
C/ Manufactura 2, Edificio Euro, Oficina 3N
Mairena del Aljarafe - 41927 - Sevilla
Telf.- 955 60 11 60 / 619 04 55 18



Re: [squid-users] help with squid redirectors

2011-03-14 Thread Osmany
thanks for the reply. Ok so now I've modified the script with your
suggestion and I get this in my access.log

http://dnl-16.geo.kaspersky.com/ftp://dnl-kaspersky.quimefa.cu:2122/Updates/.com/index/u0607g.xml.klz

I'm pretty sure this is not working for the clients. I'm looking for it
to return something like this:

http://dnl-16.geo.kaspersky.com/ftp://dnl-kaspersky.quimefa.cu:2122/Updates/index/u0607g.xml.klz


I've changed the script many times so that I can get what I want but I
have had no success. Can you please help me?

On Sun, 2011-03-13 at 21:27 -0300, Marcus Kool wrote:
 Osmany,
 look in access.log.
 It should say what is happening:
 I expect this:
 ...  TCP_MISS/301   GET http://kaspersky
 ...  TCP_MISS/200   GET ftp://dnl-kaspersky.quimefa.cu:2122/Updates
 
 and does the client use Squid for the ftp protocol ??
 
 And the RE matches too many strings.
 I recommend to rewrite it to something like this:
 
 if ($url =~ /^http:\/\/dnl.*\.kaspersky\.com\/(.*)/) {
    my $newurl = "ftp://dnl-kaspersky.quimefa.cu:2122/Updates/$1";  # Note the $1
    print $X[0] . " 301:$newurl\n";
 }
 
 Marcus
 
 Osmany wrote:
  So finally this is what I have and it works perfectly. But I want to go
  further than this. I want the clients to download what they've requested
  from my local urls. For example...if a client wants to update their
  Kaspersky antivirus and it requests for an internet update server, I
  want it to actually get redirected to my ftp and download what it wants
  from here. So far what I've accomplished is that any request gets
  redirected to the specified url but it doesn't follow the path of the
  file that the client requested.
  
  #!/usr/bin/perl
  BEGIN {$|=1}
  while (<>) {
    @X = split;
    $url = $X[1];
    if ($url =~ /^http:\/\/(.*)kaspersky(.*)/) {
      print $X[0] . " 301:ftp://dnl-kaspersky.quimefa.cu:2122/Updates\n";
    }
    elsif ($url =~ /^http:\/\/(.*)update(.*)/) {
      print $X[0] . " 301:http://windowsupdate.quimefa.cu:8530\n";
    }
    else {
      print $X[0] . "\n";
    }
  }
  
  Can anybody help me with this?
  
  
  




Re: [squid-users] help with squid redirectors

2011-03-14 Thread Marcus Kool

Osmany,

I can help you but I think it is better to do this off-list.
You can send to my private email:
- the latest version of the script, and
- the unedited relevant lines from access.log

Marcus


Osmany wrote:

thanks for the reply. Ok so now I've modified the script with your
suggestion and I get this in my access.log

http://dnl-16.geo.kaspersky.com/ftp://dnl-kaspersky.quimefa.cu:2122/Updates/.com/index/u0607g.xml.klz

I'm pretty sure this is not working for the clients. I'm looking for it
to return something like this:

http://dnl-16.geo.kaspersky.com/ftp://dnl-kaspersky.quimefa.cu:2122/Updates/index/u0607g.xml.klz


I've changed the script many times so that I can get what I want but I
have had no success. Can you please help me?

On Sun, 2011-03-13 at 21:27 -0300, Marcus Kool wrote:

Osmany,
look in access.log.
It should say what is happening:
I expect this:
...  TCP_MISS/301   GET http://kaspersky
...  TCP_MISS/200   GET ftp://dnl-kaspersky.quimefa.cu:2122/Updates

and does the client use Squid for the ftp protocol ??

And the RE matches too many strings.
I recommend to rewrite it to something like this:

if ($url =~ /^http:\/\/dnl.*\.kaspersky\.com\/(.*)/) {
   my $newurl = "ftp://dnl-kaspersky.quimefa.cu:2122/Updates/$1";  # Note the $1
   print $X[0] . " 301:$newurl\n";
}

Marcus

Osmany wrote:

So finally this is what I have and it works perfectly. But I want to go
further than this. I want the clients to download what they've requested
from my local urls. For example...if a client wants to update their
Kaspersky antivirus and it requests for an internet update server, I
want it to actually get redirected to my ftp and download what it wants
from here. So far what I've accomplished is that any request gets
redirected to the specified url but it doesn't follow the path of the
file that the client requested.

#!/usr/bin/perl
BEGIN {$|=1}
while (<>) {
  @X = split;
  $url = $X[1];
  if ($url =~ /^http:\/\/(.*)kaspersky(.*)/) {
    print $X[0] . " 301:ftp://dnl-kaspersky.quimefa.cu:2122/Updates\n";
  }
  elsif ($url =~ /^http:\/\/(.*)update(.*)/) {
    print $X[0] . " 301:http://windowsupdate.quimefa.cu:8530\n";
  }
  else {
    print $X[0] . "\n";
  }
}

Can anybody help me with this?










RE: [squid-users] Is my Squid heavily loaded?

2011-03-14 Thread Amos Jeffries

On Mon, 14 Mar 2011 18:12:27 +0530, Saurabh Agarwal wrote:

Thanks Amos. I will try doing those different-size tests.

Some more observations on my machine. If I transfer those 200 HTTP files
for the first time not in parallel but sequentially, one by one using wget,
and after that use my other script to get these 200 files in parallel from
Squid, then memory usage is all right: Squid memory usage remains under
100MB. I think the first-time transfer causes even more disk usage: saving
the files to disk and then reading all of them in parallel from disk. Also,
I think there should be lots of socket buffer space being used by Squid for
each client and server socket.

Regarding cache_dir usage, what do you mean by "one cache_dir entry
per spindle"? I have only one disk and one device-mapped partition
with an ext3 file system.


The config file you showed had 3 cache_dir on that 1 disk. This is bad. 
Each cache_dir has N AIO threads (16, 32 or 64 by default, IIRC) all 
trying to read/write random portions of the disk. Squid and AIO 
scheduling do some optimization towards serialising access to the base 
disk, but that does not work well when there are multiple independent 
cache_dir state handlers.


Amos



Re: [squid-users] Client Certificate Authentication

2011-03-14 Thread Amos Jeffries

On Mon, 14 Mar 2011 13:43:38 +0100, Jaime Nebrera wrote:

Dear all,

  This is my first email to the list in a looong time so please
forgive if I'm saying something stupid.

  I want to authenticate users using a digital certificate they will
already own for forwarding proxy.

  That is, the browsers will use squid to navigate the internet (not
reverse proxy), do some ACL (white / black list validating the user
against a LDAP server) and some antivirus filtering (iCap or 
similar).


  Reading the available information in the Internet I'm not sure if
this is possible or not.


It is. Though not easily.



  As a reverse proxy there is no problem, but for a forwarding proxy I
have seen some replies and don't know for sure whether it's possible or not.


Squid https_port can accept forward-proxy traffic as easily as 
reverse-proxy traffic. The difficulty comes when you find out that none 
of the popular browsers actually open HTTPS connections to proxies. A 
stunnel wrapper is needed to apply the SSL layer from the user's box to 
the Squid.
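For illustration only, such a wrapper might look like the following stunnel.conf fragment on the user's box; the host name, ports, and service name here are invented, and the Squid side would need a matching https_port with a certificate (something like `https_port 3129 cert=/etc/squid/proxy.pem`):

```
; hypothetical stunnel.conf on the client box: accept plain proxy
; traffic locally, carry it over TLS to the Squid https_port
[squid-tls]
client = yes
accept = 127.0.0.1:3127
connect = proxy.example.test:3129
```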





  I have also seen SSLBump that seems in that topic.


Nope, this is MITM on HTTPS. No per-user certificates involved.



  BTW, I would like the proxy to use User's certificate when
authenticating against other (external) servers.


It cannot. The SSL traffic which follows a certificate CANNOT be 
generated without the secret keys associated with the certificate. Squid 
does not have this information and can only be configured to use one set 
of keys for all DIRECT outgoing traffic.
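That single outgoing identity is what the sslproxy_* directives configure in Squid of this era; a sketch, with placeholder paths:

```
# one certificate/key pair presented for ALL outgoing SSL connections
sslproxy_client_certificate /etc/squid/client-cert.pem
sslproxy_client_key /etc/squid/client-key.pem
```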


What you have instead is a certificate authorizing Squid to open 
connections to external places, plus some ACL rules in squid.conf 
limiting which clients are allowed to go via HTTPS to those places. 
Those external places see Squid as the client software, even with regular 
HTTP traffic.


Amos



Re: [squid-users] Problem with squid 3.0 WCCP with Cisco ASA 5510

2011-03-14 Thread Amos Jeffries
On Mon, 14 Mar 2011 20:25:24 +0800, mr...@mail.altcladding.com.ph 
wrote:

hi List,

I'm trying to set up a Cisco ASA 5510 with Squid 3.0 WCCP and have already
followed the procedures from several sources on the website, but client
browsing still does not work. I can ping the public DNS of the website
we're trying to access from a client PC, but the problem is they cannot
connect when using the browser.


The ICMP protocol used by ping is not sent over the tunnel hops, so ping 
is meaningless when WCCP and similar diversions are involved.




We've created a GRE tunnel on the Squid box (running Linux):
# iptunnel add gre2 mode gre remote 172.16.9.11 local 172.16.9.14 dev 
eth0

# ifconfig gre2 127.0.0.2 up

(where 172.16.9.11 is the internal interface of our ASA and 
172.16.9.14 is

the IP of our squid proxy server)



So far so good (assuming the ASA likes those IPs too).

Then we've set up iptables to redirect port 80 to our proxy on port 
8080:


# iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT
--to-port 8080



You need a back-path NAT to make it symmetric. The easy way is 
MASQUERADE in the POSTROUTING chain.


Maybe rp_filter and forwarding as well.
http://wiki.squid-cache.org/Features/Wccp2#Squid_box_OS_configuration
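Combining those two suggestions, a sketch might look like the following; it prints the commands as a dry run (applying them needs root and the real interfaces), and eth0/gre2 are taken from the commands quoted above:

```shell
# Back-path NAT plus forwarding/rp_filter settings for a WCCP+GRE setup.
# Printed rather than executed so the list can be reviewed first.
cmds="iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.conf.eth0.rp_filter=0
sysctl -w net.ipv4.conf.gre2.rp_filter=0"

printf '%s\n' "$cmds"
```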


Our Squid 2.7.STABLE3 config file contains:

http_port 172.16.9.14:8080 transparent
wccp2_router 172.16.9.11


We can tell that WCCP connects because in the ASA we have:

ALTVPN# sh wccp

Global WCCP information:
Router information:
Router Identifier:   172.16.18.1


Here we are: the primary router identifier. By WCCP protocol convention 
(just to confuse things), this indicates the likely ID value for 
wccp2_router.


Try:
  wccp2_router 172.16.18.1


Protocol Version:2.0

Service Identifier: web-cache
Number of Cache Engines: 0


When Squid starts it sends a HERE_I_AM packet to the $wccp2_router.
That packet seems not to be getting through OR not being accepted by 
the ASA.


Try the above alternative IP. If that fails, it may be worth trying every 
other IP the router has.




Number of routers:   0
Total Packets Redirected:5595
Redirect access-list:-none-
Total Connections Denied Redirect:   0
Total Packets Unassigned:41
Group access-list:   -none-
Total Messages Denied to Group:  0
Total Authentication failures:   0
Total Bypassed Packets Received: 0

However, clients are getting timeouts when trying to browse the 
internet.

In the ASA logs, I'm seeing:

Denied ICMP type=3, code=3 from PROXY on interface inside
No matching connection for ICMP error message: icmp src inside:PROXY 
dst

identity: (type 3, code 3) on inside interface.


Interesting. I was of the understanding that WCCP is supposed to 
fail-open so clients have something equivalent to always-up service.




Please see also below running config we have on our Cisco ASA 5510 
Router:

dns-guard
!
interface Ethernet0/0
 nameif internet
 security-level 0
 ip address 122.3.237.69 255.255.255.240
 ospf cost 10
!
interface Ethernet0/1
 nameif LAN
 security-level 100
 ip address 172.16.9.11 255.255.255.0
 ospf cost 10
!
interface Ethernet0/2
 nameif DMZ
 security-level 50
 ip address 172.16.10.10 255.255.255.0
 ospf cost 10
!
interface Ethernet0/3
 description Connection to Proxy Server
 nameif LAN-TEST
 security-level 0
 ip address 172.16.18.1 255.255.255.0
!
interface Management0/0
 shutdown
 nameif management
 security-level 100
 no ip address
 ospf cost 10
 management-only



ALTVPN# sh route

Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
       i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, ia - IS-IS inter area
       * - candidate default, U - per-user static route, o - ODR
       P - periodic downloaded static route

Gateway of last resort is 122.3.237.65 to network 0.0.0.0

C    172.16.9.0 255.255.255.0 is directly connected, LAN
C    122.3.237.64 255.255.255.240 is directly connected, internet
S*   0.0.0.0 0.0.0.0 [1/0] via 122.3.237.65, internet



snip

Amos