Re: [squid-users] acl for redirect

2015-06-25 Thread Amos Jeffries
On 26/06/2015 2:36 a.m., Mike wrote:
> Amos, thanks for info.
> 
> The primary settings being used in squid.conf:
> 
> http_port 8080
> # this port is what will be used for SSL Proxy on client browser
> http_port 8081 intercept
> 
> https_port 8082 intercept ssl-bump connection-auth=off
> generate-host-certificates=on dynamic_cert_mem_cache_size=16MB
> cert=/etc/squid/ssl/squid.pem key=/etc/squid/ssl/squid.key
> cipher=ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:DHE-RSA-CAMELLIA128-SHA:AES128-SHA:RC4-SHA:HIGH:!aNULL:!MD5:!ADH
> 
> 
> sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid_ssl_db -M 16MB
> sslcrtd_children 50 startup=5 idle=1
> ssl_bump server-first all
> ssl_bump none localhost
> 
> 
> Then e2guardian uses 10101 for the browsers, and uses 8080 for
> connecting to squid on the same server.

Doesn't matter. Due to TLS security requirements Squid ensures the TLS
connection is re-encrypted on the outgoing side.


I am doubtful the nord trick works anymore, since Google's own documentation
for schools states that one must install a MITM proxy that does the traffic
filtering - e2guardian is not one of those. IMO you should convert your
e2guardian config into Squid ACL rules that can be applied to the bumped
traffic without forcing http://
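
A minimal sketch of that kind of conversion (the file paths and ACL names
here are illustrative, not taken from the e2guardian config):

 acl banned_domains dstdomain "/etc/squid/banned_domains.txt"
 acl banned_phrases url_regex -i "/etc/squid/banned_phrases.txt"
 http_access deny banned_domains
 http_access deny banned_phrases

After bumping, Squid evaluates these against the decrypted requests, so no
downgrade to http:// is needed.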

But if nord does work, so should the deny_info in Squid. Something like
this probably:

 acl google dstdomain .google.com
 deny_info 301:http://%H%R?nord=1 google

 acl GwithQuery urlpath_regex \?
 deny_info 301:http://%H%R&nord=1 GwithQuery

 http_access deny google GwithQuery
 http_access deny google


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Reg - Squid can cache the chrome OS updates.

2015-06-25 Thread Amos Jeffries
On 26/06/2015 4:36 p.m., Squid List wrote:
> Hi,
> 
> Can Squid cache Microsoft Updates and iOS updates?
> 
> If it can, please help me with caching Chrome OS updates on the latest
> Squid version, installed on CentOS 6.6.

The short answer (FWIW):

Squid can (and does) cache any HTTP content which is cacheable, with the
exception of 206 responses and PUT request payloads.


The long answer:

Whether the cached content is used depends entirely on what the client
requests. It has the power to request that cached content be ignored.

Whether content is cacheable depends entirely on what the server
delivers. It has the power to place limits on cache times up to and
including stating an object is already stale (ie not usefully cached).

There are also some mechanisms which, when used, MAY make content
completely untrustworthy and/or uncacheable:
* connection-based authentication (NTLM, Negotiate)
* traffic interception (NAT, TPROXY, SSL-Bump)
* broken Vary headers (though this causes caching when it shouldn't)


I hope that explains why you won't get a clear, simple answer to your
question.

To help any further we will need information about:
- what Squid version you are using (if it's not the latest 3.5 please try
an upgrade),
- how it's configured (squid.conf without the comment lines please),
- how it's being used (explicit forward, reverse, or interception proxy),
- what exactly the request messages you are trying to make into HITs are
("debug_options 11,2" produces a trace of those; see the snippet below),
- what response messages the server is delivering on the MISS (the same
11,2 trace),
- what Squid is logging for them (access.log entries).
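
As a minimal sketch, that trace can be enabled with a single squid.conf line
(leaving every other debug section at the default level):

 debug_options ALL,1 11,2

The section 11, level 2 entries then appear in cache.log and contain the full
request and reply headers for each transaction.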

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


[squid-users] Reg - Squid can cache the chrome OS updates.

2015-06-25 Thread Squid List

Hi,

Can Squid cache Microsoft Updates and iOS updates?

If it can, please help me with caching Chrome OS updates on the latest
Squid version, installed on CentOS 6.6.



Thanks & Regards,
Nithi

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 3.1 access_log and log module syslog sets program-name as (squid)

2015-06-25 Thread Amos Jeffries
On 25/06/2015 6:49 p.m., YogiBearNL aka Ronald wrote:
> Squid v2.7: 
> 
> Jun 25 08:36:37 proxy SQUID[16271]:
> 192.168.2.85 - - [25/Jun/2015:08:36:37 +0200] "GET
> http://tpc.googlesyndication.com/safeframe/1-0-2/html/container.html
> HTTP/1.1" 200 2439 "http://tweakers.net/"; "Mozilla/5.0 (Macintosh; Intel
> Mac OS X 10_8_0) AppleWebKit/400.5.3 (KHTML, like Gecko) Version/5.2.3
> Safari/427.8.5" TCP_MISS:DIRECT 
> 
> Squid v3.1.6: 
> 
> Jun 24 21:47:56 proxy
> (SQUID): 192.168.2.85 - - [24/Jun/2015:21:47:56 +0200] "GET
> http://cdn.viglink.com/images/pixel.gif? HTTP/1.1" 200 639
> "http://www.zdnet.com/blog/central-europe/"; "Mozilla/5.0 (Macintosh;
> Intel Mac OS X 10_8_0) AppleWebKit/400.5.3 (KHTML, like Gecko)
> Version/5.2.3 Safari/427.8.5" TCP_MISS:DIRECT 
> 
> When I try to parse the
> syslog lines, the ones with the (squid) as a program name fail because
> they are not normal syslog lines.
> Why is this happening ? And is this
> fixed in a later release ? Or maybe it's some configuration problem
> ?

Squid (both versions) is using the OS syslog() API to deliver these log
entries. The bits up to and including the '(SQUID):' and 'SQUID[16271]:'
are all generated by the system's syslog daemon, not by Squid itself.

This is weird output, but I think it's due to a change in the syslog
application.
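
For reference, the syslog log module involved is normally enabled with an
access_log line along these lines (the facility and priority here are
assumptions; adjust to the local syslog setup):

 access_log syslog:local4.info squid

The text Squid hands to syslog() starts at the client IP address; everything
before that (timestamp, hostname, program name, PID) is added by the syslog
daemon itself.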

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] acl for redirect

2015-06-25 Thread Mike

Amos, thanks for info.

The primary settings being used in squid.conf:

http_port 8080
# this port is what will be used for SSL Proxy on client browser
http_port 8081 intercept

https_port 8082 intercept ssl-bump connection-auth=off 
generate-host-certificates=on dynamic_cert_mem_cache_size=16MB 
cert=/etc/squid/ssl/squid.pem key=/etc/squid/ssl/squid.key 
cipher=ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:DHE-RSA-CAMELLIA128-SHA:AES128-SHA:RC4-SHA:HIGH:!aNULL:!MD5:!ADH


sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid_ssl_db -M 16MB
sslcrtd_children 50 startup=5 idle=1
ssl_bump server-first all
ssl_bump none localhost


Then e2guardian uses 10101 for the browsers, and uses 8080 for 
connecting to squid on the same server.


Yet what is happening is that there is the GET, then a CONNECT, and the
tunnel is created, never allowing squid to decrypt and pass the data along
to e2guardian. I suspect Google has changed their settings to deny any
proxy from intercepting, because we can type the most foul terms which
are in the "bannedssllist" for e2guardian and literally nothing is
filtered at all on google or youtube. Yet other secure sites like
wordpress, yahoo, and others are caught and blocked, so it is just
google-owned sites that are not.


More below...


On 6/24/2015 6:36 AM, Amos Jeffries wrote:

On 24/06/2015 11:03 a.m., Mike wrote:

We have a server setup using squid 3.5 and e2guardian (newer branch of
dansguardian), the issue is now google has changed a few things around
and google is no longer filtered which is not acceptable. We already
have the browser settings for SSL Proxy set to our server, and squid has
ssl-bump enabled and working. Previously there was enough unsecure
content on Google that the filtering was still working, but now google
has gone 100% encrypted meaning it is 100% unfiltered.

Maybe, maybe not.


What is happening
is it is creating an ssl tunnel (for lack of a better term) between

No. That is the correct and official term for what they are doing. And
"CONNECT tunnel" is the full phrase / name for the particular method of
tunnel creation.
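
For illustration, in that situation the only thing Squid (and therefore
e2guardian) ever receives from the browser is the tunnel request itself,
something like:

 CONNECT www.google.com:443 HTTP/1.1
 Host: www.google.com:443

Everything after that is opaque TLS unless Squid bumps the connection.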



their server and the browser, so all squid sees is the connection to
www.google.com, and after that it is tunneled and not recognized by
squid or e2guardian at all.

BUT ... you said you were SSL-Bump'ing. Which means you are decrypting
such tunnels to filter the content inside them.

So what is the problem? Is your method of bumping not decrypting the
Google traffic for Squid access controls and helpers to filter?

Note that DansGuardian and e2guardian, being independent HTTP proxies, are
not party to the SSL-Bump decrypted content inside Squid. Only Squid
internals and ICAP/eCAP services have access to it.
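
(For illustration only: if the filtering has to stay in an external service,
hooking it up as an ICAP service is the kind of arrangement that does get to
see the bumped traffic. The service name, port and URL below are assumptions,
not something e2guardian provides out of the box.)

 icap_enable on
 icap_service content_filter reqmod_precache icap://127.0.0.1:1344/request
 adaptation_access content_filter allow all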


I found a few options online that were used with older squid versions but
nothing is working with squid 3.5... Looking for something like this:

acl google dstdomain .google.com
deny_info http://www.google.com/webhp?nord=1 google

As you said Google have gone 100% HTTPS. URLs beginning with http:// are
not HTTPS nor accepted there anymore. If used they just get a 30x
redirect to an https:// URL.

Amos

This is why we are thinking we can force the redirect, if you have ideas
on how to do that. All google pages use the secure aspect, except when
that http://www.google.com/webhp?nord=1 is used; it forces use of the
insecure pages, and allows e2guardian filtering to work properly.


Thank you,

Mike


___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] I was wondering if someone has ever tried to use a SAN\NAS as the cache backend?

2015-06-25 Thread Kinkie
Hi Eliezer,
  it depends.
The problem is not the NAS/SAN per se, but the disk access patterns.
Squid's disk access pattern, regardless of the technology, is always
randomly-timed 4 KB writes (in the case of Rock they are sequential, in
*ufs scattered).
If the NAS/SAN uses a write-back policy, it is possible that by the
time it decides to flush to disk, squid has written a full stripe
and everyone will be happy (except for RAID5 or 10); this is
relatively likely in the case of Rock, unlikely in the case of *ufs.
But every time a write is not stripe-aligned, the NAS/SAN will have to
read and write N stripes (N >= 2 depending on the type of RAID). This
is a bit suboptimal for the NAS/SAN in the case of Rock, and it will
likely hurt the SAN/NAS performance in the case of *ufs.

If the SAN/NAS policy is not write-back but write-through, any
option (including rock) will adversely affect the SAN/NAS performance.
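
If someone does run such a test, a rock store is the natural starting point.
A sketch of the relevant squid.conf line (the mount point, size and slot size
are assumptions; the slot size should be matched to the array's stripe size):

 cache_dir rock /mnt/san/squid-rock 65536 slot-size=16384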

On Thu, Jun 25, 2015 at 2:09 PM, Eliezer Croitoru  wrote:
> Hello list,
>
> I was wondering if someone has ever tried to use a SAN\NAS as the cache
> backend?
> Since the rock cache type/dir changed the file handling from a "lots of
> files" db into a single cache db (plus one more), there is surely a way to
> benefit from NAS and SAN.
>
> If someone has used a SAN (iSCSI) or NAS (NFS) for any of the cache_dir
> types, I would like to run some tests, and you can help me not repeat old
> test results.
>
> Thanks,
> Eliezer
>
> ___
> squid-users mailing list
> squid-users@lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users



-- 
Francesco
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Questions Regarding Transparent Proxy, HTTPS, and ssl_bump

2015-06-25 Thread Klavs Klavsen

Hi Tom,

How did you succeed in filtering https traffic? Using http_access, or
the way James did it, using the domain name only?


Tom Mowbray wrote on 06/25/2015 02:06 PM:

James,

Thank you for your help.  Now that I have a better understanding of how
the https traffic is handled, I've been able to get things working as
intended.


-
Tom Mowbray
/tmowb...@dalabs.com/ 
/703-829-6694/

On Wed, Jun 24, 2015 at 2:05 PM, James Lay <j...@slave-tothe-box.net> wrote:

On 2015-06-24 11:46 AM, Tom Mowbray wrote:

James,

Yes, as a matter of fact I have read through those exact posts and
modeled my config very similarly.  What I have found is that,
however,
when the line "http_access allow SSL_ports" is placed above the
ssl_bump stuff and other acl's (as you have it), it seems to simply
allow ALL https without doing any filtering whatsoever.

Thanks for the response.

-Tom Mowbray
_tmowbray@dalabs.com_
_703-829-6694 _


On Wed, Jun 24, 2015 at 1:31 PM, James Lay <j...@slave-tothe-box.net> wrote:

On 2015-06-24 09:41 AM, Tom Mowbray wrote:

Squid 3.5.5

I seem to have some confusion about how acl lists are
processed
in
squid.conf regarding the handling of SSL (HTTPS) traffic,
attempting
to use ssl_bump directives with transparent proxy.

Based on available documentation, I believe my squid.conf is
correct,
however it never seems to actually behave as expected.

I define the SSL port, as usual:

acl SSL_ports port 443

But here's where my confusion lies... Many state to
place the
following line above the ssl_bump configuration lines:

http_access allow SSL_ports

However when I do this, it appears to simply stop
processing any
other
rules and allows ALL https traffic through the proxy
(which is
actually how I'd expect a standard ACL list to operate,
but then
how
do I actually filter the traffic through our
content-based ACL
lists?).
If I put the above line below the ssl_bump configuration
options
in
my squid.conf, then it appears to BUMP all, even though
I've told
the
config to SPLICE all https traffic, which doesn't work
for our
deployment.

So, does squid actually continue to process the https
traffic
using
the ssl_bump rules if the "http_access allow SSL_ports"
line is
placed
above it in the configuration?

I should note that we've been able to get filtering to work
correctly
when using our configuration in NON-transparent mode,
however our
goal
is to get this functionality working as a transparent
proxy. We're
unable to load our self-signed cert onto client machines
that
will be
accessing the proxy, so using the "bump" or
man-in-the-middle
style
https filtering isn't a viable option for us.

Any help or advice is appreciated!

Thanks,

Tom


Tom,

You kinda have to change the way you think about filtering when it
comes to Squid 3.5.5 and SSL(TLS). Normal http traffic is
easy...here's where we're trying to go and here's a list of places
we're allowed to go...simple.

Not so with SSL(TLS). Squid can't filter, since Squid may or may
not know where we're going...and that's the issue...it's where those
ssl_bump at_step ACLs come in. Some sites when you connect to them
are easy-ish...when you connect, your device sends a "Server Name
Indication" (SNI) that says where you're going. Other sites don't
have any information until you complete the SSL handshake (how can
you filter a site name until squid KNOWS the site, or at least the
domain name?).
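
A minimal sketch of that at_step arrangement (step1/step2/step3 are at_step
ACLs that James's full config, quoted later in this digest, assumes are
already defined; the list path is taken from that config):

 acl step1 at_step SslBump1
 acl step2 at_step SslBump2
 acl step3 at_step SslBump3
 acl allowed_https_sites ssl::server_name_regex "/opt/etc/squid/http_url.txt"

 ssl_bump peek step1 all
 ssl_bump peek step2 all
 ssl_bump splice step3 allowed_https_sites
 ssl_bump terminate all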

If you're still wanting to go through with transparent (intercept)
proxy with SSL, search through the list for my SSL Deep dive
posts...that config is working for me so far (granted, not in an
enterprise environment).

Re: [squid-users] Questions Regarding Transparent Proxy, HTTPS, and ssl_bump

2015-06-25 Thread James Lay
On Thu, 2015-06-25 at 08:06 -0400, Tom Mowbray wrote:
> James,
> 
> 
> 
> Thank you for your help.  Now that I have a better understanding of
> how the https traffic is handled, I've been able to get things working
> as intended.
> 
> 
> 
> 
> 
> -
> 
> Tom Mowbray
> 
> tmowb...@dalabs.com
> 703-829-6694
> 
> 
> 
> On Wed, Jun 24, 2015 at 2:05 PM, James Lay 
> wrote:
> 
> On 2015-06-24 11:46 AM, Tom Mowbray wrote:
> 
> James,
> 
> Yes, as a matter of fact I have read through those
> exact posts and
> modeled my config very similarly.  What I have found
> is that, however,
> when the line "http_access allow SSL_ports" is placed
> above the
> ssl_bump stuff and other acl's (as you have it), it
> seems to simply
> allow ALL https without doing any filtering
> whatsoever.
> 
> Thanks for the response.
> 
> -Tom Mowbray
> _tmowbray@dalabs.com_
> _703-829-6694_
> 
> 
> 
> On Wed, Jun 24, 2015 at 1:31 PM, James Lay
> 
> wrote:
> 
> 
> On 2015-06-24 09:41 AM, Tom Mowbray wrote:
> 
> 
> Squid 3.5.5
> 
> I seem to have some confusion about
> how acl lists are processed
> in
> squid.conf regarding the handling of
> SSL (HTTPS) traffic,
> attempting
> to use ssl_bump directives with
> transparent proxy.
> 
> Based on available documentation, I
> believe my squid.conf is
> correct,
> however it never seems to actually
> behave as expected.
> 
> I define the SSL port, as usual:
> 
> acl SSL_ports port 443
> 
> But here's where my confusion lies...
> Many state to place the
> following line above the ssl_bump
> configuration lines:
> 
> http_access allow SSL_ports
> 
> However when I do this, it appears to
> simply stop processing any
> other
> rules and allows ALL https traffic
> through the proxy (which is
> actually how I'd expect a standard ACL
> list to operate, but then
> how
> do I actually filter the traffic
> through our content-based ACL
> lists?).
> If I put the above line below the
> ssl_bump configuration options
> in
> my squid.conf, then it appears to BUMP
> all, even though I've told
> the
> config to SPLICE all https traffic,
> which doesn't work for our
> deployment.
> 
> So, does squid actually continue to
> process the https traffic
> using
> the ssl_bump rules if the "http_access
> allow SSL_ports" line is
> placed
> above it in the configuration?
> 
> I should note that we've been able to
> get filtering to work
> correctly
> when using our configuration in
> NON-transparent mode, however our
>

[squid-users] I was wondering if someone has ever tried to use a SAN\NAS as the cache backend?

2015-06-25 Thread Eliezer Croitoru

Hello list,

I was wondering if someone has ever tried to use a SAN\NAS as the cache 
backend?
Since the rock cache type/dir changed the file handling from a "lots of
files" db into a single cache db (plus one more), there is surely a way to
benefit from NAS and SAN.


If someone has used a SAN (iSCSI) or NAS (NFS) for any of the cache_dir
types, I would like to run some tests, and you can help me not repeat old
test results.


Thanks,
Eliezer

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Questions Regarding Transparent Proxy, HTTPS, and ssl_bump

2015-06-25 Thread Tom Mowbray
James,

Thank you for your help.  Now that I have a better understanding of how the
https traffic is handled, I've been able to get things working as intended.


-
Tom Mowbray
*tmowb...@dalabs.com* 
*703-829-6694*

On Wed, Jun 24, 2015 at 2:05 PM, James Lay  wrote:

> On 2015-06-24 11:46 AM, Tom Mowbray wrote:
>
>> James,
>>
>> Yes, as a matter of fact I have read through those exact posts and
>> modeled my config very similarly.  What I have found is that, however,
>> when the line "http_access allow SSL_ports" is placed above the
>> ssl_bump stuff and other acl's (as you have it), it seems to simply
>> allow ALL https without doing any filtering whatsoever.
>>
>> Thanks for the response.
>>
>> -Tom Mowbray
>> _tmowbray@dalabs.com_
>> _703-829-6694_
>>
>>
>> On Wed, Jun 24, 2015 at 1:31 PM, James Lay 
>> wrote:
>>
>>  On 2015-06-24 09:41 AM, Tom Mowbray wrote:
>>>
>>>  Squid 3.5.5

 I seem to have some confusion about how acl lists are processed
 in
 squid.conf regarding the handling of SSL (HTTPS) traffic,
 attempting
 to use ssl_bump directives with transparent proxy.

 Based on available documentation, I believe my squid.conf is
 correct,
 however it never seems to actually behave as expected.

 I define the SSL port, as usual:

 acl SSL_ports port 443

 But here's where my confusion lies... Many state to place the
 following line above the ssl_bump configuration lines:

 http_access allow SSL_ports

 However when I do this, it appears to simply stop processing any
 other
 rules and allows ALL https traffic through the proxy (which is
 actually how I'd expect a standard ACL list to operate, but then
 how
 do I actually filter the traffic through our content-based ACL
 lists?).
 If I put the above line below the ssl_bump configuration options
 in
 my squid.conf, then it appears to BUMP all, even though I've told
 the
 config to SPLICE all https traffic, which doesn't work for our
 deployment.

 So, does squid actually continue to process the https traffic
 using
 the ssl_bump rules if the "http_access allow SSL_ports" line is
 placed
 above it in the configuration?

 I should note that we've been able to get filtering to work
 correctly
 when using our configuration in NON-transparent mode, however our
 goal
 is to get this functionality working as a transparent proxy. We're
 unable to load our self-signed cert onto client machines that
 will be
 accessing the proxy, so using the "bump" or man-in-the-middle
 style
 https filtering isn't a viable option for us.

 Any help or advice is appreciated!

 Thanks,

 Tom

>>>
>>> Tom,
>>>
>>> You kinda have to change the way you think about filtering when it
>>> comes to Squid 3.5.5 and SSL(TLS). Normal http traffic is
>>> easy...here's where we're trying to go and here's a list of places
>>> we're allowed to go...simple.
>>>
>>> Not so with SSL(TLS). Squid can't filter, since Squid may or may
>>> not know where we're going...and that's the issue...it's where those
>>> ssl_bump at_step ACLs come in. Some sites when you connect to them
>>> are easy-ish...when you connect, your device sends a "Server Name
>>> Indication" (SNI) that says where you're going. Other sites don't
>>> have any information until you complete the SSL handshake (how can
>>> you filter a site name until squid KNOWS the site, or at least the
>>> domain name?).
>>>
>>> If you're still wanting to go through with transparent (intercept)
>>> proxy with SSL, search through the list for my SSL Deep dive
>>> posts...that config is working for me so far (granted, not in an
>>> enterprise environment). However, as Amos said, if you choose
>>> not to install the cert on the client machines, you are either a)
>>> going to be out of luck on LOTS of websites because they will fail
>>> the SSL handshake, or b) teaching your users to ignore the security
>>> warnings of their browsers...neither of which is a good thing.
>>>
>>> Hope that helps.
>>>
>>> James
>>>
>>>
> Tom,
>
> You are right...that absolutely will allow all SSL initially...the
> filtering is down lower in the config here:
>
> With single list of regex sites/domains like \.google\.com...peek, splice,
> no bump...I'm currently using this config section.
>
> 
> ssl_bump peek step1 all
> ssl_bump peek step2 all
> acl allowed_https_sites ssl::server_name_regex "/opt/etc/squid/http_url.txt"
> ssl_bump splice step3 allowed_https_sites
> ssl_bump terminate all
>
>
> With broken acl list of networks list 208.85.40.0/21
> ###
> ssl_bump peek step1 broken
> ssl_bump peek step2 broken
> ssl_bump splice broken
> ssl_bump peek 

Re: [squid-users] Questions Regarding Transparent Proxy, HTTPS, and ssl_bump

2015-06-25 Thread James Lay
On Thu, 2015-06-25 at 13:57 +1200, Jason Haar wrote:

> On 25/06/15 06:05, James Lay wrote:
> > openssl s_client -connect x.x.x.x:443 
> Just a FYI but you can make openssl do SNI which helps debugging (ie
> doing it your way and then doing it with SNI)
> 
> openssl s_client -connect x.x.x.x:443 -servername www.site.name
> 
> (that will allow squid to see www.site.name as the SNI)
> 


Thanks Jason...appreciate the heads up.

James
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP_MISS/503

2015-06-25 Thread Amos Jeffries
On 25/06/2015 4:48 p.m., Hector Chan wrote:
> Not sure if this will help you, but I saw 503s on my squid when the origin
> server has an invalid SSL certificate -- expired cert, self-signed cert,
> etc.
> 

Nod. They show up whenever Squid cannot successfully connect to the
server. That's what "503 Service Unavailable" means - unable to connect
to the server. And there are a huge number of reasons that may happen.
Anything that breaks the server connection encryption during SSL-Bump
would do it.
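
A quick way to check whether the origin server's certificate is the culprit
is to speak TLS to it directly (the host name below is a placeholder):

 openssl s_client -connect origin.example.com:443 -servername origin.example.com

The certificate chain and the verify result in that output usually show the
expired or self-signed condition Hector mentions.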

Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Mikrotik and Squid Transparent

2015-06-25 Thread Amos Jeffries
On 25/06/2015 12:45 p.m., Alex Samad wrote:
> Hi
> 
> why this, doesn't this block all traffic getting to the squid port.
> iptables -t mangle -A PREROUTING -p tcp --dport $SQUIDPORT -j DROP

All external traffic, yes - but only traffic that arrives already addressed
to the Squid port. The NAT interception happens afterward (in the nat table)
and works.

The point is that NAT intercept MUST only be done directly on the Squid
machine. A single external connection being accepted will result in a
forwarding loop DoS and the above protects against that.

> 
> 
> what I would do to test is run tcpdump on the squid box and capture
> all traffic coming to it on the squid listening port,

IIRC, you can't do that because tcpdump captures before NAT. It will not
show the intercepted traffic as arriving on the Squid listening port; at
capture time those packets still carry their original destination port.

Running Squid with -X or "debug_options ALL,9" would be better. You can
see in cache.log what Squid is receiving and what the NAT de-mangling is
actually doing.

Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users