[squid-users] Authorization via LDAP group

2010-04-12 Thread GIGO .

Authorizing users via LDAP group:
 
 
It is listed in the squid_ldap_group man page that using -D binddn -W secretfile is to be preferred over -D binddn -w password. While it provides extra security compared to putting the password in plaintext inside squid.conf, doesn't this query itself go over the network in cleartext? If this is a risk, how should the situation be handled?

1. Should we create a special account with the minimum rights required to query Active Directory?


2. Or should this query be performed over TLS? How can that be done?


3. Allowing anonymous queries can also be configured in Active Directory; however, it does not look appropriate. Maybe it poses no issue in a completely private setup.

 
Your guidance would be appreciated.
 

regards,
Bilal 
  

[squid-users] user based ACLs

2010-04-12 Thread Andrea Gallazzi
Hi, 
Can I restrict some users so that each can access only certain websites?


i.e. 


user1  can only go on www.website1.com
user2  can only go on www.website2.com

user1 and user2 are authenticated by NCSA auth.

thank you



[squid-users] nagios check_http module being denied on transparent proxy

2010-04-12 Thread Dayo Adewunmi

Hi

In my squid.conf I've got 'http_port 3128 transparent', and I configured my firewall to forward all requests from port 80 to 3128. Everything seems to be working fine, except for nagios. This is from the man page of the check_http module:

check_http v2053 (nagios-plugins 1.4.13)
Copyright (c) 1999 Ethan Galstad nag...@nagios.org
Copyright (c) 1999-2008 Nagios Plugin Development Team
   nagiosplug-de...@lists.sourceforge.net

This plugin tests the HTTP service on the specified host. It can test
normal (http) and secure (https) servers, follow redirects, search for
strings and regular expressions, check connection times, and report on
certificate expiration times.
This plugin will attempt to open an HTTP connection with the host.
Successful connects return STATE_OK, refusals and timeouts return STATE_CRITICAL, other errors return STATE_UNKNOWN.  Successful connects, but incorrect response messages from the host, result in STATE_WARNING return values.  If you are checking a virtual server that uses 'host headers' you must supply the FQDN (fully qualified domain name) as the [host_name] argument.

The module works for all servers on the LAN, except for the squid server 
(192.168.0.1) (which also happens to be the firewall server):


access.log:
12/Apr/2010:06:01:00 +0100 192.168.0.9 TCP_DENIED/400 1651 GET error:invalid-request NONE/- text/html

cache.log:
2010/04/12 06:01:00| clientReadRequest: FD 70 (192.168.0.9:58818) 
Invalid Request


If I manually run the check_http module on the nagios server (or from 
any other client):

$ ./check_http -I 192.168.0.1
HTTP WARNING: HTTP/1.0 400 Bad Request

But from the squid server:
$ ./check_http -I 192.168.0.1
HTTP OK HTTP/1.0 200 OK - 965 bytes in 0.000 seconds 
|time=0.000425s;;;0.00 siz0


I've been googling around, and the solutions I've found involve people doing things like not adding transparent to their http_port line, or defining the line twice, etc. None of that applies to my case, because I checked my squid.conf and the http_port line is fine.
What could be causing this HTTP issue?
Thanks

Dayo


[squid-users] help with squid error

2010-04-12 Thread Gavin McCullagh
Hi,

Could someone perhaps give me a clue as to the reason for the following 417 Expectation Failed error I'm getting back from Squid? This is an online video system with a Flash player, and it would appear to be Flash making a direct HTTP connection through the proxy.

Request packet fragment and wiresharked response are below.

Many thanks in advance,
Gavin

The request is (sorry it's not in a better format):


0000  00 0d 56 5e b5 00 00 1a a0 8c 7d 15 08 00 45 00   ..V^..}...E.
0010  01 4b 0d 16 40 00 40 06 c2 1e ac 10 01 03 ac 10   @.@.
0020  11 55 8b 45 1f 90 4b c1 5e 20 1d d6 8b 1e 80 18   .U.E..K.^ ..
0030  00 5c 6b b6 00 00 01 01 08 0a 00 16 53 0b 26 cb   .\k.S..
0040  55 7a 50 4f 53 54 20 68 74 74 70 3a 2f 2f 38 39   UzPOST http://89
0050  2e 32 30 37 2e 35 36 2e 31 30 37 2f 73 65 6e 64   .207.56.107/send
0060  2f 65 70 31 6d 62 52 51 75 36 48 58 57 56 59 7a   /ep1mbRQu6HXWVYz
0070  49 2f 31 20 48 54 54 50 2f 31 2e 31 0d 0a 48 6f   I/1 HTTP/1.1..Ho
0080  73 74 3a 20 38 39 2e 32 30 37 2e 35 36 2e 31 30   st: 89.207.56.10
0090  37 0d 0a 41 63 63 65 70 74 3a 20 2a 2f 2a 0d 0a   7..Accept: */*..
00a0  50 72 6f 78 79 2d 43 6f 6e 6e 65 63 74 69 6f 6e   Proxy-Connection
00b0  3a 20 4b 65 65 70 2d 41 6c 69 76 65 0d 0a 55 73   : Keep-Alive..Us
00c0  65 72 2d 41 67 65 6e 74 3a 20 53 68 6f 63 6b 77   er-Agent: Shockw
00d0  61 76 65 20 46 6c 61 73 68 0a 43 6f 6e 6e 65 63   ave Flash.Connec
00e0  74 69 6f 6e 3a 20 4b 65 65 70 2d 41 6c 69 76 65   tion: Keep-Alive
00f0  0a 43 61 63 68 65 2d 43 6f 6e 74 72 6f 6c 3a 20   .Cache-Control: 
0100  6e 6f 2d 63 61 63 68 65 0d 0a 43 6f 6e 74 65 6e   no-cache..Conten
0110  74 2d 54 79 70 65 3a 20 61 70 70 6c 69 63 61 74   t-Type: applicat
0120  69 6f 6e 2f 78 2d 66 63 73 0d 0a 43 6f 6e 74 65   ion/x-fcs..Conte
0130  6e 74 2d 4c 65 6e 67 74 68 3a 20 31 35 33 37 0d   nt-Length: 1537.
0140  0a 45 78 70 65 63 74 3a 20 31 30 30 2d 63 6f 6e   .Expect: 100-con
0150  74 69 6e 75 65 0d 0a 0d 0a                        tinue....


The response:

Hypertext Transfer Protocol
HTTP/1.0 417 Expectation failed\r\n
[Expert Info (Chat/Sequence): HTTP/1.0 417 Expectation failed\r\n]
[Message: HTTP/1.0 417 Expectation failed\r\n]
[Severity level: Chat]
[Group: Sequence]
Request Version: HTTP/1.0
Response Code: 417
Server: squid/2.7.STABLE3\r\n
Date: Mon, 12 Apr 2010 10:16:26 GMT\r\n
Content-Type: text/html\r\n
Content-Length: 1451\r\n
[Content length: 1451]
Expires: Mon, 12 Apr 2010 10:16:26 GMT\r\n
X-Squid-Error: ERR_INVALID_REQ 0\r\n
X-Cache: MISS from muinnamuice.staff.gcd.ie\r\n
X-Cache-Lookup: NONE from muinnamuice.staff.gcd.ie:8080\r\n
Via: 1.0 muinnamuice.staff.gcd.ie:8080 (squid/2.7.STABLE3)\r\n
Connection: close\r\n
\r\n
Line-based text data: text/html
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">\n
<HTML><HEAD><META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">\n
<TITLE>ERROR: The requested URL could not be retrieved</TITLE>\n
<STYLE type="text/css"><!--BODY{background-color:#ffffff;font-family:verdana,sans-serif}PRE{font-family:sans-serif}--></STYLE>\n
</HEAD><BODY>\n
<H1>ERROR</H1>\n
<H2>The requested URL could not be retrieved</H2>\n
<HR noshade size="1px">\n
<P>\n
While trying to process the request:\n
<PRE>\n
POST /send/ep1mbRQu6HXWVYzI/1 HTTP/1.1\n
Host: 89.207.56.107\r\n
Accept: */*\r\n
Proxy-Connection: Keep-Alive\r\n
User-Agent: Shockwave Flash\r\n
Connection: Keep-Alive\r\n
Cache-Control: no-cache\r\n
Content-Type: application/x-fcs\r\n
Content-Length: 1537\r\n
Expect: 100-continue\r\n
\n
</PRE>\n
<P>\n
The following error was encountered:\n
<UL>\n
<LI>\n
<STRONG>\n
Invalid Request\n
</STRONG>\n
</UL>\n
\n
<P>\n
Some aspect of the HTTP Request is invalid.  Possible problems:\n
<UL>\n
<LI>Missing or unknown request method\n
<LI>Missing URL\n
<LI>Missing HTTP Identifier (HTTP/1.0)\n
<LI>Request is too large\n
<LI>Content-Length missing for POST or PUT requests\n
<LI>Illegal character in hostname; underscores are not allowed\n
</UL>\n
<P>Your cache administrator is <A HREF="mailto:helpd...@gcd.ie">helpd...@gcd.ie</A>.\n
\n
<BR clear="all">\n
<HR noshade size="1px">\n
<ADDRESS>\n
Generated Mon, 12 Apr 2010 10:16:26 GMT by muinnamuice.staff.gcd.ie (squid/2.7.STABLE3)\n
</ADDRESS>\n
</BODY></HTML>\n



Re: [squid-users] user based ACLs

2010-04-12 Thread Amos Jeffries

Andrea Gallazzi wrote:

Hi, Can I restrict some users so that each can access only certain websites?

i.e.
user1  can only go on www.website1.com
user2  can only go on www.website2.com

user1 and user2 are authenticated by NCSA auth.

thank you



http://wiki.squid-cache.org/SquidFaq/SquidAcl#And.2BAC8-Or_logic
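
For example, a minimal sketch along those lines (assuming the two login names are literally user1 and user2; untested):

  acl user1 proxy_auth user1
  acl user2 proxy_auth user2
  acl site1 dstdomain www.website1.com
  acl site2 dstdomain www.website2.com
  http_access allow user1 site1
  http_access allow user2 site2
  http_access deny all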

Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1


[squid-users] connection limit and X-Forwarded-For IP

2010-04-12 Thread Mario Remy Almeida

Hi All,

Recently I configured Squid as a reverse proxy for a back-end Apache server running Drupal.


acl airarabia_web dstdomain www.airarabia.com
cache_peer 10.4.171.6 parent 80 0 no-query originserver 
name=airarabia_peer2 round-robin forceddomain=www.airarabia.com default
# cache_peer 10.4.171.7 parent 80 0 no-query originserver 
name=airarabia_peer1 round-robin forceddomain=www.airarabia.com default 
# not yet implemented

cache_peer_access airarabia_peer2 allow airarabia_web
cache_peer_access airarabia_peer2 deny all

Problem 1:-
With Apache I had a connection limit of 20 per IP (mod_limitipconn.so).

I need to achieve this with the Squid reverse proxy.
Please let me know if the configuration below is correct.

===
acl connectionLimit maxconn 20
acl airarabia_web dstdomain www.airarabia.com
cache_peer 10.4.171.6 parent 80 0 no-query originserver 
name=airarabia_peer2 round-robin forceddomain=www.airarabia.com default

cache_peer_access airarabia_peer2 allow airarabia_web connectionLimit
cache_peer_access airarabia_peer2 deny all
===

Problem 2:-
After configuring the reverse proxy, the Apache back-end server sees the IP of the reverse proxy rather than that of the actual clients.


   squid.conf
===
follow_x_forwarded_for allow airarabia_web
follow_x_forwarded_for deny all
acl_uses_indirect_client on
delay_pool_uses_indirect_client on
log_uses_indirect_client on
===

I will work on a HOWTO for mod_extract_forwarded, but in the meantime, could someone verify whether the above squid.conf for problem 2 is correct?


//Remy



[squid-users] Re: Squid url_rewrite_program problem

2010-04-12 Thread txlombardi

Amos,

Thanks for your reply and help.  The Squid version as shown in Add/Remove Programs on Fedora 12 reads Squid-7:3.1.0.17, which I assume is version 3.1.  SquidGuard is squidGuard-1.4-8.fc12.

I am trying to follow the instructions of Alex Vanherwijnen (link in original post) to create a captive portal.  Everything seems fine except for the url_rewrite_program, which crashes Squid.  The advantage in using his solution is that it maintains a database of user connections, so only one redirect to the login page happens during the session time period.

I did look at the two links you posted.  The session helper looks like a
better solution.  The problem is this is all a new area for me and I'm not
sure I can pull that off without some examples to look at.  Do you know of
any that are posted?  

One last thought: should I be trying to use SquidGuard for the captive portal?  It would seem to be somehow involved in the redirect.  The service is running, but I have not configured it in any way.

Tony


Re: [squid-users] Authorization via LDAP group

2010-04-12 Thread Amos Jeffries

GIGO . wrote:

Authorizing users via LDAP group:


It is listed in the squid_ldap_group man page that using -D binddn -W secretfile is to be preferred over -D binddn -w password. While it provides extra security compared to putting the password in plaintext inside squid.conf, doesn't this query itself go over the network in cleartext? If this is a risk, how should the situation be handled?



The reasoning goes that if the squid.conf gets compromised, then the 
password itself is secured in a sub-file which hopefully is harder to 
compromise.


It's very easy to compromise any content of squid.conf; the squid.conf may be posted here or elsewhere when asking for help, or the cachemgr password, which allows access to a full squid.conf dump, may be compromised.


Using the -W option means that the secret file is only read internally 
to the helper and used in the post-connection LDAP binding. It's up to 
you whether you configure the LDAP helper to use TLS and secure the wire 
or not.




2. Or should this query be performed over TLS? How can that be done?



See the helper man page you already found for the relevant command-line arguments. The server-side portion, someone else will need to help with.
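
As a rough sketch only (the helper path, server name, bind DN, base DN and secret-file path below are all placeholders, and your existing group-search filter options go where the ... is):

  external_acl_type ldap_group %LOGIN /usr/lib/squid/squid_ldap_group -Z -h dc1.example.local -b "dc=example,dc=local" -D "cn=squidproxy,cn=Users,dc=example,dc=local" -W /etc/squid/ldappass ...
  acl InternetUsers external ldap_group "Internet Users"
  http_access allow InternetUsers

Here -Z asks the helper to TLS-encrypt the LDAP connection, and -W keeps the bind password out of squid.conf as discussed above.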




3. Allowing anonymous queries can also be configured in Active Directory; however, it does not look appropriate. Maybe it poses no issue in a completely private setup.


That's a decision you need to make. I agree it does look suspect to choose that if you want security.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1


Re: [squid-users] help with squid error

2010-04-12 Thread Amos Jeffries

Gavin McCullagh wrote:

Hi,

Could someone perhaps give me a clue as to the reason for the following 417 Expectation Failed error I'm getting back from Squid? This is an online video system with a Flash player, and it would appear to be Flash making a direct HTTP connection through the proxy.


snip


The response:

Hypertext Transfer Protocol
HTTP/1.0 417 Expectation failed\r\n
[Expert Info (Chat/Sequence): HTTP/1.0 417 Expectation failed\r\n]
[Message: HTTP/1.0 417 Expectation failed\r\n]
[Severity level: Chat]
[Group: Sequence]
Request Version: HTTP/1.0
Response Code: 417

...

P\n
While trying to process the request:\n
PRE\n
POST /send/ep1mbRQu6HXWVYzI/1 HTTP/1.1\n

...

Expect: 100-continue\r\n


Squid is following RFC 2616 requirements.  When an HTTP/1.1 request containing Expect: 100-continue is going to pass through an HTTP/1.0 proxy or server which can't handle the 100 status messages, a 417 message MUST be sent back instead.


The expected result is that the client software will retry immediately without the Expect: 100-continue conditions. Failing that, it's probably broken software.


Please complain to the Flash player authors. Refer them to the RFC 2616 sections on the Expect: header and 417 status code handling.


Squid provides the ignore_expect100 configuration option, which prevents the 417 being sent. However, be aware that all this does is suppress the 417; the request will instead hang for an unknown, but usually long, time before anything further happens.
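
If you accept that trade-off anyway, it is a single squid.conf directive (a sketch; check your version's documentation for availability, as this option belongs to the 2.7/3.1 era discussed here):

  ignore_expect100 on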


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1


Re: [squid-users] connection limit and X-Forwarded-For IP

2010-04-12 Thread Amos Jeffries

Mario Remy Almeida wrote:

Hi All,

Recently I configured Squid as a reverse proxy for a back-end Apache server running Drupal.


acl airarabia_web dstdomain www.airarabia.com
cache_peer 10.4.171.6 parent 80 0 no-query originserver 
name=airarabia_peer2 round-robin forceddomain=www.airarabia.com default
# cache_peer 10.4.171.7 parent 80 0 no-query originserver 
name=airarabia_peer1 round-robin forceddomain=www.airarabia.com default 
# not yet implemented

cache_peer_access airarabia_peer2 allow airarabia_web
cache_peer_access airarabia_peer2 deny all

Problem 1:-
With Apache I had a connection limit of 20 per IP (mod_limitipconn.so).

I need to achieve this with the Squid reverse proxy.
Please let me know if the configuration below is correct.


You should not really need this with Squid. FDs in Squid are very lightweight and do not block whole threads like they do in Apache.




===
acl connectionLimit maxconn 20


Missing:

  http_access deny connectionLimit


acl airarabia_web dstdomain www.airarabia.com
cache_peer 10.4.171.6 parent 80 0 no-query originserver 
name=airarabia_peer2 round-robin forceddomain=www.airarabia.com default

cache_peer_access airarabia_peer2 allow airarabia_web connectionLimit


The above will cause denial if FEWER than 20 connections are open.

Seems strange until you consider that connectionLimit is only true once 20 connections are present from a single IP; below that, the allow line does not match, and the following line takes effect:



cache_peer_access airarabia_peer2 deny all
===
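
Putting that correction together, a sketch of the intended per-IP cap (note that maxconn matches once a client exceeds the count, hence it belongs on a deny line):

===
acl connectionLimit maxconn 20
http_access deny connectionLimit

acl airarabia_web dstdomain www.airarabia.com
cache_peer 10.4.171.6 parent 80 0 no-query originserver name=airarabia_peer2 round-robin forceddomain=www.airarabia.com default

cache_peer_access airarabia_peer2 allow airarabia_web
cache_peer_access airarabia_peer2 deny all
===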

Problem 2:-
After configuring the reverse proxy, the Apache back-end server sees the IP of the reverse proxy rather than that of the actual clients.


Your problem description describes the config:

  forwarded_for on




   squid.conf
===
follow_x_forwarded_for allow airarabia_web
follow_x_forwarded_for deny all
acl_uses_indirect_client on
delay_pool_uses_indirect_client on
log_uses_indirect_client on
===

I will work on a HOWTO for mod_extract_forwarded, but in the meantime, could someone verify whether the above squid.conf for problem 2 is correct?


It does not match your problem description. It configures Squid to log and run ACL tests based on the remote client IP outside your trusted edge, which is useful only for hierarchies and clusters of proxies that need to ignore the internal relay chain in their security tests.
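
For the problem you actually described, the Squid side is just forwarded_for on (the default); the rest is Apache's job. As a stop-gap before mod_extract_forwarded is in place, Apache can at least log the client IP from the header Squid adds. A hedged sketch (log path and format name are placeholders):

squid.conf:
  forwarded_for on

httpd.conf:
  LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" proxied
  CustomLog logs/access_log proxied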


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1


Re: [squid-users] Re: Squid url_rewrite_program problem

2010-04-12 Thread Amos Jeffries

txlombardi wrote:

Amos,

Thanks for your reply and help.  The Squid version as shown in Add/Remove Programs on Fedora 12 reads Squid-7:3.1.0.17, which I assume is version 3.1.  SquidGuard is squidGuard-1.4-8.fc12.


Ah, a beta with known issues. A 3.1 production release of Squid might 
prove more stable.





I am trying to follow the instructions of Alex Vanherwijnen (link in original post) to create a captive portal.  Everything seems fine except for the url_rewrite_program, which crashes Squid.  The advantage in using his solution is that it maintains a database of user connections, so only one redirect to the login page happens during the session time period.


Might be a problem with 3.1.0.17 that's not been mentioned before. Probably SELinux, though.


If you find you need the extra control Alex Vanherwijnen's script offers, then it should be fine to go with it instead. After a few small alterations it can fit into the same place the squid_session helper does.




I did look at the two links you posted.  The session helper looks like a
better solution.  The problem is this is all a new area for me and I'm not
sure I can pull that off without some examples to look at.  Do you know of
any that are posted?


The EXAMPLE section of that manual contains the entire squid.conf 
snippet needed to setup and run the helper and redirect to a splash page.


All you need do is insert the http_access rule into your other 
http_access rules.
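
For reference, this is the kind of snippet that EXAMPLE section gives (the helper path and the splash URL are placeholders; check your local man page for the exact form):

  external_acl_type session ttl=300 negative_ttl=0 children=1 %SRC /usr/lib/squid/squid_session
  acl session external session
  http_access deny !session
  deny_info http://your.server/bannerpage?url=%s session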




One last thought: should I be trying to use SquidGuard for the captive portal?  It would seem to be somehow involved in the redirect.  The service is running, but I have not configured it in any way.


The only justifiable use I've seen for squidGuard is URL filtering with lots of regexes, or extremely large domain lists, etc.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1


Re: [squid-users] nagios check_http module being denied on transparent proxy

2010-04-12 Thread Amos Jeffries

Dayo Adewunmi wrote:

Hi

In my squid.conf I've got 'http_port 3128 transparent', and I configured my firewall to forward all requests from port 80 to 3128. Everything seems to be working fine, except for nagios. This is from the man page of the check_http module:

check_http v2053 (nagios-plugins 1.4.13)
Copyright (c) 1999 Ethan Galstad nag...@nagios.org
Copyright (c) 1999-2008 Nagios Plugin Development Team
   nagiosplug-de...@lists.sourceforge.net

This plugin tests the HTTP service on the specified host. It can test
normal (http) and secure (https) servers, follow redirects, search for
strings and regular expressions, check connection times, and report on
certificate expiration times.
This plugin will attempt to open an HTTP connection with the host.
Successful connects return STATE_OK, refusals and timeouts return STATE_CRITICAL, other errors return STATE_UNKNOWN.  Successful connects, but incorrect response messages from the host, result in STATE_WARNING return values.  If you are checking a virtual server that uses 'host headers' you must supply the FQDN (fully qualified domain name) as the [host_name] argument.

The module works for all servers on the LAN, except for the squid server 
(192.168.0.1) (which also happens to be the firewall server):


access.log:
12/Apr/2010:06:01:00 +0100 192.168.0.9 TCP_DENIED/400 1651 GET error:invalid-request NONE/- text/html

cache.log:
2010/04/12 06:01:00| clientReadRequest: FD 70 (192.168.0.9:58818) 
Invalid Request


If I manually run the check_http module on the nagios server (or from 
any other client):

$ ./check_http -I 192.168.0.1
HTTP WARNING: HTTP/1.0 400 Bad Request



Been a while since I faced this with nagios. IIRC there is no Host: 
header in the nagios test requests. This header is REQUIRED for requests 
to travel over an intercepting proxy.
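
If that is the cause, check_http can be told to send one via its -H option (the hostname below is illustrative):

  $ ./check_http -H www.example.com -I 192.168.0.1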




But from the squid server:
$ ./check_http -I 192.168.0.1
HTTP OK HTTP/1.0 200 OK - 965 bytes in 0.000 seconds 
|time=0.000425s;;;0.00 siz0


I've been googling around, and the solutions I've found involve people doing things like not adding transparent to their http_port line, or defining the line twice, etc. None of that applies to my case, because I checked my squid.conf and the http_port line is fine.


No. The syntax is fine and loads (for now). However, "fine" is not a good word for the current state of it.


Squid is vulnerable to CVE-2009-0801, which means that if your http_port with the transparent flag is accessible or easily guessed, your proxy can be abused to poison your entire network's HTTP traffic. All it takes is one infected client and the whole network is compromised.


The only way traffic should be able to enter a transparent- or intercept-flagged port is by hitting the firewall NAT rules that re-write packets to go there directly. The port should be blocked (pre-NAT) where possible from any external access outside the box itself.


This means that port 3128 should be reserved for normal forward-proxy traffic, such as your nagios proxy tests, and another, non-advertised port used for the Squid end of the firewall-Squid linkage.
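
A sketch of that arrangement (the second port number and the interface name are placeholders):

squid.conf:
  # forward-proxy traffic, e.g. the nagios checks
  http_port 3128
  # reachable only via the firewall redirect rule below
  http_port 3129 transparent

firewall (Linux iptables example):
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3129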


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.1


Re: [squid-users] help with squid error

2010-04-12 Thread Gavin McCullagh
Hi Amos,

On Tue, 13 Apr 2010, Amos Jeffries wrote:

 Squid is following RFC 2616 requirements.  When an HTTP/1.1 request
 containing Expect: 100-continue is going to pass through an
 HTTP/1.0 proxy or server which can't handle the 100 status
 messages, a 417 message MUST be sent back instead.
 
 The expected result is that the client software will retry
 immediately without the Expect: 100-continue conditions. Failing
 that, it's probably broken software.

I see, thanks.  I've been trying to track down why I can't get the flash
player video to play.

While this failure is mixed in there, I think this may be a red herring for
the problem I'm experiencing.  Others going through the same proxy have had
success.

Thanks again for your help,
Gavin



Re: [squid-users] [Urgent] Please help : NAT + squid2.7 on ubuntu server 9.10 + cisco firewall (ASA5510)

2010-04-12 Thread Horacio H.
2010/4/8 Vichao Saenghiranwathana vich...@gmail.com:

 I'm still stunned. Can you explain in more detail so I can
 understand what the problem is?


Hi Vichao,

If you already have a static NAT translation at the ASA between these two addresses, 192.168.9.251 and 203.130.133.9, it doesn't make sense to me why you also configured the same public IP address on the second subinterface.  Unless you need it for an unrelated setup, you may want to remove the second subinterface because, if you also configured a default gateway there, external packets destined for the address 203.130.133.9 might cause the ASA to NAT packets that shouldn't be NATed, or vice versa.

Aside from that, if the issue persists, your next clue lies in collecting all the info your ASA shows about the WCCP association/registration, and in monitoring the counters of the GRE tunnel and the active iptables rules and default policies.
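
For example (illustrative commands; device and rule names vary by setup):

On the ASA:
  show wccp

On the Squid box:
  iptables -t nat -L -n -v     # per-rule packet counters
  ip -s link show              # per-interface counters, including the GRE tunnel device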

I hope this comment was helpful. I have a similar setup and it works fine.

Regards,
Horacio.


RE: [squid-users] Squid 3.1.1 and flash video scrubbing

2010-04-12 Thread David Robinson
Isn't it irrelevant whether it implements range requests with the fs= parameter? With the fs= parameter it becomes a unique URI, and Squid should treat it as a separate object even if it does have a Content-Range header.

Do we know enough to be sure this is a Squid bug, or should I be contacting RedTube and YouPorn to fix their broken code?

-Original Message-
From: Mark Nottingham [mailto:m...@yahoo-inc.com] 
Sent: Friday, April 09, 2010 7:35 PM
To: Henrik Nordström
Cc: Amos Jeffries; squid-users@squid-cache.org
Subject: Re: [squid-users] Squid 3.1.1 and flash video scrubbing


On 09/04/2010, at 9:05 PM, Henrik Nordström wrote:

 We don't know how the server would react to Range requests on this
 ranged fs=.. object. Maybe it implements them, maybe it doesn't.


RED says it doesn't.

--
Mark Nottingham   m...@yahoo-inc.com