[squid-users] Re: SQUID3 and https: Error negotiating SSL connection

2013-02-21 Thread skylab
Hi, thank you for your replies.
How can I verify my ca-certificate list? And how can I update it?
Thank you very much.

Skylab



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/SQUID3-and-https-Error-negotiating-SSL-connection-tp4658592p4658602.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] DNS Queue Remains Filled Issue!

2013-02-21 Thread Arshan Awais
Hi,

I have a query regarding the "DNS Returned Timeout" issue. I have searched 
various forums regarding this issue, but the solutions described there do not 
fit my needs. 


Now, coming to the issue: I have configured squid for web caching and allowed 
just 100MB of disk space for caching. When I start the proxy, it works fine for 
some minutes, until all the systems in the network get a "DNS Returned Timeout" 
message in their browsers. This error is gone upon restarting squid. 


I have checked the iDNS status using 


#squidclient mgr:idns

I see that initially the queue is empty, but when the timeout error appears in 
browsers, the queue is filled with around 9 to 10 ids and this queue does not 
get empty. 


Kindly give some suggestions for the solution. Thanks

Regards,

Arshan Awais



[squid-users] Question about "proxy_auth REQUIRED" and the case of flushing the authentication-cache

2013-02-21 Thread Tom Tom
Hi

With squid 3.2.7, I have the following curiosity:

SCENARIO 1
<>
acl AUTHENTICATED proxy_auth REQUIRED
external_acl_type SQUID_KERB_LDAP ttl=7200 children-max=20
children-startup=5 children-idle=1 negative_ttl=7200 %LOGIN
/usr/local/squid/libexec/ext_kerberos_ldap_group_acl -g "XXX"
acl INTERNET_ACCESS external SQUID_KERB_LDAP
...
...
http_access deny !INTERNET_ACCESS
http_access deny !AUTHENTICATED
http_access allow INTERNET_ACCESS AUTHENTICATED
http_access deny all

With the config above, I have the following lines in the access.log:
[Thu Feb 21 06:56:45 2013].167 38 XXX TCP_REFRESH_UNMODIFIED/304
332 GET http://imagesrv.adition.com/banners/750/683036/dummy.gif USER
FIRSTUP_PARENT/XXX image/gif
[Thu Feb 21 06:57:04 2013].621 38 XXX TCP_REFRESH_UNMODIFIED/304
261 GET http://imagesrv.adition.com/banners/750/683036/dummy.gif USER
FIRSTUP_PARENT/XXX image/gif



SCENARIO 2
<>
external_acl_type SQUID_KERB_LDAP ttl=7200 children-max=20
children-startup=5 children-idle=1 negative_ttl=7200 %LOGIN
/usr/local/squid/libexec/ext_kerberos_ldap_group_acl -g "XXX"
acl INTERNET_ACCESS external SQUID_KERB_LDAP
...
...
http_access deny !INTERNET_ACCESS
http_access allow INTERNET_ACCESS
http_access deny all


Now, the same request looks like this:
[Thu Feb 21 06:55:59 2013].086  0 XXX TCP_DENIED/407 4153 GET
http://imagesrv.adition.com/banners/750/683036/dummy.gif - HIER_NONE/-
text/html
[Thu Feb 21 06:55:59 2013].135 44 XXX TCP_REFRESH_UNMODIFIED/304
332 GET http://imagesrv.adition.com/banners/750/683036/dummy.gif USER
FIRSTUP_PARENT/XXX image/gif

A tcpdump shows that the "authorization" header is not sent in the
first request. In scenario 2, the authorization header is sent after
the TCP_DENIED/407 response from squid (normal behavior). In scenario
1, squid responds directly with 304.

What is the influence of "AUTHENTICATED" in the first example, such that
the request is not re-authenticated? Why does squid need to re-authenticate
(TCP_DENIED/407) without the "AUTHENTICATED" tag on the "http_access"
line (scenario 2)? Is it possible that with the "AUTHENTICATED" tag
squid uses the authentication cache, and that without the "AUTHENTICATED"
tag squid does not use the authentication cache, or flushes the
cache entry for every request?

I have other squids running (3.1.20) which are configured like
scenario 2 but behave like scenario 1. Why does squid 3.1.20 act
differently from 3.2.7?

With "debug_options 29,9" (see below) in squid 3.2.7, I see that in
the "wrong case" (without the AUTHENTICATED tag on the http_access
line), squid is "freeing request 0x1646830". When I request the same
file again, squid responds first with a "TCP_DENIED/407". Does
the "freeing" mean that squid "flushes" its authentication cache and
therefore needs to re-authenticate this request every time?
2013/02/21 08:43:58.583 kid1| UserRequest.cc(506) addReplyAuthHeader:
headertype:76 authuser:0x1646830*3
2013/02/21 08:43:58.583 kid1| UserRequest.cc(126) releaseAuthServer:
No Negotiate auth server to release.
2013/02/21 08:43:58.583 kid1| UserRequest.cc(125) ~UserRequest:
freeing request 0x1646830

I can also see that in the wrong case (re-authenticate), squid
flushes its cache and makes a new entry with a new TTL for the
same request:
$ squidclient mgr:username_cache
HTTP/1.1 200 OK
Server: squid
Mime-Version: 1.0
Date: Thu, 21 Feb 2013 08:36:14 GMT
Content-Type: text/plain
Expires: Thu, 21 Feb 2013 08:36:14 GMT
Last-Modified: Thu, 21 Feb 2013 08:36:14 GMT
X-Cache: MISS from XXX
Via: 1.1 XXX (squid)
Connection: close

Cached Usernames: 1 of 7921
Next Garbage Collection in 35 seconds.

Type            State  Check TTL  Cache TTL  Username
--------------- ------ ---------- ---------- ---------
AUTH_NEGOTIATE  Ok     -1         3600       USER
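To watch whether the Cache TTL keeps decrementing or is reset, one could pull the Cache TTL column for a given user out of that table. A minimal sketch, assuming the five-column row layout shown above (the sample row below is canned, not live output):

```shell
#!/bin/sh
# Pull the "Cache TTL" column for a given username out of a
# mgr:username_cache table. Assumes rows end in:
#   ... <check-ttl> <cache-ttl> <username>
cache_ttl() {
    awk -v user="$1" '$NF == user { print $(NF - 1) }'
}

# Canned sample row mirroring the table above (not live output):
printf 'AUTH_NEGOTIATE Ok -1 3600 USER\n' | cache_ttl USER   # prints: 3600
```

Run periodically against `squidclient mgr:username_cache | cache_ttl USER` to see whether the TTL counts down (good case) or jumps back up (entry re-created).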



In the "good case", squid does not throw away the cache entry and the
TTL keeps decrementing (even after I make new requests) -> expected
behavior.

So, why does squid flush the authentication cache for every request
when I use "http_access allow INTERNET_ACCESS" (without the
AUTHENTICATED tag)? And why does squid 3.1.20 behave differently?
Probably a bug?

Any explanations/hints for this behavior? Many many thanks.
Tom


[squid-users] HAVP alternative for traffic scanning?

2013-02-21 Thread Henri Wahl
Hello world,
does anybody know a good replacement for the HTTP AntiVirus
Proxy HAVP? We want to do online virus scanning, where HAVP does a good
job, but there seems to be not much development (e.g. IPv6) and there are
some performance issues. Therefore I am looking for an alternative.
Thanks + regards
Henri Wahl

-- 
Henri Wahl

IT Department
Leibniz-Institut für Festkörper- u.
Werkstoffforschung Dresden

tel: (03 51) 46 59 - 797
email: h.w...@ifw-dresden.de
http://www.ifw-dresden.de

Nagios status monitor Nagstamon:
http://nagstamon.ifw-dresden.de

DHCPv6 server dhcpy6d:
http://dhcpy6d.ifw-dresden.de

IFW Dresden e.V., Helmholtzstraße 20, D-01069 Dresden
VR Dresden Nr. 1369
Vorstand: Prof. Dr. Ludwig Schultz, Dr. h.c. Dipl.-Finw. Rolf Pfrengle



smime.p7s
Description: S/MIME cryptographic signature


Re: [squid-users] HAVP alternative for traffic scanning?

2013-02-21 Thread Ralf Hildebrandt
* Henri Wahl :
> Hello world,
> does anybody know a good replacement for the HTTP AntiVirus
> Proxy HAVP? We want to do online virus scanning, where HAVP does a good
> job, but there seems to be not much development (e.g. IPv6) and there are
> some performance issues. Therefore I am looking for an alternative.

c-icap for example.

-- 
Ralf Hildebrandt   Charite Universitätsmedizin Berlin
ralf.hildebra...@charite.deCampus Benjamin Franklin
http://www.charite.de  Hindenburgdamm 30, 12203 Berlin
Geschäftsbereich IT, Abt. Netzwerk fon: +49-30-450.570.155


Re: [squid-users] HAVP alternative for traffic scanning?

2013-02-21 Thread Tom Tom
Hi

We have had good experience with Avira WebGate (scanning only) and Avira
WebGate Suite (scanning + URL filtering). But it's not open source...

Regards,
Tom

On Thu, Feb 21, 2013 at 10:43 AM, Henri Wahl  wrote:
> Hello world,
> does anybody know a good replacement for the HTTP AntiVirus
> Proxy HAVP? We want to do online virus scanning, where HAVP does a good
> job, but there seems to be not much development (e.g. IPv6) and there are
> some performance issues. Therefore I am looking for an alternative.
> Thanks + regards
> Henri Wahl
>
> --
> Henri Wahl
>
> IT Department
> Leibniz-Institut für Festkörper- u.
> Werkstoffforschung Dresden
>
> tel: (03 51) 46 59 - 797
> email: h.w...@ifw-dresden.de
> http://www.ifw-dresden.de
>
> Nagios status monitor Nagstamon:
> http://nagstamon.ifw-dresden.de
>
> DHCPv6 server dhcpy6d:
> http://dhcpy6d.ifw-dresden.de
>
> IFW Dresden e.V., Helmholtzstraße 20, D-01069 Dresden
> VR Dresden Nr. 1369
> Vorstand: Prof. Dr. Ludwig Schultz, Dr. h.c. Dipl.-Finw. Rolf Pfrengle
>


Re: [squid-users] HAVP alternative for traffic scanning?

2013-02-21 Thread C. Pelissier

On Thu 21/02/2013 at 10:43, Henri Wahl wrote:
> Hello world,
> does anybody know a good replacement for the HTTP AntiVirus
> Proxy HAVP? We want to do online virus scanning, where HAVP does a good
> job, but there seems to be not much development (e.g. IPv6) and there are
> some performance issues. Therefore I am looking for an alternative.
> Thanks + regards
> Henri Wahl

Squid + SquidClamav + Clamav (no experience on SquidClamav).




[squid-users] Re: ipv6 support for 3.1.16

2013-02-21 Thread anita
Hi Amos,

Thanks for the very quick reply.
I have a couple more questions.

1. What is a WCCP setting?
2. How can I check whether the ipv4-mapping feature is disabled or not available
in my kernel? I am using a Red Hat Linux 6.2 flavour of GNU/Linux.

Thanks in advance.

Regards,
Anita



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/ipv6-support-for-3-1-16-tp4658490p4658609.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Re: ipv6 support for 3.1.16

2013-02-21 Thread Alex Crow
Kaspersky do an ICAP server as well, and they are one of the best 
(obviously not gratis or libre, but as it's ICAP it will work with Squid).


Alex

On 21/02/13 10:39, anita wrote:

Hi Amos,

Thanks for the very quick reply.
I have a couple more questions.

1. What is a WCCP setting?
2. How can I check whether the ipv4-mapping feature is disabled or not available
in my kernel? I am using a Red Hat Linux 6.2 flavour of GNU/Linux.

Thanks in advance.

Regards,
Anita



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/ipv6-support-for-3-1-16-tp4658490p4658609.html
Sent from the Squid - Users mailing list archive at Nabble.com.




[squid-users] Squid 3.1.8 and Kerberos authentication

2013-02-21 Thread Francesco
hello,

I am trying Squid Kerberos authentication instead of NTLM authentication,
to resolve a compatibility issue with the latest versions of Windows.

Just two questions, if I may:

1) in squid.conf, I have to specify the Windows user with the first letter
capitalized, e.g. user = User@DOMAIN.
If I specify user@DOMAIN, authentication fails and I cannot surf.

2) in squid/access.log, on some pages, I see a DENIED request and then a
TCP_MISS for the same page. It seems the browser tries to access a page
and is not authenticated by the proxy server. Then the client retries
and can reach the page. Is this normal?

Thank you!

Francesco


Re: [squid-users] HAVP alternative for traffic scanning?

2013-02-21 Thread Alex Rousskov
On 02/21/2013 03:21 AM, C. Pelissier wrote:
> 
> On Thu 21/02/2013 at 10:43, Henri Wahl wrote:
>> Hello world,
>> does anybody know a good replacement for the HTTP AntiVirus
>> Proxy HAVP? We want to do online virus scanning, where HAVP does a good
>> job, but there seems to be not much development (e.g. IPv6) and there are
>> some performance issues. Therefore I am looking for an alternative.

> Squid + SquidClamav + Clamav (no experience on SquidClamav).

ClamAv can also be used via an eCAP adapter (www.e-cap.org). This is not
an endorsement of ClamAv and not an attack on SquidClamav, just pointing
out that there are several ways to deploy ClamAv with Squid.
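For reference, wiring Squid to a local ICAP antivirus service (whether c-icap, SquidClamav in ICAP mode, or a commercial scanner) generally comes down to a small squid.conf fragment. A hypothetical sketch — the service name, port, and the `/avscan` path are assumptions that depend entirely on the local ICAP server's configuration:

```
# hypothetical squid.conf fragment: scan responses via a local ICAP service
icap_enable on
icap_service av_resp respmod_precache icap://127.0.0.1:1344/avscan bypass=0
adaptation_access av_resp allow all
```

With `bypass=0`, Squid treats the scanner as essential and fails the transaction if the ICAP service is down, rather than silently skipping the scan.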


HTH,

Alex.



Re: [squid-users] Squid 3.1.8 and Kerberos authentication

2013-02-21 Thread Amos Jeffries

On 22/02/2013 5:06 a.m., Francesco wrote:

hello,

I am trying Squid Kerberos authentication instead of NTLM authentication,
to resolve a compatibility issue with the latest versions of Windows.

Just two questions, if I may:

1) in squid.conf, I have to specify the Windows user with the first letter
capitalized, e.g. user = User@DOMAIN.
If I specify user@DOMAIN, authentication fails and I cannot surf.


Case sensitivity has nothing to do with Squid. The user details are part 
of the encrypted data transferred directly between your client software 
and your authentication system. When users log in, the authentication 
system informs Squid what username just logged in - Squid uses that 
label exactly as received.




2) in squid/access.log, on some pages, I see a DENIED request and then a
TCP_MISS for the same page. It seems the browser tries to access a page
and is not authenticated by the proxy server. Then the client retries
and can reach the page. Is this normal?


Yes. This is how authentication works in general: client connects, 
server requests credentials, client repeats the request with credentials 
and gets whatever response is appropriate.


If you were using Basic authentication, it allows user credentials to be 
sent by the browser on brand new requests, so the server challenge 
step does not happen.
If you were using persistent connections, HTTP allows a pipeline 
of multiple requests to be sent on one connection with the same 
credentials, reducing the connection count and thus the number of times 
the auth handshake has to occur.
 ... either one of these may have been happening previously, such that 
you would see some or most requests "just working" instead of every 
single one being prefixed by a DENIED/407 handshake.
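One way to gauge how often the DENIED/407 handshake is actually happening is to count it against total requests in access.log. A sketch — the sample lines below are canned, and it assumes the result/status token appears somewhere on each log line as in the logs quoted in this digest:

```shell
#!/bin/sh
# Count TCP_DENIED/407 challenge entries versus total entries in an
# access.log stream read from stdin.
count_407() {
    awk '/TCP_DENIED\/407/ { d++ }
         { t++ }
         END { printf "%d/%d requests were 407 challenges\n", d, t }'
}

# Canned sample: one 407 challenge followed by the authenticated retry.
printf '%s\n%s\n' \
  '0 client1 TCP_DENIED/407 4153 GET http://example.com/ - HIER_NONE/- text/html' \
  '44 client1 TCP_MISS/200 332 GET http://example.com/ USER FIRSTUP_PARENT/peer text/html' \
  | count_407   # prints: 1/2 requests were 407 challenges
```

If nearly every request is a challenge, connection reuse (or credential caching on the client side) is probably not working.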


Amos


Re: [squid-users] Re: ipv6 support for 3.1.16

2013-02-21 Thread Amos Jeffries

On 21/02/2013 11:39 p.m., anita wrote:

Hi Amos,

Thanks for the very quick reply.
I have a couple more questions.

1. What is a WCCP setting?


Since you don't know, it is probably not relevant. WCCP is a router 
protocol for controlling HTTP traffic interception by proxies.



2. How can I check whether the ipv4-mapping feature is disabled or not available
in my kernel? I am using a Red Hat Linux 6.2 flavour of GNU/Linux.


section 3.1.3.1
http://www.redhat.com/mirrors/LDP/HOWTO/html_single/Linux+IPv6-HOWTO/#AEN488

I think netstat will probably be the best tool to identify this. Look 
for any services using an IPv4 address to listen on a tcp6 socket. The IPv4 
address should display in v4-mapped format as described in the link above.
* If you see a strict separation of service listening addresses between 
IPv4 addresses on the 'tcp' socket type and IPv6 on the 'tcp6' socket type, 
then your kernel is probably what is called split-stack. In that case you 
will need Squid-3.2 to get Squid working properly.
* If you have any v4-mapped addresses showing up as listening addresses, 
your kernel is capable of it.
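The netstat check above can be sketched as a filter for v4-mapped listening addresses (the `::ffff:` prefix). The sample input below is canned, not a live netstat run, and assumes the usual `netstat -tln` column layout:

```shell
#!/bin/sh
# Print listening sockets whose local address is IPv4-mapped
# (::ffff:a.b.c.d), i.e. IPv4 traffic accepted on a tcp6 socket.
# Expects netstat -tln style columns: proto recvq sendq local-addr ...
v4mapped_listeners() {
    awk '$1 == "tcp6" && $4 ~ /^::ffff:/ { print $4 }'
}

# Canned netstat-style sample lines:
printf '%s\n%s\n' \
  'tcp6 0 0 ::ffff:192.168.8.21:3128 :::* LISTEN' \
  'tcp  0 0 0.0.0.0:22 0.0.0.0:* LISTEN' \
  | v4mapped_listeners   # prints: ::ffff:192.168.8.21:3128
```

Any output at all means the kernel is doing v4-mapping; no tcp6/`::ffff:` entries alongside separate tcp and tcp6 listeners suggests a split-stack setup.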


Amos


Re: [squid-users] DNS Queue Remains Filled Issue!

2013-02-21 Thread Amos Jeffries

On 21/02/2013 9:06 p.m., Arshan Awais wrote:

Hi,

I have a query regarding the "DNS Returned Timeout" issue. I have searched over 
various forums regarding this issue, but the solutions described there do not fit my 
needs.


Now, coming to the issue: I have configured squid for web caching and allowed just 
100MB of disk space for caching. When I start the proxy, it works fine for some minutes, 
until all the systems in the network get a "DNS Returned Timeout" message in their browsers. 
This error is gone upon restarting squid.


I have checked the iDNS status using


#squidclient mgr:idns

I see that initially the queue is empty, but when the timeout error appears in 
browsers, the queue is filled with around 9 to 10 ids and this queue does not 
get empty.


Are they the same ones constantly, even after the timeout is reported to 
clients?


Or are they a changing set, with queries dropping off the set after 
timeout?  That is the expected behaviour when DNS servers are not 
responding.


Can you show us this idns manager report please?

Also try to debug why DNS is not responding immediately. Command line 
from the Squid box:

   dig -t  example.com @
   dig -t A example.com @

Amos
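To answer Amos's "same ones or a changing set?" question, one could snapshot the queue twice and compare the IDs. A sketch, assuming each queued entry carries a token like `id: N` (the real mgr:idns layout may differ; the sample input is canned):

```shell
#!/bin/sh
# Extract query ids from mgr:idns-style output so two snapshots can be
# compared. Assumes each queued entry contains a token like "id: 123".
queued_ids() {
    awk '{ for (i = 1; i < NF; i++) if ($i == "id:") print $(i + 1) }' | sort -n
}

# Canned sample (hypothetical mgr:idns lines):
printf 'id: 42 name: example.com\nid: 7 name: example.org\n' | queued_ids
# prints:
# 7
# 42
```

Usage sketch: `squidclient mgr:idns | queued_ids > snap1; sleep 30; squidclient mgr:idns | queued_ids > snap2; comm -12 snap1 snap2` lists ids stuck in the queue across both snapshots.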


Re: [squid-users] Question about "proxy_auth REQUIRED" and the case of flushing the authentication-cache

2013-02-21 Thread Amos Jeffries

On 21/02/2013 9:47 p.m., Tom Tom wrote:

Hi

With squid 3.2.7, I have the following curiosity:

SCENARIO 1
<>
acl AUTHENTICATED proxy_auth REQUIRED
external_acl_type SQUID_KERB_LDAP ttl=7200 children-max=20
children-startup=5 children-idle=1 negative_ttl=7200 %LOGIN
/usr/local/squid/libexec/ext_kerberos_ldap_group_acl -g "XXX"
acl INTERNET_ACCESS external SQUID_KERB_LDAP
...
...
http_access deny !INTERNET_ACCESS
http_access deny !AUTHENTICATED
http_access allow INTERNET_ACCESS AUTHENTICATED
http_access deny all

With the config above, I have the following lines in the access.log:
[Thu Feb 21 06:56:45 2013].167 38 XXX TCP_REFRESH_UNMODIFIED/304
332 GET http://imagesrv.adition.com/banners/750/683036/dummy.gif USER
FIRSTUP_PARENT/XXX image/gif
[Thu Feb 21 06:57:04 2013].621 38 XXX TCP_REFRESH_UNMODIFIED/304
261 GET http://imagesrv.adition.com/banners/750/683036/dummy.gif USER
FIRSTUP_PARENT/XXX image/gif


Note that both INTERNET_ACCESS  and AUTHENTICATED access controls 
require authentication credentials in order to match. Both will generate 
authentication challenges if there are no credentials.


Without a cache.log trace showing all the auth operations (level 9 debug 
would be best) I can't say why the first request behaves this way.







SCENARIO 2
<>
external_acl_type SQUID_KERB_LDAP ttl=7200 children-max=20
children-startup=5 children-idle=1 negative_ttl=7200 %LOGIN
/usr/local/squid/libexec/ext_kerberos_ldap_group_acl -g "XXX"
acl INTERNET_ACCESS external SQUID_KERB_LDAP
...
...
http_access deny !INTERNET_ACCESS
http_access allow INTERNET_ACCESS
http_access deny all


Now, the same request looks like this:
[Thu Feb 21 06:55:59 2013].086  0 XXX TCP_DENIED/407 4153 GET
http://imagesrv.adition.com/banners/750/683036/dummy.gif - HIER_NONE/-
text/html
[Thu Feb 21 06:55:59 2013].135 44 XXX TCP_REFRESH_UNMODIFIED/304
332 GET http://imagesrv.adition.com/banners/750/683036/dummy.gif USER
FIRSTUP_PARENT/XXX image/gif

A tcpdump shows that the "authorization" header is not sent in the
first request. In scenario 2, the authorization header is sent after
the TCP_DENIED/407 response from squid (normal behavior). In scenario
1, squid responds directly with 304.

What is the influence of "AUTHENTICATED" in the first example, such that
the request is not re-authenticated? Why does squid need to re-authenticate
(TCP_DENIED/407) without the "AUTHENTICATED" tag on the "http_access"
line (scenario 2)? Is it possible that with the "AUTHENTICATED" tag
squid uses the authentication cache, and that without the "AUTHENTICATED"
tag squid does not use the authentication cache, or flushes the
cache entry for every request?


No. Kerberos does not make use of the username cache for validation. It 
only adds recently tested credentials to the cache for reporting purposes.


Kerberos credentials are tied to the TCP connection state details and 
may be accepted if the new request's proxy-auth token matches the one 
already tied to the connection. The first request on a connection should 
almost always be a DENIED/407, unless the Kerberos client decides to 
send its keytab token on the first request of a connection - in which 
case Squid may validate and accept it immediately.



Squid of any version _in general_ should consistently behave as per 
scenario #2, except in the situations where scenario #1 is possible. But 
that exception is client-dependent and should not be related to the 
squid.conf change you describe.
Due to those exception case(s) it is hard to answer your question about 
"why" without seeing a debug ALL,9 cache.log trace of these tests. Can 
you supply that please?





I have other squids running (3.1.20) which are configured like
scenario 2 but behave like scenario 1. Why does squid 3.1.20 act
differently from 3.2.7?


Squid-3.1 has many authentication credentials management bugs that got 
fixed under the bug 2305 project. Including things like credentials from 
user-A being reported and shared by user-B - which sounds like scenario #1.




With "debug_options 29,9" (see below) in squid 3.2.7, I see that in
the "wrong case" (without the AUTHENTICATED tag on the http_access
line), squid is "freeing request 0x1646830".


HttpRequest or Auth::UserRequest request?


  When I request the same
file again, squid responds first with a "TCP_DENIED/407". Does
the "freeing" mean that squid "flushes" its authentication cache and
therefore needs to re-authenticate this request every time?
2013/02/21 08:43:58.583 kid1| UserRequest.cc(506) addReplyAuthHeader:
headertype:76 authuser:0x1646830*3
2013/02/21 08:43:58.583 kid1| UserRequest.cc(126) releaseAuthServer:
No Negotiate auth server to release.
2013/02/21 08:43:58.583 kid1| UserRequest.cc(125) ~UserRequest:
freeing request 0x1646830

I can also see that in the wrong case (re-authenticate), squid
flushes its cache and makes a new entry with a new TTL for the
same request:
$ squidclient mgr:username_cache
HTTP/1.1 200 

Re: [squid-users] squid kerberos authenticators spamming AD and locking out users

2013-02-21 Thread Amos Jeffries

On 21/02/2013 7:20 p.m., Brett Lymn wrote:

Folks,

I am running 4 proxy servers with squid 3.1.19 (yes, I know it is old,
will update soon) with kerberos authentication behind an F5 load balancer
for a user community of about 2000 people using Windows/IE.  Normally,
this all works fine; people can surf the web and authentication happens
in the background as it should.

The issue we are seeing is that around once per month, at random, one of the
kerberos authenticators seems to start spamming the life out of the
Windows AD servers.  The event ID we are seeing on the Windows
servers is 0xc06a, which translates to, basically, bad password.  We
seem to get this when a user (not always the same one) changes their
password.  Clearly, it does not happen every time; we have a password
expiry policy in AD, so everyone is forced to change their password
regularly, and we would be seeing the problem a lot more frequently if it
happened every time a user changed their password.  It seems to me that
there is some sort of race condition going on where, perhaps, the
authenticators are doing something while the password is being changed
and then keep using the old details.  When this happens, the
authenticator seems to spin, making requests at a very rapid rate; my
Windows admins tell me there are milliseconds between requests and it
fills their logs, and the user's account gets locked out due to too many
bad passwords.

There is nothing in the logs indicating anything is wrong.  Is this
fixed in a later version? If not, any ideas on how to troubleshoot?


Can you please try an upgrade to Squid-3.3?
There were a lot of things in 3.1 which could lead to this happening.

Amos


RE: [squid-users] Redirect Youtube out second ISP

2013-02-21 Thread Stinn, Ryan
I ended up putting up a second proxy and using cache_peer to redirect all 
the traffic to it. Not the best solution, but it's just a tiny VM fetching youtube.

Ryan 

-Original Message-
From: Pieter De Wit [mailto:pie...@insync.za.net] 
Sent: Wednesday, February 20, 2013 10:57 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Redirect Youtube out second ISP

Hi,

I would just run 2 squids on the same box, and have iptables mark the second 
one's traffic for the second uplink (using multiple routing tables etc). 
The first squid then simply forwards all youtube traffic by URL - no IP issues 
etc.
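The two-squid approach above could be sketched roughly as below. This is a hypothetical fragment, not a tested recipe: the mark value, table number, second-squid user name `squid2`, uplink gateway, and interface are all assumptions to adapt locally:

```shell
# Mark traffic originating from the second squid instance (assumed to run
# as user 'squid2') and route it out the second uplink via its own table.
iptables -t mangle -A OUTPUT -p tcp -m owner --uid-owner squid2 \
         -j MARK --set-mark 2
ip rule add fwmark 2 lookup 102
ip route add default via 203.0.113.1 dev eth1 table 102
```

The first squid then needs only a `cache_peer` (or equivalent) pointing youtube URLs at the second instance.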

Cheers,

Pieter

On 21/02/2013 05:33, Ricardo Rios wrote:
> No it is not; it is just what I see doing some sniffing on my mikrotik box, 
> where my customers connect. I am sure I am still missing a few IPs.
>
> Regards
>
>> I am doing it this way currently on my router; however, knowing all of 
>> youtube's IP addresses is annoying. Do you know if your list is 
>> conclusive?
>>
>> Ryan Stinn
>> Holy Trinity Catholic School Division
>>
>> -Original Message-
>> From: Ricardo Rios [mailto:shorew...@malargue.gov.ar]
>> Sent: Monday, February 18, 2013 4:46 PM
>> To: Squid Users
>> Subject: Re: [squid-users] Redirect Youtube out second ISP
>>
>> I have that working, but using the www.shorewall.net [1] Firewall, sending 
>> all youtube requests to provider number 4
>>
>> /etc/shorewall/providers
>>
>> #NAME NUMBER MARK DUPLICATE INTERFACE GATEWAY OPTIONS COPY
>>
>> cable2 2 2 main eth4:192.168.150.99 192.168.150.199
>> track,balance=3,loose,mtu=1492
>> cable3 3 3 main eth4:192.168.150.99 192.168.150.202
>> track,balance=3,loose,mtu=1492
>> silica 4 4 main eth6 186.0.190.241 track,balance=2,mtu=1500
>>
>> /etc/shorewall/tcrules
>>
>> #MARK SOURCE DEST PROTO DEST SOURCE USER TEST LENGTH TOS CONNBYTES 
>> HELPER
>>
>> #Youtube
>> 4:P 10.0.0.0/24 208.117.253.0/20
>> 4:P 10.0.0.0/24 74.125.228.0/24
>> 4:P 10.0.0.0/24 173.194.60.0/18
>> 4:P 10.0.0.0/24 200.9.157.0/20
>>
>> http://www.shorewall.net/Documentation_Index.html [2]Regards
>>
>>> - Original Message -
>>>
>>>> From: "Stinn, Ryan"
>>>> To: "squid-users@squid-cache.org"
>>>> Sent: Saturday, 16 February 2013 4:13 AM
>>>> Subject: [squid-users] Redirect Youtube out second ISP
>>>>
>>>> I'm wondering if it's possible to use squid to redirect youtube out a
>>>> second ISP line. We have two connections and I'd like to push all
>>>> youtube out the second connection.
>>> Try this:
>>>   acl yt dstdom_regex -i youtube
>>>   tcp_outgoing_address 1.2.3.4 yt
>>> 1.2.3.4 is the IP address of the 2nd line (it should be on the same machine 
>>> as squid). Amm.
>
>
>
> Links:
> --
> [1] http://www.shorewall.net
> [2] http://www.shorewall.net/Documentation_Index.html




[squid-users] tproxy configuration

2013-02-21 Thread Roman Gelfand
 Please find below the network topology, squid.conf and rc.local
configuration files.  It appears that squid is not routing the
http requests; I am not sure what I am doing wrong here.
Please note, the same squid.conf works as a transparent proxy (non
tproxy), except for the tproxy keyword and service changes.
Thanks in advance,

   WAN
   ||
   ||
  wccp/gre tunnel  ||
squid==Fortigate FW/RT  Int ip 1 192.168.8.1
3.3||   Int ip 2 192.168.11.1
ip: 192.168.8.21   ||   Ext ip XX.XX.XXX.24
   ||
   ||
  WLAN Router  Int. ip
192.168.11.32  Ext. ip 192.168.7.1
   ||
   ||
   ||
   Client Workstation 192.168.7.110


#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
# GRE Tunnel :
echo "Loading modules.."
modprobe -a nf_tproxy_core xt_TPROXY xt_socket xt_mark ip_gre

LOCALIP="192.168.8.21"
FORTIDIRIP="192.168.8.1"
FORTIIPID="XX.XX.XXX.254"
echo "changing routing and reverse path stuff.."
echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "creating tunnel..."
iptunnel add wccp0 mode gre remote $FORTIIPID local $LOCALIP dev eth0
ifconfig wccp0 127.0.1.1/32 up
echo "creating routing table for tproxy..."
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
echo "creating iptables tproxy rules..."
iptables -A INPUT  -i lo -j ACCEPT
iptables -A INPUT  -p icmp -m icmp --icmp-type any -j ACCEPT
iptables -A FORWARD -i lo -j ACCEPT
iptables -A INPUT  -s $FORTIDIRIP -p udp -m udp --dport 2048 -j ACCEPT
iptables -A INPUT -i wccp0 -j ACCEPT
iptables -A INPUT -p gre -j ACCEPT
iptables -t mangle -F
iptables -t mangle -A PREROUTING -d $LOCALIP -j ACCEPT
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 3228
iptables -t mangle -A PREROUTING -p tcp --dport 443 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 3229
exit 0

squid.conf
---
#debug_options ALL,1 33,2
#debug_options ALL,1 33,2 28,9
hierarchy_stoplist cgi-bin
acl QUERY urlpath_regex cgi-bin
#cache_effective_user squid
shutdown_lifetime 1 second
visible_hostname server
httpd_suppress_version_string on
forwarded_for off
#1GB disk cache
cache_dir ufs /usr/local/var/cache/squid 1024 16 256

maximum_object_size 5 MB
cache_mem 1024 MB
cache_swap_low 90
cache_swap_high 95
maximum_object_size_in_memory 512 KB
request_header_access Referer deny all
reply_header_access Referer deny all
http_port 80 accel
acl site1 dstdomain site1.domain.com
acl site2 dstdomain site2.domain.com
acl site3 dstdomain site3.domain.com
acl site4 dstdomain site4.domain.com
acl site5 dstdomain site5.domain.com
acl site6 dstdomain site6.domain.com
acl site7 dstdomain site7.domain.com
https_port 443 cert=/etc/ssl/certs/domain_sites.crt
key=/etc/ssl/private/domain.key accel vport
# never_direct allow site1
always_direct allow site1
http_access allow site1
http_access deny site1
always_direct allow site2
http_access allow site2
http_access deny site2
always_direct allow site3
http_access allow site3
http_access deny site3
always_direct allow site4
http_access allow site4
http_access deny site4
always_direct allow site5
http_access allow site5
http_access deny site5
always_direct allow site6
http_access allow site6
http_access deny site6
always_direct allow site7
http_access allow site7
http_access deny site7
#
# Recommended minimum configuration:
#
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src {WAN Network} # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly
plugged) machines
acl SSL_ports port 443
acl SSL_ports port 4435
acl SSL_ports port 8443
acl Safe_ports port 80  # http
acl Safe_ports port 8080 # http
acl Safe_ports port 21  # ftp
acl Safe_ports po

Re: [squid-users] SQUID3 and https: Error negotiating SSL connection

2013-02-21 Thread Guy Helmer

On Feb 21, 2013, at 2:04 AM, skylab  wrote:

> Hi, thank you for your replies.
> How can I verify my ca-certificate list? And how can I update it?
> Thank you very much.
> 
> Skylab

It depends on your O/S. Linux and *BSDs keep the certs updated through packages.

If you have Redhat/CentOS, check the ca-certificates RPM. You might have to set 
sslproxy_cafile to /etc/ssl/certs/ca-bundle.crt

If you have Debian/Ubuntu/etc, check the ca-certificates DEB. You might have to 
set sslproxy_capath to /etc/ssl/certs

For FreeBSD, check the package ca_root_nss. Set sslproxy_cafile to 
/usr/local/share/certs/ca-root-nss.crt
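To verify that a given bundle can actually validate the origin server Squid is failing to negotiate with, a quick manual test (the hostname and the Debian-style path are placeholders; substitute the path for your OS from the list above):

```shell
# Can this CA collection validate the server's certificate chain?
openssl s_client -connect example.com:443 -CApath /etc/ssl/certs < /dev/null
# In the output, "Verify return code: 0 (ok)" means the CAs can validate
# the chain; code 20 or 21 ("unable to get/verify local issuer
# certificate") means the needed CA is missing from the bundle.
```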

HTH,
Guy



Re: [squid-users] Squid 3.3.1 Compiler Error

2013-02-21 Thread Amos Jeffries

FTR: please report this type of problem to bugzilla in future.

On 21/02/2013 2:50 a.m., Adam W. Dace wrote:

OS: Mac OS X v10.7.5
Xcode: Xcode v4.6
GCC: GCC v4.2.1
Configure Command: ./configure

I've tried a few things and squid just won't compile for me.

Here's the relevant make output:

Making all in acl
/bin/sh ../../libtool --tag=CXX   --mode=compile g++ -DHAVE_CONFIG_H
-I../.. -I../../include -I../../lib -I../../src -I../../include
-I/sw/include -Wall -Wpointer-arith -Wwrite-strings -Wcomments
-Werror -pipe -D_REENTRANT -g -O2 -MT DomainData.lo -MD -MP -MF
.deps/DomainData.Tpo -c -o DomainData.lo DomainData.cc
libtool: compile:  g++ -DHAVE_CONFIG_H -I../.. -I../../include
-I../../lib -I../../src -I../../include -I/sw/include -Wall
-Wpointer-arith -Wwrite-strings -Wcomments -Werror -pipe -D_REENTRANT
-g -O2 -MT DomainData.lo -MD -MP -MF .deps/DomainData.Tpo -c
DomainData.cc  -fno-common -DPIC -o .libs/DomainData.o
DomainData.cc: In function 'int aclHostDomainCompare(char* const&,
char* const&)':
DomainData.cc:80: error: 'matchDomainName' was not declared in this scope
make[3]: *** [DomainData.lo] Error 1
make[2]: *** [all-recursive] Error 1
make[1]: *** [all] Error 2
make: *** [all-recursive] Error 1


Any ideas?  BTW, I've got squid v3.2.7 up and running fine.  I would
just like to upgrade.


We would like it if you could as well. We are aware of this build issue 
but unfortunately the Squid devs do not have MacOS machines on hand to 
experiment with fixes.

Can you assist in that regard?

'matchDomainName' is most definitely defined in URL.h, which is included 
the same as it was in 3.2. But for some reason the MacOS compiler is 
producing the above errors now.


Amos


Re: [squid-users] tproxy configuration

2013-02-21 Thread Amos Jeffries

On 22/02/2013 11:03 a.m., Roman Gelfand wrote:

  Please, find below the network topology, squid.conf and rc.local
configuration files.  It appears that squid is not routing the
http requests.  I am not sure what I am doing wrong here.
Please note, the same squid.conf works as a transparent proxy (non
tproxy), except for the tproxy keyword and service changes.
Thanks in advance,

                           WAN
                            ||
         wccp/gre tunnel    ||
squid 3.3 ============== Fortigate FW/RT   Int ip 1: 192.168.8.1
ip: 192.168.8.21            ||             Int ip 2: 192.168.11.1
                            ||             Ext ip:   XX.XX.XXX.24
                            ||
                       WLAN Router         Int. ip:  192.168.11.32
                            ||             Ext. ip:  192.168.7.1
                            ||
             Client Workstation 192.168.7.110


#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
# GRE Tunnel :
echo "Loading modules.."
modprobe -a nf_tproxy_core xt_TPROXY xt_socket xt_mark ip_gre

LOCALIP="192.168.8.21"
FORTIDIRIP="192.168.8.1"
FORTIIPID="XX.XX.XXX.254"
echo "changing routing and reverse path stuff.."
echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter


What about rp_filter on eth0 where the traffic is actually exiting the 
Squid box?



echo 1 > /proc/sys/net/ipv4/ip_forward
echo "creating tunnel..."
iptunnel add wccp0 mode gre remote $FORTIIPID local $LOCALIP dev eth0
ifconfig wccp0 127.0.1.1/32 up
echo "creating routing table for tproxy..."
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100


You may need this to be dev eth0 instead of dev lo. Experiment to find 
out which.



echo "creating iptables tproxy rules..."
iptables -A INPUT  -i lo -j ACCEPT
iptables -A INPUT  -p icmp -m icmp --icmp-type any -j ACCEPT
iptables -A FORWARD -i lo -j ACCEPT


What about forwarding of non-localhost traffic? such as the TPROXY 
spoofed client IPs.
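For example, rules along these lines would cover it (an illustrative sketch only, not confirmed config; interface names are taken from the script above):

```shell
# Permit forwarding of the TPROXY-spoofed client traffic,
# not just loopback (wccp0/eth0 assumed from the script above):
iptables -A FORWARD -i wccp0 -j ACCEPT
iptables -A FORWARD -o eth0 -j ACCEPT
```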



iptables -A INPUT  -s $FORTIDIRIP -p udp -m udp --dport 2048 -j ACCEPT
iptables -A INPUT -i wccp0 -j ACCEPT
iptables -A INPUT -p gre -j ACCEPT
iptables -t mangle -F
iptables -t mangle -A PREROUTING -d $LOCALIP -j ACCEPT
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 3228
iptables -t mangle -A PREROUTING -p tcp --dport 443 -j TPROXY
--tproxy-mark 0x1/0x1 --on-port 3229
exit 0

squid.conf
---
#debug_options ALL,1 33,2
#debug_options ALL,1 33,2 28,9
hierarchy_stoplist cgi-bin
acl QUERY urlpath_regex cgi-bin
#cache_effective_user squid
shutdown_lifetime 1 second
visible_hostname server
httpd_suppress_version_string on
forwarded_for off
#1GB disk cache
cache_dir ufs /usr/local/var/cache/squid 1024 16 256

maximum_object_size 5 MB
cache_mem 1024 MB
cache_swap_low 90
cache_swap_high 95
maximum_object_size_in_memory 512 KB
request_header_access Referer deny all
reply_header_access Referer deny all
http_port 80 accel
acl site1 dstdomain site1.domain.com
acl site2 dstdomain site2.domain.com
acl site3 dstdomain site3.domain.com
acl site4 dstdomain site4.domain.com
acl site5 dstdomain site5.domain.com
acl site6 dstdomain site6.domain.com
acl site7 dstdomain site7.domain.com
https_port 443 cert=/etc/ssl/certs/domain_sites.crt
key=/etc/ssl/private/domain.key accel vport
# never_direct allow site1
always_direct allow site1
http_access allow site1
http_access deny site1
always_direct allow site2
http_access allow site2
http_access deny site2
always_direct allow site3
http_access allow site3
http_access deny site3
always_direct allow site4
http_access allow site4
http_access deny site4
always_direct allow site5
http_access allow site5
http_access deny site5
always_direct allow site6
http_access allow site6
http_access deny site6
always_direct allow site7
http_access allow site7
http_access deny site7
#
# Recommended minimum configuration:
#
# Example rule allowing access from your local networks.
# Adapt to list your (internal) IP networks from where browsing
# should be allowed
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src {WAN Network} # RFC1918 possible internal network
acl 

Re: [squid-users] Redirect Youtube out second ISP

2013-02-21 Thread Amos Jeffries

On 22/02/2013 11:02 a.m., Stinn, Ryan wrote:

I ended up putting a second proxy up and using cache peer to redirect all 
traffic to it. Not the best solution but it's just a tiny VM fetching youtube.

Ryan


Why did you avoid the TOS methods? Much simpler than double-processing 
all the HTTP syntax.
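The TOS approach could look roughly like this (a sketch; the ACL, TOS value, table number, gateway and interface are all illustrative assumptions, not from the thread):

```shell
# squid.conf side: tag outgoing youtube connections with a TOS value, e.g.
#   acl youtube dstdomain .youtube.com
#   tcp_outgoing_tos 0x20 youtube
#
# router/gateway side: policy-route packets carrying that TOS out ISP 2
ip rule add tos 0x20 table 200
ip route add default via 192.0.2.1 dev eth1 table 200
```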


Amos



[squid-users] Re: squid kerberos authenticators spamming AD and locking out users

2013-02-21 Thread Markus Moeller
I don't think this has to do with squid and Kerberos.  This is a Windows 
client-only issue.  Usually the user should be prompted by Windows to update 
the password. If the user does not update the password the client won't get 
a Kerberos ticket and will fall back to NTLM; if that also doesn't work, it 
won't send anything to squid to authenticate.


Markus

"Amos Jeffries"  wrote in message 
news:51269973.5070...@treenet.co.nz...

On 21/02/2013 7:20 p.m., Brett Lymn wrote:

Folks,

I am running 4 proxy servers with squid 3.1.19 (yes, I know it is old,
will update soon) with kerberos authentication behind a F5 load balancer
for a user community of about 2000 people using Windows/I.E..  Normally,
this all works fine, people can surf the web and authentication happens
in background as it should.

The issue we are seeing is, around once per month, at random, one of the
kerberos authenticators seems to start spamming the life out of the
windows AD servers.  The event ID we are seeing on the windows
servers is 0xc06a which translates to, basically, bad password.  We
seem to get this when a user (not always the same one) changes their
password.  Clearly, it does not happen every time; we have a password
expiry policy in AD so everyone is forced to change their password
regularly, so we would be seeing the problem a lot more frequently if it
happened every time a user changed their password.  It seems to me that
there is some sort of race condition going on where, perhaps, the
authenticators are doing something while the password is being changed
and keep using the old details.  When this happens the
authenticator seems to spin, making requests at a very rapid rate; my
windows admins tell me there are milliseconds between requests and it
fills their logs, and the user's account gets locked out due to too many
bad passwords.

There is nothing in the logs indicating anything is wrong.  Is this
fixed in a later version? If not, any ideas on how to troubleshoot?


Can you please try an upgrade to Squid-3.3?
There were a lot of things in 3.1 which could lead to this happening.

Amos






Re: [squid-users] Re: squid kerberos authenticators spamming AD and locking out users

2013-02-21 Thread Brett Lymn
On Thu, Feb 21, 2013 at 11:23:32PM +, Markus Moeller wrote:
>
> I don't think this has to do with squid and Kerberos.
>

Reasonably sure it does - for a start the machine that AD says is
causing the errors is one of the proxy servers and if we restart squid
on that particular machine the problem stops.

>  This is a Windows 
> client only issue.  Usually the user should be prompted by Windows to 
> update the password. If the user does not update the password the client 
> won't get a Kerberos ticket and will fallback to NTLM if that also doesn't 
> work it won't send anything to squid to authenticate.
> 

That scenario does not match what we are observing, the user has changed
their password, they are able to (while the account is not locked out)
browse the web and access other internal resources.  Our squid servers
don't do NTLM.

-- 
Brett Lymn
"Warning:
The information contained in this email and any attached files is
confidential to BAE Systems Australia. If you are not the intended
recipient, any use, disclosure or copying of this email or any
attachments is expressly prohibited.  If you have received this email
in error, please notify us immediately. VIRUS: Every care has been
taken to ensure this email and its attachments are virus free,
however, any loss or damage incurred in using this email is not the
sender's responsibility.  It is your responsibility to ensure virus
checks are completed before installing any data sent in this email to
your computer."




Re: [squid-users] Squid 3.1.8 and Kerberos authentication

2013-02-21 Thread Francesco
Hello Amos,

happy to hear from you!

>> 1) in squid.conf, i have to specify windows user with the first capital
>> letter. Ex: user = User@DOMAIN.
>> If i specify user@DOMAIN i have no authentication to surf
>
> Case sensitivity has nothing to do with Squid. The user details are part
> of the encrypted data transferred directly between your client software
> and your authentication system. When users login the authentication
> system informs Squid what username just logged in - Squid uses that
> label exactly as received.

But if I write user instead of User in the proxy_auth acl in squid.conf,
Squid does not grant access and authentication is denied.
Is there a way to accept "user" and "User" the same way?

>
> Yes. This is how authentication works in general. Client connects,
> server requests credentials, client repeats with credentials and gets
> whatever response is appropriate for that.

When working with 2008 and 2008 R2 domain controllers, kerberos
authentication is better than ntlm, is that right?

Thank you!
Francesco


Re: [squid-users] Re: squid kerberos authenticators spamming AD and locking out users

2013-02-21 Thread Amos Jeffries

On 22/02/2013 12:34 p.m., Brett Lymn wrote:

On Thu, Feb 21, 2013 at 11:23:32PM +, Markus Moeller wrote:

I don't think this has to do with squid and Kerberos.


Reasonably sure it does - for a start the machine that AD says is
causing the errors is one of the proxy servers and if we restart squid
on that particular machine the problem stops.


What happens if you leave Squid running but terminate the TCP 
connections open between Squid and the AD server?


Or just the TCP connections client<->Squid for the one user who is looping?
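One way to terminate such connections without restarting Squid is to inject resets from the proxy box itself (an illustrative sketch, not from the thread: tcpkill ships in the dsniff package, and the interface, address and port here are assumptions):

```shell
# Reset established connections between the proxy and the AD server by
# sniffing the egress interface and injecting RSTs (IP/port illustrative):
tcpkill -i eth0 host 192.0.2.10 and port 88
```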

Amos


Re: [squid-users] Squid 3.1.8 and Kerberos authentication

2013-02-21 Thread Amos Jeffries

On 22/02/2013 12:58 p.m., Francesco wrote:

Hello Amos,

happy to hear from you!


1) in squid.conf, i have to specify windows user with the first capital
letter. Ex: user = User@DOMAIN.
If i specify user@DOMAIN i have no authentication to surf

Case sensitivity has nothing to do with Squid. The user details are part
of the encrypted data transferred directly between your client software
and your authentication system. When users login the authentication
system informs Squid what username just logged in - Squid uses that
label exactly as received.

But if I write user instead of User in the proxy_auth acl in squid.conf,
Squid does not grant access and authentication is denied.
Is there a way to accept "user" and "User" the same way?


Not with proxy auth; it is a case-sensitive string match.

The best thing to do is find out why the AD backend is suddenly 
presenting uppercase on the usernames.
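If accepting both cases turns out to be unavoidable, one workaround (an assumption, not something suggested in this thread; all names are illustrative) is to move the check into an external_acl_type helper that compares usernames case-insensitively:

```python
#!/usr/bin/env python3
"""Hypothetical external_acl helper: case-insensitive username check.

Squid would invoke it via something like:
    external_acl_type ci_user %LOGIN /usr/local/bin/ci_user.py
    acl CI_USER external ci_user
(the helper name, path, and allowed list below are illustrative).
"""
import sys

# Allowed logins, stored lowercased so "User@DOMAIN" and "user@DOMAIN" match.
ALLOWED = {"user@domain"}


def check(login: str) -> str:
    """Return the squid helper-protocol answer (OK/ERR) for one login."""
    return "OK" if login.lower() in ALLOWED else "ERR"


def main() -> None:
    # Squid sends one %LOGIN per line; the helper answers one line each.
    for line in sys.stdin:
        sys.stdout.write(check(line.strip()) + "\n")
        sys.stdout.flush()


if __name__ == "__main__":
    main()
```

This keeps the proxy_auth credentials untouched and only relaxes the access-control comparison.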





Yes. This is how authentication works in general. Client connects,
server requests credentials, client repeats with credentials and gets
whetever response is appropriate for that.

When working with 2008 and 2008 R2 domain controller, kerberos
authentication is better than ntlm, is it right?


Kerberos is better than NTLM, always. Kerberos is not supported by some 
very old software though (think 1980s-1990s releases - the 
stuff you really should be upgrading anyway).


Amos


Re: [squid-users] Re: squid kerberos authenticators spamming AD and locking out users

2013-02-21 Thread Brett Lymn
On Fri, Feb 22, 2013 at 01:18:53PM +1300, Amos Jeffries wrote:
> 
> What happens if you leave Squid running but terminate the TCP 
> connections open between Squid and the AD server?
> 

We have not tried doing that, I will give it a try if I get a chance.

> Or just the TCP connections client<->Squid for the one user who is looping?
> 

The client does not need to be connected to squid after the problem has
started up.  We have had an instance where a user had shutdown their
workstation and gone home but the errors were still occurring.  It seems
once the authenticator has started doing this it continues until we
restart it.

-- 
Brett Lymn
"Warning:
The information contained in this email and any attached files is
confidential to BAE Systems Australia. If you are not the intended
recipient, any use, disclosure or copying of this email or any
attachments is expressly prohibited.  If you have received this email
in error, please notify us immediately. VIRUS: Every care has been
taken to ensure this email and its attachments are virus free,
however, any loss or damage incurred in using this email is not the
sender's responsibility.  It is your responsibility to ensure virus
checks are completed before installing any data sent in this email to
your computer."




Re: [squid-users] tproxy configuration

2013-02-21 Thread Roman Gelfand
On Thu, Feb 21, 2013 at 6:10 PM, Amos Jeffries  wrote:
> On 22/02/2013 11:03 a.m., Roman Gelfand wrote:
>>
>>   Please, find below the network topology, squid.conf and rc.local
>> configuration files.  It appears that the squid is not routing the
>> http requests.  I am not sure what I am doing wrong here
>> Please note, the same squid.conf works on transparent proxy (non
>> tproxy), for the exception of tproxy keyword and service changes.
>> Thanks in advance,
>>
>> WAN
>> ||
>> ||
>>wccp/gre tunnel  ||
>> squid==Fortigate FW/RT  Int ip 1 192.168.8.1
>> 3.3||   Int ip 2 192.168.11.1
>> ip: 192.168.8.21   ||   Ext ip XX.XX.XXX.24
>> ||
>> ||
>>WLAN Router  Int. ip
>> 192.168.11.32  Ext. ip 192.168.7.1
>> ||
>> ||
>> ||
>> Client Workstation 192.168.7.110
>>
>>
>> #!/bin/sh -e
>> #
>> # rc.local
>> #
>> # This script is executed at the end of each multiuser runlevel.
>> # Make sure that the script will "exit 0" on success or any other
>> # value on error.
>> #
>> # In order to enable or disable this script just change the execution
>> # bits.
>> #
>> # By default this script does nothing.
>> # GRE Tunnel :
>> echo "Loading modules.."
>> modprobe -a nf_tproxy_core xt_TPROXY xt_socket xt_mark ip_gre
>>
>> LOCALIP="192.168.8.21"
>> FORTIDIRIP="192.168.8.1"
>> FORTIIPID="XX.XX.XXX.254"
>> echo "changing routing and reverse path stuff.."
>> echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter
>
>
> What about rp_filter on eth0 where the traffic is actually exiting the Squid
> box?

Could you elaborate on this?

>
>
>> echo 1 > /proc/sys/net/ipv4/ip_forward
>> echo "creating tunnel..."
>> iptunnel add wccp0 mode gre remote $FORTIIPID local $LOCALIP dev eth0
>> ifconfig wccp0 127.0.1.1/32 up
>> echo "creating routing table for tproxy..."
>> ip rule add fwmark 1 lookup 100
>> ip route add local 0.0.0.0/0 dev lo table 100
>
>
> You may need this to be dev eth0 instead of dev lo. Experiment to find out
> which.
>
>
>> echo "creating iptables tproxy rules..."
>> iptables -A INPUT  -i lo -j ACCEPT
>> iptables -A INPUT  -p icmp -m icmp --icmp-type any -j ACCEPT
>> iptables -A FORWARD -i lo -j ACCEPT
>
>
> What about forwarding of non-localhost traffic? such as the TPROXY spoofed
> client IPs.
>

Could you elaborate on this, as well.

>
>> iptables -A INPUT  -s $FORTIDIRIP -p udp -m udp --dport 2048 -j ACCEPT
>> iptables -A INPUT -i wccp0 -j ACCEPT
>> iptables -A INPUT -p gre -j ACCEPT
>> iptables -t mangle -F
>> iptables -t mangle -A PREROUTING -d $LOCALIP -j ACCEPT
>> iptables -t mangle -N DIVERT
>> iptables -t mangle -A DIVERT -j MARK --set-mark 1
>> iptables -t mangle -A DIVERT -j ACCEPT
>> iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
>> iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY
>> --tproxy-mark 0x1/0x1 --on-port 3228
>> iptables -t mangle -A PREROUTING -p tcp --dport 443 -j TPROXY
>> --tproxy-mark 0x1/0x1 --on-port 3229
>> exit 0
>>
>> squid.conf
>> ---
>> #debug_options ALL,1 33,2
>> #debug_options ALL,1 33,2 28,9
>> hierarchy_stoplist cgi-bin
>> acl QUERY urlpath_regex cgi-bin
>> #cache_effective_user squid
>> shutdown_lifetime 1 second
>> visible_hostname server
>> httpd_suppress_version_string on
>> forwarded_for off
>> #1GB disk cache
>> cache_dir ufs /usr/local/var/cache/squid 1024 16 256
>>
>> maximum_object_size 5 MB
>> cache_mem 1024 MB
>> cache_swap_low 90
>> cache_swap_high 95
>> maximum_object_size_in_memory 512 KB
>> request_header_access Referer deny all
>> reply_header_access Referer deny all
>> http_port 80 accel
>> acl site1 dstdomain site1.domain.com
>> acl site2 dstdomain site2.domain.com
>> acl site3 dstdomain site3.domain.com
>> acl site4 dstdomain site4.domain.com
>> acl site5 dstdomain site5.domain.com
>> acl site6 dstdomain site6.domain.com
>> acl site7 dstdomain site7.domain.com
>> https_port 443 cert=/etc/ssl/certs/domain_sites.crt
>> key=/etc/ssl/private/domain.key accel vport
>> # never_direct allow site1
>> always_direct allow site1
>> http_access allow site1
>> http_access deny site1
>> always_direct allow site2
>> http_access allow site2
>> http_access deny site2
>> always_direct allow site3
>> http_access allow site3
>> http_access deny site3
>> always_direct allow site4
>> http_access allow site4
>> http_access deny site4
>> always_direct allow site5
>> http_access allow site5
>> http_access deny site5
>> always_direct allow site6
>> http_access allow site6
>> http_access deny site6
>> always_direct all

Re: [squid-users] tproxy configuration

2013-02-21 Thread Amos Jeffries

On 22/02/2013 5:07 p.m., Roman Gelfand wrote:

On Thu, Feb 21, 2013 at 6:10 PM, Amos Jeffries  wrote:

On 22/02/2013 11:03 a.m., Roman Gelfand wrote:

   Please, find below the network topology, squid.conf and rc.local
configuration files.  It appears that the squid is not routing the
http requests.  I am not sure what I am doing wrong here
Please note, the same squid.conf works on transparent proxy (non
tproxy), for the exception of tproxy keyword and service changes.
Thanks in advance,

 WAN
 ||
 ||
wccp/gre tunnel  ||
squid==Fortigate FW/RT  Int ip 1 192.168.8.1
3.3||   Int ip 2 192.168.11.1
ip: 192.168.8.21   ||   Ext ip XX.XX.XXX.24
 ||
 ||
WLAN Router  Int. ip
192.168.11.32  Ext. ip 192.168.7.1
 ||
 ||
 ||
 Client Workstation 192.168.7.110


#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
# GRE Tunnel :
echo "Loading modules.."
modprobe -a nf_tproxy_core xt_TPROXY xt_socket xt_mark ip_gre

LOCALIP="192.168.8.21"
FORTIDIRIP="192.168.8.1"
FORTIIPID="XX.XX.XXX.254"
echo "changing routing and reverse path stuff.."
echo 0 > /proc/sys/net/ipv4/conf/lo/rp_filter


What about rp_filter on eth0 where the traffic is actually exiting the Squid
box?

Could you elaborate on this..


What rp_filter does is prevent packets from local software using that 
interface from using IP addresses that do not belong to that box.


The purpose of TPROXY is to spoof the _clients'_ IP address on 
outgoing traffic, which does not leave the machine on lo, but through 
eth0 or some other interface.
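Concretely, that suggests relaxing reverse-path filtering on the egress interface as well (a sketch; eth0 is assumed from the topology above):

```shell
# Relax reverse-path filtering on the interface the spoofed
# client-IP traffic actually leaves through (eth0 assumed),
# in addition to the existing lo setting:
echo 0 > /proc/sys/net/ipv4/conf/eth0/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
```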



Amos