Re: [squid-users] Need help to configure MS Exchange RPC over HTTP

2012-05-25 Thread Clem

Hi Ruiyuan,

Client -> Apache (hop1) -> IIS 7 -> exchange 2007
The setup works; I just cannot use the latest Apache. Otherwise I will
continue to use the Apache reverse proxy. The latest Apache does not
support MS RPC over HTTP, as posted on the internet.


What do you mean when you say that the latest Apache does not support MS 
RPC OVER HTTP, whereas your version supports it? That doesn't make sense.


If I can do Client - Apache reverse proxy - IIS RPC - exchange 2007, 
I'll install it as soon as possible !


Thx

Clem


Le 24/05/2012 21:52, Ruiyuan Jiang a écrit :

By the way, NTLM works with windows 7 client through Apache here.


Hi, Clem

I am reading your post

http://www.squid-cache.org/mail-archive/squid-users/201203/0454.html

In the post, someone stated that NTLM auth is not supported:

It's facing the double hop issue: NTLM credentials can be sent over only one
hop, and are lost with 2 hops like: client -> squid (hop1) -> IIS6 RPC proxy
(hop2) -> exchange 2007

That is not true. Here we have the setup:

Client -> Apache (hop1) -> IIS 7 -> exchange 2007

The setup works; I just cannot use the latest Apache. Otherwise I will
continue to use the Apache reverse proxy. The latest Apache does not support
MS RPC over HTTP, as posted on the internet.

https://issues.apache.org/bugzilla/show_bug.cgi?id=40029

I am not sure why squid does not support NTLM auth to the backend exchange 
server.

Ruiyuan





This message (including any attachments) is intended
solely for the specific individual(s) or entity(ies) named
above, and may contain legally privileged and
confidential information. If you are not the intended
recipient, please notify the sender immediately by
replying to this message and then delete it.
Any disclosure, copying, or distribution of this message,
or the taking of any action based on it, by other than the
intended recipient, is strictly prohibited.




[squid-users] Authentication bug in 3.1.19 solved in 3.2.0.17

2012-05-25 Thread Alan
Hello,

I'm implementing a proxy server that authenticates users via radius,
and then based on the source ip, login and the destination, grants
access or not to the requested objects.

The relevant section of squid.conf is:

auth_param basic program /usr/lib/squid/squid_radius_auth -f /etc/squid/radius.conf -t 5
auth_param basic children 5
auth_param basic realm Web Proxy
auth_param basic credentialsttl 1 hour
external_acl_type my_acl_type %SRC %LOGIN %DST /var/www/htdocs/acl.php
acl my_acl external my_acl_type
http_access allow my_acl
http_access deny all
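With external_acl_type as above, squid writes each request's formatted values (%SRC %LOGIN %DST) to the helper on stdin, one lookup per line, and expects OK or ERR back. A minimal sketch of such a helper, standing in for the acl.php above (the policy itself is purely hypothetical):

```python
def decide(src, login, dst):
    """Hypothetical policy: block one destination, allow a trusted subnet or user."""
    if dst.endswith("blocked.example.com"):
        return False
    return src.startswith("192.168.") or login == "admin"

def handle(line):
    """One squid lookup per line, formatted per the external_acl_type line '%SRC %LOGIN %DST'."""
    parts = line.split()
    if len(parts) != 3:
        return "ERR"
    return "OK" if decide(*parts) else "ERR"

def serve(stream_in, stream_out):
    """Main loop: squid keeps the helper process alive and expects one reply per request line."""
    for request in stream_in:
        stream_out.write(handle(request) + "\n")
        stream_out.flush()

# Entry point under squid would be serve(sys.stdin, sys.stdout).
```

Note the 407s described below are produced before the external ACL is consulted; the helper only runs once credentials have been accepted.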

Both IE and Firefox have the same behavior: they popup the
authentication prompt, then they can make requests for a while, and
randomly popup the authentication prompt again.  I type the same
username and password, and it works fine.
In Konqueror there is no popup; I guess it retries one more time
with the last username and password before prompting the user.

A network capture reveals that the client is always sending the right
Proxy-Authorization header with its requests, but squid randomly
replies with a 407 status code, without even asking the radius server
(the authentication result is presumably still cached).

In squid 3.2.0.17 this problem is gone and I don't get the
authentication prompts anymore, but since it is labeled Beta instead
of Stable, I wonder if this can be solved in the 3.1 series.

Has anybody else been affected by this?

Best regards,

Alan


RE: [squid-users] Squid Restarting

2012-05-25 Thread Kamarudin Samsudin
Hi Amos,

I'm looking into this now. Could you guide me on how to get the two values
of those FDs from the core dump file?
I did something... hope it will give you clues. This is from Solaris.


pflags  core_mib01_01-19_04_2012
core 'core-19_04_2012' of 22938:   (squid) -f /squid_dir/etc/squid.conf
data model = _ILP32  flags = MSACCT|MSFORK
 /1:flags = 0
sigmask = 0xbefc,0x  cursig = SIGABRT   


pstack core_mib01_01-19_04_2012
core 'core-19_04_2012' of 22938:   (squid) -f /squid_dir/etc/squid.conf
-  lwp# 1 / thread# 1  
 feccc5f8 _lwp_kill (6, 0, fed34f98, fecabf18, , 6) + 8
 fec4194c abort(23dd4e28, 1, 286790, eeb60, fed333d8, 0) + 110
 000c2aa4 xassert  (23b3b0, 23aca0, 1f0, 49, 1, 2) + c4
 000d6d10 _ZN8FwdState12serverClosedEi (1f706cd0, e, 0, fed392a4, 3, 47c) + 48
 0017cf1c _ZN17CommCloseCbPtrFun4dialEv (232a4ba4, 0, 2ea174, 1, 2ea178, 
217384) + c
 0016c178 _ZN9AsyncCall4makeEv (232a4b88, 3d28b8, , 2ea000, 1, 14) + 1b8
 0016e5c4 _ZN14AsyncCallQueue8fireNextEv (47b828, 237800, 2ea174, 2ea178, 
17a950, 232a4b88) + 1e4
 0016e778 _ZN14AsyncCallQueue4fireEv (47b828, 1a20fd11, 6b061, 3000, 1, 3bf400) 
+ 18
 000cd680 _ZN9EventLoop13dispatchCallsEv (ffbffa50, ffbffa38, 1, 0, 81ca50, 
15cae8) + c
 000cd8a4 _ZN9EventLoop7runOnceEv (ffbffa50, ffbff9a0, 10, ffbffa38, 81ca50, 
ffbffa58) + 110
 000cd990 _ZN9EventLoop3runEv (ffbffa50, ffbffa30, 246000, c0648, 0, 4835a0) + c
 0011b934 _Z9SquidMainiPPc (3, ffbffc64, 288800, , ff312a00, fecb6144) 
+ 8dc
 0011bef0 main (3, ffbffc64, ffbffc74, 3f36b0, ff310140, 0) + 8
 000833e4 _start   (0, 0, 0, 0, 0, 0) + 5c


Hope this will help.


Thanks & regards,
---
Din



-Original Message-
From: Justin Lawler 
Sent: Friday, May 25, 2012 3:48 PM
To: Kamarudin Samsudin
Subject: FW: [squid-users] Squid Restarting

FYI

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, May 14, 2012 8:16 PM
To: Justin Lawler
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Squid Restarting

On 14/05/2012 11:03 p.m., Justin Lawler wrote:
 Thanks Amos - we have heap dumps but unfortunately we could not share with 
 the wider community as they're taken from a customer production environment. 
 However, we can send on information taken from the heap dump - like output 
 from pflags/pstack/etc. Would this be sufficient to investigate the issue?

Private data should not be a problem. Initially we just need a backtrace from 
the dump to find which function calls led to it, and the two values in those FD.

Amos

 Thanks and regards,
 Justin

 -Original Message-
 From: Amos Jeffries

 On 4/05/2012 9:59 p.m., Justin Lawler wrote:
 Hi,

 We're running squid 3.1.19 - and have seen it restarting from the logs, just 
 after the below error:

 2012/04/19 12:12:28| assertion failed: forward.cc:496: server_fd == fd
 2012/04/19 12:12:59| Starting Squid Cache version 3.1.19 for 
 sparc-sun-solaris2.10...

 Is this a known issue? any workaround?
 Seems to be new and a bit strange. Squid opens one connection to the server
 to fetch content; sometime later a connection was closed, but not the one
 which was opened to begin with.

 Do you have a core dump or stack trace available to identify what the fd and 
 server_fd values actually were during the crash?

 It's been in production for 6 weeks now, and we have only seen it once, but
 we need to have an answer for the customer. We're worried it'll happen more
 frequently as traffic goes up.
 Being the first report over a month after the release, it would seem to be 
 very rare.

 Amos
 This message and the information contained herein is proprietary and 
 confidential and subject to the Amdocs policy statement, you may 
 review at http://www.amdocs.com/email_disclaimer.asp





Re: [squid-users] how to check squid slowness?

2012-05-25 Thread Muhammad Yousuf Khan
i am not very senior in linux but recently i deployed squid to replace
ISA Server 2006. i was facing the same thing; what kinda resolved the
issue for me for now is:

1, DNS caching (powerdns)
2, reducing the destination block list.
previously i was using 2 MB

On Fri, May 25, 2012 at 9:03 AM, Julie Xu xl...@hotmail.com wrote:

 Hi

 I have some complaints that squid causes web pages from the internet to load 
 very slowly from time to time.

 on the server side, I cannot see any performance issue.

 so, I would like some way to confirm it on the squid application side.

 what should I do to confirm whether the traffic squid passes is slow or not?

 and is it possible to monitor it for a few days?

 any comments will be appreciated

 thanks in advance

 xll



Re: [squid-users] how to check squid slowness?

2012-05-25 Thread Muhammad Yousuf Khan
sorry for the last emails kindly ignore that.

i am not very senior in linux but recently i deployed squid to replace
ISA Server 2006. i was facing the same thing. the solutions, which kinda
resolved the issue for me (for now), are:

1, DNS caching (powerdns)
2, reducing the destination block list.
previously i was using a 2 MB destination block list for blocking chat
and social sites during office hours, which was quite a huge list, and i
was unaware of that.

and one more thing: you can look into blocking youtube or video
streaming if limited bandwidth is a problem.

Thanks
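One direct way to confirm whether squid itself is slow is the elapsed-time field in access.log. A small sketch that summarizes it (field positions assume squid's default native log format, where the second whitespace-separated field is the request's elapsed time in milliseconds):

```python
def latency_stats(log_lines):
    """Summarize per-request response times from squid's native access.log lines."""
    times = []
    for line in log_lines:
        parts = line.split()
        if len(parts) < 2:
            continue
        try:
            # field 2 of the native format: elapsed request time in ms
            times.append(int(parts[1]))
        except ValueError:
            continue
    if not times:
        return None
    times.sort()
    return {
        "count": len(times),
        "median_ms": times[len(times) // 2],
        "p95_ms": times[int(len(times) * 0.95)],  # crude nearest-rank percentile
        "max_ms": times[-1],
    }
```

Running this over a day's log (or a rolling tail of it) gives a latency trend to compare against user complaints, without touching the server itself.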



On Fri, May 25, 2012 at 2:58 PM, Muhammad Yousuf Khan sir...@gmail.com wrote:
 i am not very senior in linux but recently i deployed squid to replace
 ISA Server 2006. i was facing the same thing; what kinda resolved the
 issue for me for now is:

 1, DNS caching (powerdns)
 2, reducing the destination block list.
 previously i was using 2 MB

 On Fri, May 25, 2012 at 9:03 AM, Julie Xu xl...@hotmail.com wrote:

 Hi

 I have some complaints that squid causes web pages from the internet to load 
 very slowly from time to time.

 on the server side, I cannot see any performance issue.

 so, I would like some way to confirm it on the squid application side.

 what should I do to confirm whether the traffic squid passes is slow or not?

 and is it possible to monitor it for a few days?

 any comments will be appreciated

 thanks in advance

 xll



[squid-users] Missing examples in Squid Users Guide

2012-05-25 Thread Harry
Hello,

I'm reading the Squid Users Guide available at:
http://www.deckle.co.za/squid-users-guide. But in more places than one, I'm
finding that the text makes reference to examples which are nowhere to
be found.

Here's just one instance. On page,
http://www.deckle.co.za/squid-users-guide/Access_Control_and_Access_Control_Operators,
I see the sentence...

You could thus deny the dialup range cache access during working hours
with the FOLLOWING acl rules:

... but don't see any acl rules following as the text says.

Are the owners of the Squid documentation aware of this problem, and
working to fix it?

Regards,
/HS




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Missing-examples-in-Squid-Users-Guide-tp4654240.html
Sent from the Squid - Users mailing list archive at Nabble.com.


RE: [squid-users] Need help to configure MS Exchange RPC over HTTP

2012-05-25 Thread Ruiyuan Jiang
Hi, Clem

In the Apache link that I provided, it states that Apache up to v2.0.58
supports RPC over HTTP; any version of Apache above that does not support
RPC. Two reasons:

1. It is not a standard.
2. Microsoft holds patents that Apache would run into by using it.

Ruiyuan Jiang
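Regarding NTLM to the backend: squid does have a connection-pinning mode for exactly this (connection-oriented auth such as NTLM/Negotiate relayed over a pinned server connection, available since squid 2.6 and in 3.1). A hedged reverse-proxy sketch, with hostname, backend IP, and certificate path all hypothetical and directive details to be checked against your squid version:

```
# accelerator port facing Outlook clients; keep client connections pinnable
https_port 443 accel defaultsite=mail.example.com cert=/etc/squid/mail.pem connection-auth=on

# pinned connections to the Exchange/IIS backend so NTLM survives the second hop
cache_peer 192.0.2.10 parent 443 0 originserver ssl connection-auth=on name=exchange
cache_peer_access exchange allow all
```

With connection-auth=on on both the listening port and the peer, squid ties each authenticated client connection to one server connection instead of multiplexing, which is what NTLM's connection-scoped handshake requires.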


-Original Message-
From: Clem [mailto:clemf...@free.fr] 
Sent: Friday, May 25, 2012 2:19 AM
To: Ruiyuan Jiang
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Need help to configure MS Exchange RPC over HTTP

Hi Ruiyuan,

 Client -> Apache (hop1) -> IIS 7 -> exchange 2007
 The setup works; I just cannot use the latest Apache. Otherwise I will
 continue to use the Apache reverse proxy. The latest Apache does not
 support MS RPC over HTTP, as posted on the internet.

What do you mean when you say that the latest Apache does not support MS 
RPC OVER HTTP, whereas your version supports it? That doesn't make sense.

If I can do Client - Apache reverse proxy - IIS RPC - exchange 2007, 
I'll install it as soon as possible !

Thx

Clem


Le 24/05/2012 21:52, Ruiyuan Jiang a écrit :
 By the way, NTLM works with windows 7 client through Apache here.


 Hi, Clem

 I am reading your post

 http://www.squid-cache.org/mail-archive/squid-users/201203/0454.html

 In the post, someone stated that NTLM auth is not supported:

 It's facing the double hop issue: NTLM credentials can be sent over only one
 hop, and are lost with 2 hops like: client -> squid (hop1) -> IIS6 RPC proxy
 (hop2) -> exchange 2007

 That is not true. Here we have the setup:

 Client -> Apache (hop1) -> IIS 7 -> exchange 2007

 The setup works; I just cannot use the latest Apache. Otherwise I will
 continue to use the Apache reverse proxy. The latest Apache does not support
 MS RPC over HTTP, as posted on the internet.

 https://issues.apache.org/bugzilla/show_bug.cgi?id=40029

 I am not sure why squid does not support NTLM auth to the backend exchange 
 server.

 Ruiyuan













Re: [squid-users] Missing examples in Squid Users Guide

2012-05-25 Thread Eliezer Croitoru
This guide is maintained outside of squid-cache and is just a nice 
reference.

You can try to contact Oskar Pearson, who is the guide's writer.

you can also look at the acl part of the squid docs:
http://www.squid-cache.org/Doc/config/acl/

good luck

Eliezer

On 25/05/2012 13:40, Harry wrote:

Hello,

I'm reading the Squid Users Guide available at:
http://www.deckle.co.za/squid-users-guide. But in more places than one, I'm
finding that the text is making reference to examples which are nowhere to
be found.

Here's just one instance. On page,
http://www.deckle.co.za/squid-users-guide/Access_Control_and_Access_Control_Operators,
I see the sentence...

 You could thus deny the dialup range cache access during working hours
with the FOLLOWING acl rules:

... but don't see any acl rules following as the text says.

Are the owners of Squid documentation aware of this problem, and then
working to fix it?

Regards,
/HS







--
Eliezer Croitoru
https://www1.ngtech.co.il
IT consulting for Nonprofit organizations
eliezer at ngtech.co.il


[squid-users] Linux + TPROXY + Remote Squid

2012-05-25 Thread Thomas York
I have a lab environment set up using two Debian Wheezy servers (Squeeze
doesn't have a new enough kernel or iptables to do TPROXY properly). One of
the servers is a router and the other is a proxy server. There are several
clients connected to the router to simulate a production routing
environment. If I have both the TPROXY redirection and Squid on the same
server, Squid handles the requests and everything works perfectly. However,
this isn't how I want the proxy to be configured in our production
environment. I've changed my iptables rules on the router to redirect all
packets tagged with mark 1 to the proxy server. This is working perfectly fine and I
can see the data being routed to the proxy server using tcpdump on both the
router and the proxy. However, Squid on the proxy server doesn't seem to
'see' the data being routed and doesn't do anything with it. I have
http_port 3129 tproxy set on the proxy server. Is there anything special I
need to do using iptables on the proxy server? 

Both servers are running kernel 3.2.0-2-amd64 and iptables 1.4.13 from
Wheezy and the Squid being used on the proxy is 3.1.19. If any more
information is needed, please just let me know and I'd be happy to supply
it. Thanks.

--Thomas York




Re: [squid-users] Linux + TPROXY + Remote Squid

2012-05-25 Thread Giles Coochey

On 25/05/2012 15:35, Thomas York wrote:

I have a lab environment set up using two Debian Wheezy servers (Squeeze
doesn't have a new enough kernel or iptables to do TPROXY properly). One of
the servers is a router and the other is a proxy server. There are several
clients connected to the router to simulate a production routing
environment. If I have both the TPROXY redirection and Squid on the same
server, Squid handles the requests and everything works perfectly. However,
this isn't how I want the proxy to be configured in our production
environment. I've changed my iptables rules on the router to redirect all
tagged 1 packets to the proxy server. This is working perfectly fine and I
can see the data being routed to the proxy server using tcpdump on both the
router and the proxy. However, Squid on the proxy server doesn't seem to
'see' the data being routed and doesn't do anything with it. I have
http_port 3129 tproxy set on the proxy server. Is there anything special I
need to do using iptables on the proxy server?

Both servers are running kernel 3.2.0-2-amd64 and iptables 1.4.13 from
Wheezy and the Squid being used on the proxy is 3.1.19. If any more
information is needed, please just let me know and I'd be happy to supply
it. Thanks.

--Thomas York

Are you Source-NAT'ing the redirect from the Router?





Re: [squid-users] Basic questions about Squid capabilities

2012-05-25 Thread Jason Voorhees
Hey thanks for your reply.

On Tue, May 22, 2012 at 10:40 AM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 On 20/05/2012 19:47, Jason Voorhees wrote:

 Hi people:

 I've been a squid user for a long time, but I believe my skills aren't
 strong enough to implement some of the features I'm asking about in this
 e-mail.

 In a university there are 6000-8000 users (divided across a big campus
 through different VLANs, offices, and even metro-ethernet connected
 branches) browsing the Internet through two lines of 80 and 70 Mbps.
 Currently a fortinet appliance does the labor of web filtering, with some
 interesting features I'd like to implement with Squid too. These are the
 pros and cons of fortinet:

 cons
 
 - It doesn't have a cache (at least not an effective one)
 - When fortinet implements too many bandwidth rules (something like
 squid delay pools) it begins to work slowly and browsing becomes slow too.

squid can implement both of them, but it depends on the hardware that is
hosting squid. A basic 4-core machine with 8 GB of RAM can generally do
the job. The user count is not the right sizing metric; requests per
second together with bandwidth throughput is.


I'll keep in mind your hardware recommendations, but now that you mention
4 cores for Squid I remember an old doubt: can Squid really benefit from
more CPU cores? A long time ago I asked about Squid running with an SMP
kernel in a server with two processors (sockets), but someone told me that
Squid wasn't prepared to use more than one processor, so I wouldn't notice
a difference between using 1 or 2 processors. Does this apply to cores too?
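For what it's worth, squid 3.1 and earlier run the main workload in a single process (helpers aside), so extra cores mostly benefit the helpers; SMP scaling of squid itself arrived in squid 3.2 with the workers directive. A minimal sketch:

```
# squid 3.2+ only: spawn one worker process per core (value hypothetical)
workers 4
```

On 3.1, the practical alternative is multiple squid instances or relying on helper concurrency.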



 pros
 
 - It has a feature to transparently block https websites. The fortinet
 admin told me that only for blocked webpages do users get a warning about
 an incorrect certificate (a fortinet digital certificate), but for allowed
 websites users don't get any warning about failing digital certificates
 (I don't know if this is true or possible).
 - Its web filtering is good; it has an up-to-date database of categorized
 websites for easy blocking.

 What I plan to do is (or what I'd like to do):

 - Put Squid in front of fortinet so this one can use squid's cache. I
 read this is possible using WCCP and some other things.
 - Squid should work as a replacement for fortinet if it someday fails. So
 squid is the backup solution to replace fortinet.

 it depends on the outgoing ip address and on interception level.
 in basic interception mode you can use fortinet as a cache_peer.



I really don't know if WCCP is necessary for this scenario; it's
something I just found in a tutorial on the Internet. So, just by using a
cache_peer configuration, could I make fortinet use squid's cache?



 So to achieve this I think I need:

 a) Do good filtering: I was thinking about configuring Squid +
 SquidGuard with a free database, but I have a simple and basic question:
 when I use a redirector like SquidGuard, do all Squid ACLs definitely stop
 working? I mean, can I use a redirector and still use my traditional ACLs
 (acl, http_access, http_reply_access)? The last time I used a redirector
 with Squid it appeared that the ACLs weren't even read by Squid, hence
 this doubt.

 a url_rewrite is what you will use, and all the acls will work the same way.
 you can bypass the url_rewrite with acls... so to speak.


So if my url redirector (squidguard) and Squid ACLs work together, which
of them takes precedence over the other? Is there any special setting to
make both sets of ACLs (squid's and the redirector's) work? Do I need to
put the url_rewrite directive above/below the http_access directives, or
something like that?
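On precedence: squid's own ACL checks keep running regardless of the redirector, and http_access is evaluated first; only requests that pass access control are handed to the rewriter. The bypass mentioned above is expressed with url_rewrite_access. A sketch (paths and subnet hypothetical):

```
url_rewrite_program /usr/bin/squidGuard -c /etc/squidguard/squidGuard.conf

# hypothetical: exempt one subnet from the redirector, send everything else
acl no_redirect src 10.1.2.0/24
url_rewrite_access deny no_redirect
url_rewrite_access allow all
```

So there is no ordering conflict to configure between http_access and the redirector; they are separate stages.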




 b) Integrate fortinet with WCCP: I rapidly saw a few tutorials on how
 to do that, but... have you achieved this without problems?

 what exactly do you want to achieve by using WCCP? what benefit do you expect from it?


I really don't know; I just read something about this on the Internet.
I'll investigate further before mentioning it again.




 c) Do transparent https proxying with squid: I tried to use https_port +
 the ssl-bump feature of Squid 3.1 and iptables (REDIRECT port 443 to 3128)
 without 100% success. I generated my own certificate, and it is the one
 users get when trying to view some websites (e.g. facebook.com), which is
 OK, but some websites didn't work as expected: some loaded OK, some loaded
 without CSS stylesheets or images, and some others never loaded (I got the
 redirect-loop error in the browser). I wasn't able to build squid 3.2, but
 I don't know whether that version is necessary to get transparent https
 proxying working.

 to use ssl-bump you use a different port than 3128, one dedicated to
 ssl-bump.
 there was a bug somewhere that makes a loop like that, and i think the
 cause is redirecting 443 to 3128 instead of to the ssl-bump port.
 try it again and you will see miracles :]
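A minimal sketch of that advice for the 3.1 series (port number and certificate paths hypothetical; in 3.1 the port flag is spelled sslBump, renamed ssl-bump in 3.2, so check your version's documentation):

```
# dedicated interception port for bumped HTTPS;
# point the iptables REDIRECT for tcp/443 here, NOT at 3128
http_port 3130 intercept sslBump cert=/etc/squid/proxy.crt key=/etc/squid/proxy.key
ssl_bump allow all
```

Port 3128 then stays a plain forward-proxy port, which avoids the redirect loop described above.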



Do you mean it was maybe caused by a Squid bug? Or do you 

RE: [squid-users] Linux + TPROXY + Remote Squid

2012-05-25 Thread Thomas York
No. The router has three interfaces. One goes to the internet and has a 
default route. I am running NAT on this interface so that the firewall, proxy 
and clients can reach the internet. The second is a single /24 network 
(10.0.1.0/24) that has only the proxy and the firewall on it. The third is a 
single /24 (10.1.1.0/24) that has a single Windows 7 client on it for 
generating  HTTP requests and testing. I'm tagging the packets on the firewall 
and running them through a separate routing table, which sends the packets to 
the proxy (without NAT-ing). The proxy and the firewall see the routed packets 
perfectly fine. I'm not doing any kind of iptables rules on the proxy, 
however.

-Original Message-
From: Giles Coochey [mailto:gi...@coochey.net]
Sent: Friday, May 25, 2012 11:12 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Linux + TPROXY + Remote Squid

On 25/05/2012 15:35, Thomas York wrote:
 I have a lab environment set up using two Debian Wheezy servers (Squeeze
 doesn't have a new enough kernel or iptables to do TPROXY properly). One of
 the servers is a router and the other is a proxy server. There are several
 clients connected to the router to simulate a production routing
 environment. If I have both the TPROXY redirection and Squid on the same
 server, Squid handles the requests and everything works perfectly. However,
 this isn't how I want the proxy to be configured in our production
 environment. I've changed my iptables rules on the router to redirect all
 tagged 1 packets to the proxy server. This is working perfectly fine and I
 can see the data being routed to the proxy server using tcpdump on both the
 router and the proxy. However, Squid on the proxy server doesn't seem to
 'see' the data being routed and doesn't do anything with it. I have
 http_port 3129 tproxy set on the proxy server. Is there anything special I
 need to do using iptables on the proxy server?

 Both servers are running kernel 3.2.0-2-amd64 and iptables 1.4.13 from
 Wheezy and the Squid being used on the proxy is 3.1.19. If any more
 information is needed, please just let me know and I'd be happy to supply
 it. Thanks.

 --Thomas York
Are you Source-NAT'ing the redirect from the Router?





RE: [squid-users] Linux + TPROXY + Remote Squid

2012-05-25 Thread Thomas York
I forgot one detail. I have an iptables rule BEFORE the PREROUTING
divert/tproxy iptables rules on the router. I added an accept so that HTTP
traffic from the proxy doesn't get tagged and rerouted to the proxy.

Here's the rule set I have for the firewall

iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -s 10.0.1.1 -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
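For the proxy box itself: TPROXY-ed packets arrive still addressed to the original destination, so the Squid host also needs a socket-match DIVERT chain plus a policy route that delivers marked traffic locally, per the Squid TPROXY feature documentation. A sketch for the proxy server (routing table number arbitrary):

```
iptables -t mangle -N DIVERT
iptables -t mangle -A DIVERT -j MARK --set-mark 1
iptables -t mangle -A DIVERT -j ACCEPT
iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --tproxy-mark 0x1/0x1 --on-port 3129
ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
```

Without the ip rule/ip route pair, the kernel treats the foreign-addressed packets as transit traffic and Squid never sees them, which matches the symptom described above.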

-- Thomas York

-Original Message-
From: Thomas York [mailto:strate...@fuhell.com] 
Sent: Friday, May 25, 2012 1:32 PM
To: 'Giles Coochey'; squid-users@squid-cache.org
Subject: RE: [squid-users] Linux + TPROXY + Remote Squid

No. The router has three interfaces. One goes to the internet and has a 
default route. I am running NAT on this interface so that the firewall, proxy 
and clients can reach the internet. The second is a single /24 network 
(10.0.1.0/24) that has only the proxy and the firewall on it. The third is a 
single /24 (10.1.1.0/24) that has a single Windows 7 client on it for 
generating HTTP requests and testing. I'm tagging the packets on the firewall 
and running them through a separate routing table, which sends the packets to 
the proxy (without NAT-ing). The proxy and the firewall see the routed packets 
perfectly fine. I'm not doing any kind of iptables rules on the proxy, 
however.

-Original Message-
From: Giles Coochey [mailto:gi...@coochey.net]
Sent: Friday, May 25, 2012 11:12 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Linux + TPROXY + Remote Squid

On 25/05/2012 15:35, Thomas York wrote:
 I have a lab environment set up using two Debian Wheezy servers (Squeeze
 doesn't have a new enough kernel or iptables to do TPROXY properly). One of
 the servers is a router and the other is a proxy server. There are several
 clients connected to the router to simulate a production routing
 environment. If I have both the TPROXY redirection and Squid on the same
 server, Squid handles the requests and everything works perfectly. However,
 this isn't how I want the proxy to be configured in our production
 environment. I've changed my iptables rules on the router to redirect all
 tagged 1 packets to the proxy server. This is working perfectly fine and I
 can see the data being routed to the proxy server using tcpdump on both the
 router and the proxy. However, Squid on the proxy server doesn't seem to
 'see' the data being routed and doesn't do anything with it. I have
 http_port 3129 tproxy set on the proxy server. Is there anything special I
 need to do using iptables on the proxy server?

 Both servers are running kernel 3.2.0-2-amd64 and iptables 1.4.13 from
 Wheezy and the Squid being used on the proxy is 3.1.19. If any more
 information is needed, please just let me know and I'd be happy to supply
 it. Thanks.

 --Thomas York
Are you Source-NAT'ing the redirect from the Router?





FW: RE: [squid-users] comperterName logged for sAMAccountName

2012-05-25 Thread Diersen, Dustyn [DAS]
The more I dig into this problem, the more complex it seems to get. I spent 
most of the day yesterday working with our AD admin on squid's use of Kerberos 
authentication. Today we tried something new: we both logged into a server via 
terminal services. He set up the browser to use our squidGuard proxy, then he 
gave the go-ahead to hit a blacklisted/blocked site while he did the same. 
Evidently I was quicker, because we both saw my username in the blocked log.

Can anyone explain to me how my userName was tied to his HTTP GET request 
according to squid?

-Dustyn  

-Original Message-
From: Diersen, Dustyn [DAS] 
Sent: Thursday, May 24, 2012 9:28 AM
To: 'squid-users@squid-cache.org'
Subject: RE: [squid-users] comperterName logged for sAMAccountName

2012/5/23 Diersen, Dustyn [DAS] dustyn.dier...@iowa.gov:
 I have squid running with SquidGuard using Active Directory for LDAP
 authentication. The problem I am seeing is the use of the AD attribute
 sAMAccountName for both userName and computerName. I thought I had a fix
 by adding sAMAccountType to my squid_ldap_auth helper below, but I am
 still seeing numerous computerNames rather than userNames being logged.
 The REAL problem is ACL matching, as I never know what I will be receiving
 from my users and do not wish to include computerName in my userlists. I
 have tested adding a couple of computerNames to the userlist, which
 resolves blocked access messages for users with specialized access
 requirements.
 Here is my current LDAP helper string:
 auth_param basic program /usr/local/squid/libexec/squid_ldap_auth -R
 -b dc=base,dc=domain,dc=in,dc=our,dc=AD -s sub -D BASE\\user -W
 /squidGuard/filename -f
 (&(&(objectCategory=person)(sAMAccountName=%s)(sAMAccountType=805306368)))
 -u sAMAccountName -P -v3 -Hldap://domain.com
 I have been searching for a solution to this problem for more than a week,
 but have been unable to find one that works in my environment.
 -Dustyn

 If you're using AD anyway, then why aren't you using kerberos (or
 NTLMv2 [not safe anymore]) authentication? Then you generally get the
 username, though I think I have also seen computer names in the
 username field, which I think happens when there is a system process
 trying to access the web, for instance for updates.

 Regards,
 Eli

Hello Eli,
I do also have Kerberos defined; see below for the entries. I need help
figuring out where the computerNames are coming from. As I mentioned
before, I thought I had eliminated the computerNames with the
squid_ldap_auth helper above. I have more than 400 users (and growing) and
would like to keep only their userNames in the userlists. When the
computerName is logged, the end user ends up using the default ACL, which
is more restrictive on outbound browsing, resulting in trouble tickets to
fix the problem.

auth_param negotiate program /usr/local/squid/libexec/squid_kerb_auth
auth_param negotiate children 30
auth_param negotiate keep_alive on
url_rewrite_program /squidGuard/redirector-id.pl
url_rewrite_children 8
url_rewrite_concurrency 10
acl AUTH proxy_auth REQUIRED

and here is the rest of my basic auth:
auth_param basic children 15
auth_param basic realm SquidGuard Authentication
auth_param basic credentialsttl 8 hours
http_access allow localnet
http_access allow AUTH

Thank you,
-Dustyn


[squid-users] Connection pinning (NTLM pass through)

2012-05-25 Thread Petter Abrahamsson
Hi,

I'm trying to get NTLM pass through to work with squid 3.1.19. I have
followed the instructions found on the wiki[1] on connection pinning
but I just keep receiving 401 status messages.
Below is the very simple squid.conf that I'm using for this test.

acl manager proto cache_object
acl localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all
http_port 8080 connection-auth=on
hierarchy_stoplist cgi-bin ?
coredump_dir /var/cache/squid
refresh_pattern ^ftp:   1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
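My understanding is that connection pinning also needs persistent
connections on both the client and the server side. These are the defaults
in 3.1, but I could state them explicitly:

```
# Defaults in squid 3.1, made explicit: pinning a connection only works
# if both sides stay persistent.
client_persistent_connections on
server_persistent_connections on
```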

And below is the corresponding access.log entries with obfuscated ip
addresses and host names.

1337976537.852 63 192.168.12.214 TCP_MISS/401 466 GET
http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
1337976550.714 29 192.168.12.214 TCP_MISS/401 1074 GET
http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
1337976551.025 57 192.168.12.214 TCP_MISS/401 466 GET
http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
1337976554.627 57 192.168.12.214 TCP_MISS/401 1074 GET
http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
1337976558.006   3128 192.168.12.214 TCP_MISS/401 466 GET
http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
1337976559.462 59 192.168.12.214 TCP_MISS/401 1074 GET
http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
1337976559.760 56 192.168.12.214 TCP_MISS/401 466 GET
http://www.example.net/directory/ - DIRECT/x.x.x.x text/html
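Parsing those lines confirms that the handshake never completes: every
entry is a 401 and the username field is always "-". A quick check (field
positions assume the default native access.log format):

```python
# Parse native-format squid access.log lines and report the HTTP status
# and the authenticated-username field of each request. Field positions
# assume the default log format:
# ts elapsed client code/status bytes method url user peer type
def auth_summary(lines):
    codes, users = [], []
    for line in lines:
        fields = line.split()
        codes.append(int(fields[3].split("/")[1]))
        users.append(fields[7])
    return codes, users

log = [
    "1337976537.852 63 192.168.12.214 TCP_MISS/401 466 GET "
    "http://www.example.net/directory/ - DIRECT/x.x.x.x text/html",
    "1337976550.714 29 192.168.12.214 TCP_MISS/401 1074 GET "
    "http://www.example.net/directory/ - DIRECT/x.x.x.x text/html",
]
codes, users = auth_summary(log)
print(codes, users)  # every status is 401 and the user is always '-'
```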

I feel like I'm missing something obvious since the instructions on
the wiki are quite simple.
When I try the same website through a squid v2.7 it lets me log in.
Let me know if any other information is needed.
Any help would be very much appreciated.

Regards,
/petter

[1] http://wiki.squid-cache.org/Features/ConnPin


[squid-users] Redirecting certain sites with a forward option?

2012-05-25 Thread - Mikael -
I have a few teachers who want a group of students to be forwarded to
an intermediate page for certain sites. That page would then offer a
link to view the original site. They want to add around 100 URLs to
this list. For example, if a student tries to access the loc.gov site,
they would be redirected to our page containing guidelines written by
the teachers, and at the bottom of that page the student could click
Continue to the original address, %URL.

Squid v3.1.6 is set up in transparent mode.

Is this possible?
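What I have in mind is something like the following url_rewrite_program
sketch. Squid feeds the helper lines of the form "URL client/fqdn ident
method", and a reply prefixed "302:" makes squid issue an HTTP redirect;
the guideline-page URL and the site list below are placeholders, and the
helper must exempt the guideline page itself to avoid a redirect loop:

```python
# Minimal url_rewrite_program sketch for squid 3.x (pre-3.4 helper
# protocol). GUIDE_PAGE and REDIRECT_SITES are hypothetical placeholders.
import sys
from urllib.parse import quote

GUIDE_PAGE = "http://intranet.example/guidelines.html"  # placeholder
REDIRECT_SITES = {"www.loc.gov", "loc.gov"}             # placeholder list

def rewrite(line):
    url = line.split()[0]
    host = url.split("/")[2] if "://" in url else ""
    # Never redirect requests for the guideline page itself (loop!).
    if url.startswith(GUIDE_PAGE):
        return ""                      # empty answer = leave the URL alone
    if host in REDIRECT_SITES:
        # "302:" prefix makes squid send an HTTP redirect; pass the
        # original URL so the page can build its Continue link.
        return "302:%s?url=%s" % (GUIDE_PAGE, quote(url, safe=""))
    return ""

if __name__ == "__main__" and not sys.stdin.isatty():
    for line in sys.stdin:
        sys.stdout.write(rewrite(line.strip()) + "\n")
        sys.stdout.flush()
```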

Here is our config file:

http_port 3128

refresh_pattern http://.*\.adobe\.com/ 0 80% 20160 reload-into-ims
refresh_pattern http://.*\.windowsupdate\.microsoft\.com/ 0 80% 20160
reload-into-ims
refresh_pattern http://office\.microsoft\.com/ 0 80% 20160 reload-into-ims
refresh_pattern http://windowsupdate\.microsoft\.com/ 0 80% 20160
reload-into-ims
refresh_pattern http://w?xpsp[0-9]\.microsoft\.com/ 0 80% 20160 reload-into-ims
refresh_pattern http://w2ksp[0-9]\.microsoft\.com/ 0 80% 20160 reload-into-ims
refresh_pattern http://download\.microsoft\.com/ 0 80% 20160 reload-into-ims
refresh_pattern http://download\.macromedia\.com/ 0 80% 20160 reload-into-ims

refresh_pattern -i \.flv$ 10080 90% 99 ignore-no-cache
override-expire ignore-private
refresh_pattern -i \.index.(html|htm)$ 0 40% 10080
refresh_pattern -i \.(html|htm|css|js)$ 1440 40% 40320
refresh_pattern -i \.(gif|png|jpg|jpeg|ico)$ 10080 90% 43200
override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i \.(iso|avi|wav|mp3|mp4|mpeg|swf|x-flv)$ 43200 90%
432000 override-expire ignore-no-cache ignore-no-store ignore-private
refresh_pattern -i
\.(deb|rpm|exe|zip|tar|tgz|ram|rar|bin|ppt|doc|tiff)$ 10080 90% 43200
override-expire ignore-no-cache ignore-no-store ignore-private

refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern ^ftp: 144000 20% 1008000
refresh_pattern -i \.index.(html|htm)$ 1440 90% 40320
refresh_pattern -i \.(html|htm|css|js)$ 1440 90% 40320
refresh_pattern (/cgi-bin/|\?) 0 0% 0

acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl schoolip src 148.110.3.191

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow schoolip
http_access allow test
#http_access allow all

# cache_mem doesn't accept a higher value here. We have 8 GB on this
# machine; is 4 GB the maximum?
cache_mem 3994 MB
maximum_object_size 65536 KB
maximum_object_size_in_memory 64 KB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
cache_dir aufs /var/spool/squid3 16384 32 512
half_closed_clients off
coredump_dir /var/spool/squid3
half_closed_clients off
cache_store_log none

logfile_rotate 3
ipcache_size 16384
ipcache_low 98
ipcache_high 99
log_fqdn off
half_closed_clients off
reload_into_ims on
httpd_suppress_version_string off