Re: [squid-users] squid 3.1.x with IIS SharePoint as back-end.

2012-01-10 Thread 巍俊葛
Thanks Amos.

I ran the lynx test against the back-end web site from the squid system like this:
sudo lynx http://wtestsm1.asiapacific.hpqcorp.net

First, it shows the message:
Alert!: Invalid header 'WWW-Authenticate: NTLM'

Then it shows the following prompt:
Show the 401 message body? (y/n)
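
An equivalent header check with curl (hostname taken from this thread) would show the same challenge lynx is complaining about; lynx simply does not understand the bare NTLM scheme in the WWW-Authenticate header:

```
curl -sI http://wtestsm1.asiapacific.hpqcorp.net/ | grep -i www-authenticate
```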

By domain auth, I mean the back-end web site requires a corp domain
user for access.
To put it another way: if I log on to my laptop with my corp domain
account, I can access the IIS SharePoint site without any credentials
window popping up. If not, I have to enter my domain account in the
credentials window to access the SharePoint site.


The following is my squid configuration for this case (some default
sections omitted).
#added by kimi
acl hpnet src 16.0.0.0/8    # possible internal network
#added by kimi
acl origin_servers dstdomain ids-ams.elabs.eds.com
http_access allow origin_servers
http_access allow hpnet

http_port 192.85.142.88:80 accel defaultsite=ids-ams.elabs.eds.com connection-auth=on

forwarded_for on

request_header_access WWW-Authenticate allow all

cache_peer wtestsm1.asiapacific.hpqcorp.net parent 80 0 no-query no-digest originserver name=main connection-auth=on login=PASS

cache_peer_domain main .elabs.eds.com

hierarchy_stoplist cgi-bin ?

coredump_dir /var/spool/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:       1440    20%     10080
refresh_pattern ^gopher:    1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320

cache_dir aufs /data/squid/cache 12000 64 256
cache_mem 1024 MB
maximum_object_size_in_memory 1024 KB
maximum_object_size 51200 KB

visible_hostname ids-ams.elabs.eds.com
debug_options ALL,5
http_access deny all

While squid is running, I test like this:
http://ids-ams.elabs.eds.com

The 404 error page is shown.
That's why I am wondering whether squid can work as a reverse proxy
with IIS SharePoint as the back-end.
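
For comparison, a minimal reverse-proxy sketch for an NTLM-protected IIS back-end, reusing the hostnames from this thread. This is an untested outline, not a verified fix: the essential parts are connection-auth=on on both the listening port and the peer (so NTLM's connection pinning survives), and a defaultsite that IIS actually has a binding for:

```
# Listen as an accelerator; keep NTLM connection pinning enabled
http_port 192.85.142.88:80 accel defaultsite=ids-ams.elabs.eds.com connection-auth=on

# Origin server peer; login=PASS relays credentials,
# connection-auth=on pins the TCP link for the NTLM handshake
cache_peer wtestsm1.asiapacific.hpqcorp.net parent 80 0 no-query no-digest \
    originserver name=main connection-auth=on login=PASS
cache_peer_access main allow all
```

If IIS only knows itself as wtestsm1.asiapacific.hpqcorp.net, the Host header squid forwards (ids-ams.elabs.eds.com) may be what triggers the 404; that is worth checking in the IIS site bindings.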

Thanks,
~Kimi



On 11/01/2012, Amos Jeffries  wrote:
> On 11/01/2012 6:28 p.m., kimi ge(巍俊葛) wrote:
>> Hi,
>>
>>   I have an issue to make squid 3.1.x to work with IIS SharePoint as the
>>   back-end.
>> The details are listed below.
>>
>> 1. squid 3.1.x is running as a reverse-proxy.
>> 2. The back-end is IIS SharePoint Site with domain authentication
>> required.
>>   That means only a valid domain user can access this SharePoint site.
>>   The issue is it always returns a 404 error page, and the logon window
>>   is not prompted.
>
> What is this "domain authentication" you mention? All of the HTTP auth
> mechanisms count as "domain auth" to a reverse proxy, and none of them
> are named "Domain".
>
>>
>>   My question is whether squid supports this kind of setup or not?
>>   If it does, how should I do the configuration in squid.conf?
>>
>>   Thanks in advance.
>>   ~Kimi
>
> 404 status is about the resource being requested _not existing_. Login
> only operates when there is something being fetched that requires
> authorization. So I think auth is not relevant at this point in your testing.
>
> Probably the URL being passed to IIS is not what you are expecting to be
> passed and IIS is not setup to handle it. You will need to share your
> squid.conf details for more help.
>
> Amos
>


Re: [squid-users] squid 3.1.x with IIS SharePoint as back-end.

2012-01-10 Thread Amos Jeffries

On 11/01/2012 6:28 p.m., kimi ge(巍俊葛) wrote:

Hi,

  I have an issue making squid 3.1.x work with IIS SharePoint as the
  back-end. The details are listed below.

1. squid 3.1.x is running as a reverse-proxy.
2. The back-end is an IIS SharePoint site with domain authentication required.
  That means only a valid domain user can access this SharePoint site.
  The issue is it always returns a 404 error page, and the logon window
  is not prompted.


What is this "domain authentication" you mention? All of the HTTP auth 
mechanisms count as "domain auth" to a reverse proxy, and none of them 
are named "Domain".




  My question is whether squid supports this kind of setup or not?
  If it does, how should I do the configuration in squid.conf?

  Thanks in advance.
  ~Kimi


404 status is about the resource being requested _not existing_. Login 
only operates when there is something being fetched that requires 
authorization. So I think auth is not relevant at this point in your testing.


Probably the URL being passed to IIS is not what you are expecting to be 
passed and IIS is not setup to handle it. You will need to share your 
squid.conf details for more help.


Amos


[squid-users] squid 3.1.x with IIS SharePoint as back-end.

2012-01-10 Thread 巍俊葛
Hi,

 I have an issue making squid 3.1.x work with IIS SharePoint as the
 back-end. The details are listed below.

1. squid 3.1.x is running as a reverse-proxy.
2. The back-end is an IIS SharePoint site with domain authentication required.
 That means only a valid domain user can access this SharePoint site.
 The issue is it always returns a 404 error page, and the logon window
 is not prompted.

 My question is whether squid supports this kind of setup or not?
 If it does, how should I do the configuration in squid.conf?

 Thanks in advance.
 ~Kimi


Re: [squid-users] SSL interception: no hits

2012-01-10 Thread Damir Cosic


Amos, right on! It works with 3.1.18. Thank you very much!

On 1/10/12 6:36 PM, Amos Jeffries wrote:

On 11.01.2012 06:33, Damir Cosic wrote:

Hello,

I am trying to configure a Squid (v3.1.11) proxy for SSL connections
between hosts on the LAN and servers on the internet. The traffic is
routed through the host on which Squid runs and iptables are used to
redirect traffic to ports 80 and 443 to ports 3128 and 3130,
respectively. Simple HTTP caching works well. First attempt is a miss
and subsequent ones are hits. For HTTPS, however, there are no hits,
only misses, even though the requested page is in the Squid's cache. I
would greatly appreciate any help.

The Squid configuration is based on the default file, with following
modifications (I understand that some of these are security risks, but
currently it is in testing environment and the only goal is to make it
work):

http_port 3128 intercept
https_port 3130 intercept ssl-bump cert=/etc/certs/beta-srv.crt
key=/etc/certs/beta-srv.key
always_direct allow all
ssl_bump allow all
sslproxy_cert_error allow all

The log entry when a client attempts to retrieve a page from a server:

Jan  2 23:51:10 beta squid: 1325573470.788 25 192.168.10.2
TCP_MISS/200 388 GET https://192.168.11.2/ - DIRECT/192.168.11.2
text/html

The cache file (the garbled part at the beginning is left out):

https://192.168.11.2/^@HTTP/1.1 200 OK^M
Date: Sat, 07 Jan 2012 21:22:42 GMT^M
Server: Apache/2.2.15 (Unix) DAV/2 mod_ssl/2.2.15 OpenSSL/1.0.0d^M
Last-Modified: Fri, 06 Jan 2012 16:25:09 GMT^M
ETag: "10d-31-4b5de7e0d2340"^M
Accept-Ranges: bytes^M
Content-Length: 49^M
Keep-Alive: timeout=5, max=100^M
Connection: Keep-Alive^M
Content-Type: text/html^M
^M
It is secure!

Please let me know if some other information would be useful.


Well, that is certainly cacheable, which explains why it is in the 
cache ;)


BUT,
 * what are the client request headers? It is possible and in some 
agents likely that they are requesting re-validation and new content 
to be fetched.


 * does a newer version work better? ssl-bump is only supported well 
in the 3.1.13 and later releases. Please try a newer release and see 
if the problem disappears.


Amos



Re: [squid-users] Active Directory Integrated Squid Proxy Guide

2012-01-10 Thread Amos Jeffries

On 11.01.2012 15:18, James Robertson wrote:

I forgot to mention that I'm running Server 2008 R2 domain
controllers.  Secondly, when I do a 'locate PROXY.keytab' I can't find
it, which should be in the squid correctly if I'm not mistaken.


You may need to run "updatedb" to update the index before running the
find command.

I'm currently running Squid 2.7 (I'm a little afraid to do the upgrade
and mess something up, and don't know how yet) but in the config line


The same caveats apply as for multi-instance installations. Keep the 
port, cache_dir etc. separated.


The Debian packages are named differently, so you can install them side 
by side; the packages take care of most of the problems related to 
default locations and helpers for you.


I have not done it myself, so watch the install options apt gives you 
to ensure it's not removing one package during install of the other.


HTH
Amos



Re: [squid-users] Active Directory Integrated Squid Proxy Guide

2012-01-10 Thread James Robertson
> I forgot to mention that I'm running Server 2008 R2 domain
> controllers.  Secondly, when I do a 'locate PROXY.keytab' I can't find
> it which should be in the squid correctly if I'm not mistaken.

You may need to run "updatedb" to update the index before running the
find command.

>> I'm currently running Squid 2.7 (I'm a little afraid to do the upgrade
>> and mess something up, and don't know how yet) but in the config line
>> 'default_keytab_name = /etc/squid3/PROXY.keytab' you list Squid3.
>> Could that be a problem?

Yes, that's a problem.  Debian uses /etc/squid for v2 and /etc/squid3
for v3.  This will also be a problem in /etc/default/squid3 and its
contents.
You may be better off using an independent directory, or even the
default keytab path, in case you forget about it in future, after
upgrades etc.

If you are doing this on a production system it's probably a bit risky
given that you are new to Linux and Squid - make sure you are taking
backups of your conf files and server along the way :).
If you have the option (perhaps through a VM) I would suggest setting
up a new dev/testing machine.  Until you implement the wpad stuff,
the dev/testing proxy will have no effect on your network.

Also, I don't know if negotiate_wrapper works with squid 2.X.  Perhaps
Markus or another list subscriber could clarify that?

>> As for my resolv.conf I simply have both of my internal DNS servers
>> listed.  Not quite sure what else to verify.  I've also added my Squid
>> box to the unlimited policy on my network to make sure nothing is
>> blocking it.

Are the hostnames of your KDCs correct in /etc/krb5.conf (in the
[realms] section)?  Can you resolve their hostnames from the squid
box?
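
For reference, a krb5.conf sketch of the checks above (the realm and KDC names are placeholders, not taken from the thread); the realm must be upper-case, and each kdc hostname must resolve from the squid box:

```
[libdefaults]
    default_realm = COMPANY.LOCAL

[realms]
    COMPANY.LOCAL = {
        kdc = dc1.company.local
        kdc = dc2.company.local
        admin_server = dc1.company.local
    }
```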


Re: [squid-users] Active Directory Integrated Squid Proxy Guide

2012-01-10 Thread berry guru
I forgot to mention that I'm running Server 2008 R2 domain
controllers.  Secondly, when I do a 'locate PROXY.keytab' I can't find
it which should be in the squid correctly if I'm not mistaken.



On Tue, Jan 10, 2012 at 5:00 PM, berry guru  wrote:
> Thanks for responding back James.  I'm new to Linux, and new to Squid
> but I'm very intrigued and would like to learn.  So I did a little
> more digging through the configuration and I came across something.
> I'm currently running Squid 2.7 (I'm a little afraid to do the upgrade
> and mess something up, and don't know how yet) but in the config line
> 'default_keytab_name = /etc/squid3/PROXY.keytab' you list Squid3.
> Could that be a problem?
>
> As for my resolv.conf I simply have both of my internal DNS servers
> listed.  Not quite sure what else to verify.  I've also added my Squid
> box to the unlimited policy on my network to make sure nothing is
> blocking it.
>
> How can I go about troubleshooting this with logs maybe, if possible?
>
>
> On Tue, Jan 10, 2012 at 1:15 PM, James Robertson  wrote
>> Hi Evan,
>>
>> You should probably double check your DNS on the proxy (resolv.conf)
>> and the domain and look for any typo's in that and your kerberos
>> config.
>>
>> The fact that it could not resolve one (or possibly more) of your KDC
>> addresses could cause you problems later on - especially when msktutil
>> needs to do --auto-updates.
>>
>> Cheers
>>
>> On 11 January 2012 07:33, berry guru  wrote:
>>> Hi James,
>>>
>>> So I don't mean to be a pest, but I've ran into another issue.  I've
>>> ran the kinit administrator command but I'm getting the following
>>> error:
>>>
>>> kinit: Cannot resolve network address for KDC in realm "COMPANY.LOCAL"
>>> while getting initial credentials.
>>>
>>> I poked around online and I saw a few issues regarding my error, but
>>> the fix was making the realm all caps.
>>>
>>>
>>> Cheers,
>>>
>>> Evan
>>>
>>>
>>> On Sun, Jan 8, 2012 at 9:58 PM, James Robertson  
>>> wrote:
 Hi Everyone,

 I just thought I would share a guide I am working on; it's not quite
 finished, so expect errors, typos etc.  I would love any feedback or
 critique about it.

 http://wiki.bitbinary.com/index.php/Active_Directory_Integrated_Squid_Proxy

 There are probably things that developers and users will cringe at;
 if so, I would like to know.

 Thanks for maintaining squid and for the friendly mailing lists.

 Kind Regards,

 James


Re: [squid-users] Active Directory Integrated Squid Proxy Guide

2012-01-10 Thread berry guru
Thanks for responding back James.  I'm new to Linux, and new to Squid
but I'm very intrigued and would like to learn.  So I did a little
more digging through the configuration and I came across something.
I'm currently running Squid 2.7 (I'm a little afraid to do the upgrade
and mess something up, and don't know how yet) but in the config line
'default_keytab_name = /etc/squid3/PROXY.keytab' you list Squid3.
Could that be a problem?

As for my resolv.conf I simply have both of my internal DNS servers
listed.  Not quite sure what else to verify.  I've also added my Squid
box to the unlimited policy on my network to make sure nothing is
blocking it.

How can I go about troubleshooting this with logs maybe, if possible?


On Tue, Jan 10, 2012 at 1:15 PM, James Robertson  wrote
> Hi Evan,
>
> You should probably double check your DNS on the proxy (resolv.conf)
> and the domain and look for any typo's in that and your kerberos
> config.
>
> The fact that it could not resolve one (or possibly more) of your KDC
> addresses could cause you problems later on - especially when msktutil
> needs to do --auto-updates.
>
> Cheers
>
> On 11 January 2012 07:33, berry guru  wrote:
>> Hi James,
>>
>> So I don't mean to be a pest, but I've ran into another issue.  I've
>> ran the kinit administrator command but I'm getting the following
>> error:
>>
>> kinit: Cannot resolve network address for KDC in realm "COMPANY.LOCAL"
>> while getting initial credentials.
>>
>> I poked around online and I saw a few issues regarding my error, but
>> the fix was making the realm all caps.
>>
>>
>> Cheers,
>>
>> Evan
>>
>>
>> On Sun, Jan 8, 2012 at 9:58 PM, James Robertson  
>> wrote:
>>> Hi Everyone,
>>>
>>> I just thought I would share a guide I am working on; it's not quite
>>> finished, so expect errors, typos etc.  I would love any feedback or
>>> critique about it.
>>>
>>> http://wiki.bitbinary.com/index.php/Active_Directory_Integrated_Squid_Proxy
>>>
>>> There are probably things that developers and users will cringe at;
>>> if so, I would like to know.
>>>
>>> Thanks for maintaining squid and for the friendly mailing lists.
>>>
>>> Kind Regards,
>>>
>>> James


Re: [squid-users] Fwd: Forwarding Integrated Authentication for Terminal Server / Citrix users.

2012-01-10 Thread Amos Jeffries

On 11.01.2012 02:55, Jason Fitzpatrick wrote:

Hi all

We are in the process of replacing an ISA cluster with a Squid cluster
(Squid Cache: Version 3.1.14) and have run into some issues with the
forwarding of credentials to an upstream proxy.
forwarding of credentials to an upstream proxy.

Our setup is as follows (names and IP addresses just for explanation
purposes)

Netscaler load-balancer       10.0.0.10:8080 [squid.domain.local]

Squid Node 1                  10.0.0.11:8080 [squidnode1.domain.local] - sibling
Squid Node 2                  10.0.0.12:8080 [squidnode2.domain.local] - sibling

Upstream Websense             10.0.0.20:8080 [websense.domain.local] - parent

Upstream Transparent Proxy    10.1.0.10:8080 [parent.domain.local] - parent


Clients connect in from within a Citrix / Terminal Server environment
to the load-balancer, which in turn forwards the TCP connection to one
of the squid nodes (load-balanced / round robin with failover).
Squid then forwards the connections on to the Websense system using
the following directive from squid.conf (example from node 1):

cache_peer 10.0.0.20 parent 8080 3130 no-query login=PASS weight=4
cache_peer 10.0.0.12 sibling 8080 3130 login=PASS

The Websense (running on a Linux platform) then authenticates the
users and, based on its access rules, forwards the request on to the
upstream server and off to the internet.

Our issue is that the Websense does not seem to be authenticating all
Terminal Server / Citrix users correctly. It is set up to use IWA with
a fall back to NTLM authentication; it seems to be authenticating the
first connection via squid from the IP address of the TS, but not the
following ones.


"Seems" was probably a good choice of word there. Consider: how is this 
authentication happening? Based on what details?


 1) HTTP is a stateless protocol. Multiple users' requests can leave 
Squid sharing one TCP connection. => TCP- and IP-level details are not a 
good indication of the "user" viewing the response.


 2) Squid caches responses. Multiple users can share a single response. 
=> requesting client details are not a good indication of the "user" 
viewing any cacheable response.



Having eliminated TCP and IP, and possibly also HTTP information, as 
reliable, what is left: only non-cached responses, and requests with 
authentication credentials which are passed back explicitly.


The way Squid handles NTLM is to break HTTP performance and disable 
(1). But only for DIRECT traffic, when going through peers it does not 
always work. See below.





Websense seem to think that this is a problem with the squid
configuration, but I am not sure that this is true, as squid is only
forwarding on the authentication request to the Websense box. Does
Squid have the ability to differentiate between multiple users on a
single computer?


Yes. HTTP authentication supports multiple users in one request stream.
But NTLM is not user authentication. It is TCP connection 
authentication done over HTTP at layer 7. The difference and 
interactions can cause confusing side effects. Squid supports receiving 
and validating such authentication itself. Recent Squid also supports 
relaying it in www-auth logins to a web server to some degree 
(reliability varies a LOT across the Internet environment).


The problem in your config is that "login=PASS" only passes Basic 
proxy-auth credentials. If no Basic auth credentials are present Squid 
will erase the existing ones and create some Basic ones with any details 
it can find for that user.
 * NOTE that Proxy-Authentication header are hop-by-hop, So "passing 
credentials on" is not a matter of relaying headers, but of Squid 
logging itself into the remote server, which is not supporting NTLM 
proxy-auth.


The other part of the problem is whether websense is needing www-auth 
or proxy-auth. Probably proxy-auth which will not work in 3.1 due to the 
above lack of support.


For NTLM you need at minimum a squid release which supports both 
login=PASSTHRU and connection-auth=on. This is the actual pass-thru 
style of proxy-auth headers. Officially that is 3.2, but the PASSTHRU 
patches can be adjusted easily to 3.1. By itself the PASSTHRU is no 
guarantee either, we have reports of some as yet unidentified problems 
with NTLM. Test carefully before use.
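
As a hedged sketch of the peer line Amos describes, adapted from the cache_peer directives earlier in this thread (Squid 3.2+ syntax; test before relying on it):

```
# PASSTHRU relays Proxy-Authentication headers untouched instead of
# rewriting them as Basic; connection-auth=on pins the client and peer
# TCP connections so NTLM's connection-level handshake can complete.
cache_peer 10.0.0.20 parent 8080 3130 no-query weight=4 \
    login=PASSTHRU connection-auth=on
```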



Overall it is best to perform access controls at the point of entry to 
the system, rather than halfway across it, which means in those sibling 
Squids. The 3.2 login=NEGOTIATE option supports this, with the front-end 
squid performing authentication of the users (any of the HTTP auth types 
you like) and passing their name info backwards to Websense (if needed 
at all) down a connection secured with Squid's own Kerberos credentials.
 Note that this is how NTLM and Kerberos were originally designed to be 
used. Authenticating the TCP connection directly between two services 
with no middleware accessing the authenticated connection.



Amos


Re: [squid-users] RE: Squid as Network Monitor

2012-01-10 Thread jeffrey j donovan

On Jan 10, 2012, at 9:46 AM, Babelo Gmvsdm wrote:

> 
> I checked the permissions on /var/logs/squid3/, where everything is owned by 
> proxy:proxy (access.log, cache.log etc...)
> I ran squid3 -k rotate; the rotations worked well anyway.
> One more thing: when the PC which hosts squid uses itself as a proxy, 
> access.log populates.


Sounds similar to what I was doing last week with OS X 10.6. I had to modify the 
kernel settings to allow communication between en0 and en1: I had to enable 
IP forwarding for the system to operate in transparent mode, and I also had 
to turn IP scope routing off.
-j

> 
> So the squid app seems to work properly; it seems the problem comes from 
> the iptables rules, which do not redirect to squid.
> One more thing, I don't know why, but a ps aux | grep squid gives this:
> root   1456  0.0 0.1 43176  1732 ?  Ss Jan09 0:00 /usr/sbin/squid3 -YC -f /etc/squid3/squid.conf
> proxy  1465  0.0 1.6 80284 17172 ?  S  Jan09 0:27 (squid) -YC -f /etc/squid3/squid.conf
> 
> Do you know why I have 2 squid processes? (I have installed squid3, just 
> with an "apt-get install squid3".)
> cheers
> HerC.
> 
> 
>> From: hercul...@hotmail.com
>> To: squid-users@squid-cache.org
>> Subject: Squid as Network Monitor
>> Date: Tue, 10 Jan 2012 12:37:20 +0100
>> 
>> 
>> Hi,
>> I have built a machine with Squid and lightsquid, and I would like to 
>> use it just as a network monitor.
>> So I plugged ETH1 of the PC into a Cisco switch, on a port that receives 
>> all traffic sent to the internet.
>> Squid is started (transparent mode), ip_forward is set to 1, and I have 
>> put in this iptables rule:
>> iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-port 3128
>> But access.log does not populate, whereas in ntop, on the same machine, 
>> I see a lot of (http) traffic.
>> Something weird: the command iptables -L -t nat -v shows no match for 
>> the rule created.
>> At first I thought that ntop could be intercepting the traffic, but 
>> stopping it did not help.
>> Thanks for your future help.
>> Herc.
> 



Re: [squid-users] finding the bottleneck

2012-01-10 Thread jeffrey j donovan

On Jan 10, 2012, at 7:45 AM, E.S. Rosenberg wrote:

> Hi,
> We run a setup where our users are passing through 0-2 proxies before
> reaching the Internet:
> - https 0
> - http transparent 1 (soon also 2)
> - http authenticated 2
> 
> Lately we are experiencing some (extreme) slowness even though the
> load on the line is only about half the available bandwidth; we know
> that on the ISP side our traffic is also passing through all kinds of
> proxies/filters etc.
> I would like to somehow be able to see where the slowdowns are
> happening to rule out that it's not our side at fault, but I don't
> really know what tool/tools I could use to see what is going on here.
> 
> We suspect that the slowness may be related to the ISP doing
> Man-in-the-Middle on non-banking SSL traffic (as per request of
> management), but I really want to rule our side out first
> 
> Thanks,
> Eli


Hi Eli, are you caching, or going direct?


Re: [squid-users] Can't get a page

2012-01-10 Thread Amos Jeffries

On 11.01.2012 12:51, Wladner Klimach wrote:

So there's nothing to set up in squid.conf in order to create some
kind of workaround, is there?



The server admin for that site is the only one who can actually fix it.

The load balancer is IP-based, which prevents the usual cache_peer 
workaround from working. Unless you can somehow find out a publicly 
reachable IP of a working backend server and link straight to that. 
That has its own problems with reliability, since it is bypassing 
their recovery systems.


Amos



2012/1/10 Amos Jeffries:

On 11.01.2012 09:41, Wladner Klimach wrote:


Hello,

I can't download this page



https://clientes.smiles.com.br/eloyalty_ptb/start.swe?SWECmd=Login&SWECM=S&SWEHo=clientes.smiles.com.br
. Looks like some sort of problem is causing a timeout. Any clue what
might be happening?



It is working behind a BIGipServer load balancer system which is doing
strange things to the TCP connections. Every few attempts it fails to
connect.

Amos




Re: [squid-users] SSL interception: no hits

2012-01-10 Thread Amos Jeffries

On 11.01.2012 06:33, Damir Cosic wrote:

Hello,

I am trying to configure a Squid (v3.1.11) proxy for SSL connections
between hosts on the LAN and servers on the internet. The traffic is
routed through the host on which Squid runs and iptables are used to
redirect traffic to ports 80 and 443 to ports 3128 and 3130,
respectively. Simple HTTP caching works well. First attempt is a miss
and subsequent ones are hits. For HTTPS, however, there are no hits,
only misses, even though the requested page is in the Squid's cache. I
would greatly appreciate any help.

The Squid configuration is based on the default file, with following
modifications (I understand that some of these are security risks, but
currently it is in a testing environment and the only goal is to make it
work):

http_port 3128 intercept
https_port 3130 intercept ssl-bump cert=/etc/certs/beta-srv.crt
key=/etc/certs/beta-srv.key
always_direct allow all
ssl_bump allow all
sslproxy_cert_error allow all

The log entry when a client attempts to retrieve a page from a server:


Jan  2 23:51:10 beta squid: 1325573470.788 25 192.168.10.2
TCP_MISS/200 388 GET https://192.168.11.2/ - DIRECT/192.168.11.2
text/html

The cache file (the garbled part at the beginning is left out):

https://192.168.11.2/^@HTTP/1.1 200 OK^M
Date: Sat, 07 Jan 2012 21:22:42 GMT^M
Server: Apache/2.2.15 (Unix) DAV/2 mod_ssl/2.2.15 OpenSSL/1.0.0d^M
Last-Modified: Fri, 06 Jan 2012 16:25:09 GMT^M
ETag: "10d-31-4b5de7e0d2340"^M
Accept-Ranges: bytes^M
Content-Length: 49^M
Keep-Alive: timeout=5, max=100^M
Connection: Keep-Alive^M
Content-Type: text/html^M
^M
It is secure!

Please let me know if some other information would be useful.


Well, that is certainly cacheable, which explains why it is in the 
cache ;)


BUT,
 * what are the client request headers? It is possible and in some 
agents likely that they are requesting re-validation and new content to 
be fetched.


 * does a newer version work better? ssl-bump is only supported well in 
the 3.1.13 and later releases. Please try a newer release and see if the 
problem disappears.


Amos


Re: [squid-users] Can't get a page

2012-01-10 Thread Amos Jeffries

On 11.01.2012 09:41, Wladner Klimach wrote:

Hello,

I can't download this page

https://clientes.smiles.com.br/eloyalty_ptb/start.swe?SWECmd=Login&SWECM=S&SWEHo=clientes.smiles.com.br
. Looks like some sort of problem is causing a timeout. Any clue what
might be happening?



It is working behind a BIGipServer load balancer system which is doing 
strange things to the TCP connections. Every few attempts it fails to 
connect.


Amos


Re: [squid-users] RE: Squid as Network Monitor

2012-01-10 Thread Amos Jeffries

On 11.01.2012 03:46, Babelo Gmvsdm wrote:

I checked the permissions on /var/logs/squid3/, where everything is
owned by proxy:proxy (access.log, cache.log etc...)
I ran squid3 -k rotate; the rotations worked well anyway.
One more thing: when the PC which hosts squid uses itself as a proxy,
access.log populates.

So the squid app seems to work properly; it seems the problem comes
from the iptables rules, which do not redirect to squid.


Yes.

 * Look at the order of iptables rules for NAT rules which do things to 
the packet before your interception rule.


 * Look at your network cabling layout, and routing configurations. To 
see if packets actually flow through the Squid machine when the client 
tries to connect directly to the Internet.


 * Check that your Squid machine is configured as a network router 
properly. In order to pass the packets through it, this is required. 
Neither a "bridge" nor a standalone proxy server is enough for NAT 
interception.


One or all of these could be the problem. You need to find out which; 
beyond that we cannot help you much.
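
As a sketch, the three checks above on a Linux router look something like this (the interface name and ports are assumptions carried over from earlier in the thread):

```
# 1) NAT rule order: list PREROUTING with hit counters to spot
#    earlier rules stealing the packets before the REDIRECT rule
iptables -t nat -L PREROUTING -n -v --line-numbers

# 2) Confirm client packets actually transit this box
tcpdump -ni eth1 tcp port 80

# 3) Router configuration: forwarding must be on, and the REDIRECT
#    rule must target Squid's intercept port
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j REDIRECT --to-ports 3128
```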


Amos



One more thing: I don't know why, but a ps aux | grep squid gives this:
root   1456  0.0 0.1 43176  1732 ?  Ss Jan09 0:00 /usr/sbin/squid3 -YC -f /etc/squid3/squid.conf
proxy  1465  0.0 1.6 80284 17172 ?  S  Jan09 0:27 (squid) -YC -f /etc/squid3/squid.conf

Do you know why I have 2 squid processes? (I have installed squid3,
just with an "apt-get install squid3".)

Do you know why I have 2 squid processes? (i have installed squid3,
just with an "apt-get install squid3" )


One, "(squid)", is the worker which processes all the HTTP traffic. The 
other, "/usr/sbin/squid3", is the master control process which ensures 
there is always a worker available, even if one crashes for some reason.


Amos



[squid-users] Can't get a page

2012-01-10 Thread Wladner Klimach
Hello,

I can't download this page
https://clientes.smiles.com.br/eloyalty_ptb/start.swe?SWECmd=Login&SWECM=S&SWEHo=clientes.smiles.com.br
. Looks like some sort of problem is causing timeout. Any clue of what
might be happening?

Regards,

Wladner


Re: [squid-users] Active Directory Integrated Squid Proxy Guide

2012-01-10 Thread berry guru
Wow! I just feel dumb now.  That's my mistake.  I copied and pasted
and it worked like a charm.  Thanks James!  Excellent wiki on the
topic too, it's very helpful.

On Mon, Jan 9, 2012 at 5:43 PM, James Robertson  wrote:
>> I'm having some trouble with the Kerberos part where I need to install
>> the following package:
>> apt-get install libsasl2-modules-gssapi-mit libsasl2-modules
>>
>> It returns
>> unable to locate package libsasl2-modules-gssapi-mit
>> unable to locate package libsas12-modules
>
> Are you copying and pasting the command or typing it?
>
> You have a typo in the output from apt-get "libsas12-modules" (note
> the 1 where you should have a lower case "L"), but not in the apt-get
> install command?


[squid-users] Re:Squid 3.2 snapshot ... vanishing processes

2012-01-10 Thread alex sharaz
Well, I managed to strip out all the comments and most blank lines in the 
config. This is still happening. Help!


Here's the config file

auth_param basic program /usr/local/squid/libexec/basic_pam_auth -o
auth_param basic children 10
auth_param basic realm wwwcache3-east Note: Your UserName must be of the form use...@hull.ac.uk

auth_param basic credentialsttl 2 hours

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) machines

acl WindowsUpdate  dstdomain -i "/usr/local/squid/etc/windowsupdate.txt"
acl BlockedUrls  url_regex -i "/usr/local/squid/etc/blockedurls"
acl McAfee  dstdomain -i "/usr/local/squid/etc/McAfee.txt"
acl Norton360  dstdomain -i "/usr/local/squid/etc/Norton360.txt"
acl to_localdomain dstdomain hull.ac.uk
acl to_newcomms dstdomain newcomms.hull.ac.uk
acl must-route-directly dstdomain "/usr/local/squid/etc/direct.acl"
acl CONNECT method CONNECT
acl wuCONNECT dstdomain www.update.microsoft.com
acl wuCONNECT dstdomain sls.microsoft.com
acl from_localhost src 127.0.0.1/32
acl to_hullnet dst 150.237.0.0/16
acl DOPOSTS method POST
acl trustedhosts src 150.237.128.0/24
acl snmppublic snmp_community HullPublic
acl zenoss src 150.237.128.173/32
acl mustauth proxy_auth REQUIRED
acl to_wwwcache1-east dstdomain wwwcache1-east.hull.ac.uk
acl to_wwwcache2-east dstdomain wwwcache2-east.hull.ac.uk
acl to_wwwcache3-east dstdomain wwwcache3-east.hull.ac.uk
acl to_wwwcache4-east dstdomain wwwcache4-east.hull.ac.uk
acl to_wwwcache1-west dstdomain wwwcache1-west.hull.ac.uk
acl to_wwwcache2-west dstdomain wwwcache2-west.hull.ac.uk
acl to_wwwcache3-west dstdomain wwwcache3-west.hull.ac.uk
acl from_wwwcache1-east srcdomain wwwcache1-east.hull.ac.uk
acl from_wwwcache2-east srcdomain wwwcache2-east.hull.ac.uk
acl from_wwwcache3-east srcdomain wwwcache3-east.hull.ac.uk
acl from_wwwcache4-east srcdomain wwwcache4-east.hull.ac.uk
acl from_wwwcache1-west srcdomain wwwcache1-west.hull.ac.uk
acl from_wwwcache2-west srcdomain wwwcache2-west.hull.ac.uk
acl from_wwwcache3-west srcdomain wwwcache3-west.hull.ac.uk
acl to_slbrealsrv1 dstdomain slb-realsrv1.hull.ac.uk
acl to_slbrealsrv2 dstdomain slb-realsrv2.hull.ac.uk
acl to_slbrealsrv3 dstdomain slb-realsrv3.hull.ac.uk
acl to_slbrealsrv4 dstdomain slb-realsrv4.hull.ac.uk
acl to_slbrealsrv5 dstdomain slb-realsrv5.hull.ac.uk
acl to_slbrealsrv6 dstdomain slb-realsrv6.hull.ac.uk
acl alex-osx src 150.237.74.2/32
acl hullnet-banned src 150.237.11.0/24
acl hullnet-banned src 150.237.27.0/24
acl hullnet-banned src 150.237.29.0/24
acl hullnet-banned src 150.237.60.0/22
acl hullnet-banned src 150.237.139.0/24
acl hullnet-banned src 150.237.157.0/24
acl hullnet-banned src 150.237.161.0/24
acl hullnet-banned src 150.237.162.0/24
acl hullnet-banned src 150.237.163.0/24
acl hullnet-banned src 150.237.165.0/24
acl hullnet-banned src 150.237.166.0/24
acl hullnet-banned src 150.237.179.0/24
acl hullnet-banned src 150.237.184.0/22
acl hullnet-banned src 150.237.188.0/24
acl hullnet-banned src 150.237.189.0/24
acl hullnet-banned src 150.237.190.0/24
acl hullnet-banned src 150.237.207.0/25
acl hullnet-banned src 150.237.227.0/24
acl hullnet-banned src 150.237.72.0/26
acl hullnet-banned src 150.237.73.0/26
acl hullnet-banned src 150.237.226.128/25
acl hullnet-banned src 150.237.192.0/23
acl hullnet-banned src 150.237.167.0/24
acl hullnet-banned src 150.237.73.64/26
acl hullnet-banned src 150.237.73.128/26

acl from-maletl src 195.195.161.0/25
acl iplayer url_regex iplayer.bbc.co.uk
acl worktime time MTWHF 08:00-17:00
acl PEERS srcdomain wwwcache2-east.hull.ac.uk wwwcache1-east.hull.ac.uk wwwcache4-east.hull.ac.uk
acl PEERS srcdomain wwwcache1-west.hull.ac.uk wwwcache2-west.hull.ac.uk wwwcache3-west.hull.ac.uk

acl PEERS srcdomain slb-realsrv1-east.hull.ac.uk
acl localnet src 150.237.0.0/16 #
acl SSL_ports port 443
acl SSL_ports port 444
acl SSL_ports port 563
acl SSL_ports port 8000
acl SSL_ports port 8443
acl SSL_ports port 2083
acl SSL_ports port 2087
acl SSL_ports port 2096
acl SSL_ports port 4643
acl SSL_ports port 9040
acl SSL_ports port 1863
acl SSL_ports port 3
acl SSL_ports port 1011
acl SSL_ports port 8030
acl SSL_ports port 8091
acl SSL_ports port 8010
acl SSL_ports port 2050
acl SSL_ports port 4443

acl SSL_ports port 443
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 443 563

[squid-users] SSL interception: no hits

2012-01-10 Thread Damir Cosic


Hello,

I am trying to configure a Squid (v3.1.11) proxy for SSL connections 
between hosts on the LAN and servers on the internet. The traffic is 
routed through the host on which Squid runs, and iptables is used to 
redirect traffic on ports 80 and 443 to ports 3128 and 3130, 
respectively. Simple HTTP caching works well: the first attempt is a 
miss and subsequent ones are hits. For HTTPS, however, there are no 
hits, only misses, even though the requested page is in Squid's cache. 
I would greatly appreciate any help.
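The redirect described above might be set up roughly like this (a sketch only; the interface name and LAN subnet are assumptions here, and only the port mappings 80→3128 and 443→3130 come from the description):

```
# redirect LAN web traffic into the local Squid (run as root)
iptables -t nat -A PREROUTING -i eth0 -s 192.168.10.0/24 -p tcp --dport 80  -j REDIRECT --to-port 3128
iptables -t nat -A PREROUTING -i eth0 -s 192.168.10.0/24 -p tcp --dport 443 -j REDIRECT --to-port 3130
```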


The Squid configuration is based on the default file, with following 
modifications (I understand that some of these are security risks, but 
currently it is in testing environment and the only goal is to make it 
work):


http_port 3128 intercept
https_port 3130 intercept ssl-bump cert=/etc/certs/beta-srv.crt key=/etc/certs/beta-srv.key

always_direct allow all
ssl_bump allow all
sslproxy_cert_error allow all

The log entry when a client attempts to retrieve a page from a server:

Jan  2 23:51:10 beta squid: 1325573470.788 25 192.168.10.2 TCP_MISS/200 388 GET https://192.168.11.2/ - DIRECT/192.168.11.2 text/html


The cache file (the garbled part at the beginning is left out):

https://192.168.11.2/^@HTTP/1.1 200 OK^M
Date: Sat, 07 Jan 2012 21:22:42 GMT^M
Server: Apache/2.2.15 (Unix) DAV/2 mod_ssl/2.2.15 OpenSSL/1.0.0d^M
Last-Modified: Fri, 06 Jan 2012 16:25:09 GMT^M
ETag: "10d-31-4b5de7e0d2340"^M
Accept-Ranges: bytes^M
Content-Length: 49^M
Keep-Alive: timeout=5, max=100^M
Connection: Keep-Alive^M
Content-Type: text/html^M
^M
It is secure!

Please let me know if some other information would be useful.

Best,

Damir




[squid-users] RE: Squid as Network Monitor

2012-01-10 Thread Babelo Gmvsdm

I checked the permissions on /var/logs/squid3/, where everything is owned by 
proxy:proxy (access.log, cache.log, etc.).
I ran squid3 -k rotate; the rotation worked well anyway.
One more thing: when the PC which hosts squid uses itself as a proxy, 
access.log does populate.

So the squid app seems to work properly; the problem seems to come from the 
iptables rules not redirecting traffic to squid.
One more thing, I don't know why, but ps aux | grep squid gives this:
root      1456  0.0  0.1 43176  1732 ?  Ss  Jan09  0:00 /usr/sbin/squid3 -YC -f /etc/squid3/squid.conf
proxy     1465  0.0  1.6 80284 17172 ?  S   Jan09  0:27 (squid) -YC -f /etc/squid3/squid.conf

Do you know why I have 2 squid processes? (I installed squid3 just with 
an "apt-get install squid3".)
cheers
HerC.


> From: hercul...@hotmail.com
> To: squid-users@squid-cache.org
> Subject: Squid as Network Monitor
> Date: Tue, 10 Jan 2012 12:37:20 +0100
>
>
> Hi,
> I have built a machine with a Squid, with lightsquid, and I would like to use 
> it just like a network monitor.
> So I plugged ETH1 of the PC into a Cisco switch, on a port that receives all 
> traffic sent to the internet.
> Squid is started (transparent mode), ip forwarding is set to 1, and I have 
> put in this iptables rule: iptables -t nat -A PREROUTING -i eth1 -p tcp 
> --dport 80 -j REDIRECT --to-port 3128
> but access.log does not populate, whereas in Ntop, on the same machine, I 
> see a lot of (http) traffic.
> Something weird: the command iptables -L -t nat -v shows no match for the 
> rule created.
> First I thought that ntop could be intercepting the traffic, but stopping it 
> did not help.
> Thanks for your future help.
> Herc.
  

[squid-users] Fwd: Forwarding Integrated Authentication for Terminal Server / Citrix users.

2012-01-10 Thread Jason Fitzpatrick
Hi all

We are in the process of replacing an ISA cluster with a Squid Cluster
(Squid Cache: Version 3.1.14) and have run into some issues with the
forwarding of credentials to an upstream proxy.

Our setup is as follows (names and IP addresses just for explanation purposes)

NetScaler load-balancer      10.0.0.10:8080 [squid.domain.local]

Squid Node 1                 10.0.0.11:8080 [squidnode1.domain.local] - sibling
Squid Node 2                 10.0.0.12:8080 [squidnode2.domain.local] - sibling

Upstream Websense            10.0.0.20:8080 [websense.domain.local] - parent

Upstream Transparent Proxy   10.1.0.10:8080 [parent.domain.local] - parent

Clients connect from within a Citrix / Terminal Server environment
to the load-balancer, which in turn forwards the TCP connection to
one of the squid nodes (load-balanced / round-robin with failover).
Squid then forwards the connections on to the Websense system using
the following directives from squid.conf (example from node 1):

cache_peer 10.0.0.20 parent 8080 3130 no-query login=PASS weight=4
cache_peer 10.0.0.12 sibling 8080 3130 login=PASS

The Websense (running on a Linux platform) then authenticates the users
and, based on its access rules, forwards the request on to the
upstream server and off to the internet.

Our issue is that the Websense does not seem to be authenticating all
Terminal Server / Citrix users correctly. It is set up to use IWA with
a fallback to NTLM authentication, and it seems to authenticate only
the first connection via the squid from the IP address of the TS, but
not the following ones.

Websense seem to think that this is a problem with the squid
configuration but I am not sure that this is true as the squid is only
forwarding on the authentication request to the websense box. Does
Squid have the ability to differentiate between multiple users on a
single computer?
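One thing possibly worth noting: NTLM/IWA authenticates the TCP connection rather than each individual request, so forwarding it through a peer generally relies on connection pinning. A hedged sketch of what that might look like in squid.conf (connection-auth=on is a real Squid cache_peer option, but whether it resolves this particular setup is an assumption):

```
# sketch: allow NTLM handshakes to pin the client connection to the parent
cache_peer 10.0.0.20 parent 8080 3130 no-query login=PASS weight=4 connection-auth=on
```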

Has anyone had any experience of a similar setup where authentications
are being processed by an upstream server for Terminal Server users?

Thanks

Jay

--

"The only difference between saints and sinners is that every saint
has a past while every sinner has a future. "
— Oscar Wilde


Re: [squid-users] Squid as Network Monitor

2012-01-10 Thread jeffrey j donovan

On Jan 10, 2012, at 6:37 AM, Babelo Gmvsdm wrote:

> 
> Hi,
> I have built a machine with a Squid, with lightsquid, and I would like to use 
> it just like a network monitor.
> So I plugged ETH1 of the PC into a Cisco switch, on a port that receives all 
> traffic sent to the internet.
> Squid is started (transparent mode), ip forwarding is set to 1, and I have 
> put in this iptables rule: iptables -t nat -A PREROUTING -i eth1 -p tcp 
> --dport 80 -j REDIRECT --to-port 3128
> but access.log does not populate, whereas in Ntop, on the same machine, I 
> see a lot of (http) traffic.
> Something weird: the command iptables -L -t nat -v shows no match for the 
> rule created.
> First I thought that ntop could be intercepting the traffic, but stopping it 
> did not help.
> Thanks for your future help.
> Herc.


check permissions on the log files and verify the correct log file directory.

ls -la  /usr/local/squid/var/logs/

issue a squid -k rotate
-j

[squid-users] finding the bottleneck

2012-01-10 Thread E.S. Rosenberg
Hi,
We run a setup where our users are passing through 0-2 proxies before
reaching the Internet:
- https 0
- http transparent 1 (soon also 2)
- http authenticated 2

Lately we are experiencing some (extreme) slowness even though the
load on the line is only about half the available bandwidth. We know
that on the ISP side our traffic also passes through all kinds of
proxies/filters etc.
I would like to somehow be able to see where the slowdowns are
happening, to rule out our side as the cause, but I don't
really know what tools I could use to see what is going on here.

We suspect that the slowness may be related to the ISP doing
man-in-the-middle on non-banking SSL traffic (at the request of
management), but I really want to rule our side out first.

Thanks,
Eli


[squid-users] Squid as Network Monitor

2012-01-10 Thread Babelo Gmvsdm

Hi,
I have built a machine with a Squid, with lightsquid, and I would like to use 
it just like a network monitor.
So I plugged ETH1 of the PC into a Cisco switch, on a port that receives all 
traffic sent to the internet.
Squid is started (transparent mode), ip forwarding is set to 1, and I have put 
in this iptables rule: iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 
-j REDIRECT --to-port 3128
but access.log does not populate, whereas in Ntop, on the same machine, I 
see a lot of (http) traffic.
Something weird: the command iptables -L -t nat -v shows no match for the 
rule created.
First I thought that ntop could be intercepting the traffic, but stopping it 
did not help.
Thanks for your future help.
Herc.

RE: [squid-users] Squid only forwards GET requests to cache_peer

2012-01-10 Thread Leigh Wedding
 Jenny Lee  wrote: 
> 
> 
> > Date: Mon, 9 Jan 2012 15:53:22 +1100
> > From: leigh.wedd...@bigpond.com
> > To: squid-users@squid-cache.org
> > Subject: [squid-users] Squid only forwards GET requests to cache_peer
> > 
> > Hi,
> > 
> > I have a problem with squid only forwarding HTTP GET requests to 
> > cache_peers. My setup is that the corporate network has no access to the 
> > Internet, access is only via corporate wide http proxies. I also have 
> > another separate network (NET2, which does not have Internet access), which 
> > has only restricted access to the corporate network via a firewall. I am 
> > running a squid proxy in NET2 which should connect direct to various 
> > corporate WWW resources, and should connect to the corporate proxies for 
> > any WWW resources on the Internet. This all works fine for HTTP GET 
> > requests. However for HTTP HEAD requests (eg. needed for wget -N), it does 
> > not work for WWW resources on the Internet; Squid always tries to handle 
> > HEAD requests directly, it does NOT forward them to the defined 
> > cache_peers. I have 8 cache_peers defined as follows:
> > 
> > cache_peer 10.97.216.133 parent 8080 0 no-query round-robin
> > cache_peer 10.97.216.136 parent 8080 0 no-query round-robin
> > cache_peer 10.97.216.139 parent 8080 0 no-query round-robin
> > cache_peer 10.97.216.142 parent 8080 0 no-query round-robin
> > cache_peer 10.97.217.133 parent 8080 0 no-query round-robin
> > cache_peer 10.97.217.136 parent 8080 0 no-query round-robin
> > cache_peer 10.97.217.139 parent 8080 0 no-query round-robin
> > cache_peer 10.97.217.142 parent 8080 0 no-query round-robin
> > 
> > Can anyone shed any light on what might be the problem, and what I can do 
> > to fix it?
> > 
> > I am running squid 2.7.STABLE5 on SUSE Linux Enterprise Server 11 (x86_64) 
> > PL1.
> > 
> > Thanks,
> > Leigh.
> >
>  
> nonhierarchical_direct off
>  
> should fix it for you.
>  
> Jenny   

Thanks Jenny.  The nonhierarchical_direct documentation actually referred me
to never_direct (and hence always_direct) which provided an example of
exactly what I needed.  The problem is now fixed.
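A minimal squid.conf sketch of that kind of fix (the ACL name and domain pattern here are hypothetical; only the never_direct/always_direct directives come from the documentation mentioned above):

```
# sketch: corporate sites go direct, everything else must use the parents
acl corp_www dstdomain .example-corp.com
always_direct allow corp_www
never_direct allow all
```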

I thought it would be a simple solution, I just needed pointing in the right 
direction.

Thanks,
Leigh.