Re: [squid-users] Fwd: failure notice

2013-06-11 Thread Sean Boran
As regards the original post of this thread, after upgrading to v3.3.5,
my "zero byte" problems have evaporated.

As regards forwarded_for, I also had it off. However, enabling it
makes internal addresses visible. See:
http://www.squid-cache.org/Doc/config/forwarded_for/
That opens privacy/tracking issues for me. I think it should be left
off, or, as suggested below, enabled only for the specific sites that
really need it.
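
A minimal sketch of that per-site idea (untested; the ACL name is
hypothetical, and the exact interplay between forwarded_for and
request_header_access should be verified): keep forwarded_for on, but
strip the header for every destination that does not need it:

  acl xff_needed dstdomain .prefettura.it
  forwarded_for on
  # pass X-Forwarded-For only to the sites that require it
  request_header_access X-Forwarded-For allow xff_needed
  request_header_access X-Forwarded-For deny all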

I don't see what forwarded_for had to do with the "zero byte" problems though :-)

Sean


On 7 June 2013 15:50, Ict Security  wrote:
>
>  Hello Nuno!
> I think you are great; by removing "forwarded_for off" it works, and I
> think other sites with problems can be resolved!
> Some users experienced these problems and, to work around them, had to
> be NATted out without the proxy.
>
> Now I can work around other cases, and then I will let you know!
> Thank you again, for the moment, very very much!
> Francesco
>
> 2013/6/7 Nuno Fernandes :
> >
> > On Friday, 7 June 2013 10:26 WEST, Ict Security
> >  wrote:
> >
> >> Hello,
> >>
> >> I notice, in Squid 3.1.1 and previous versions, some problems when
> >> accessing some websites.
> >>
> >> It happens in both transparent and explicit proxy modes.
> >>
> >> As an example, this site cannot be opened behind Squid 3.1.1:
> >> http://www.prefettura.it
> >>
> >> It is an Italian government site.
> >> Like this one, there are some other sites that manifest problems with Squid...
> >>
> >> Thank you,
> >> Francesco Collini
> >
> >
> >
> >
> > Do you have "forwarded_for off" in your configuration? If so, remove it.
> > That site requires a valid X-Forwarded-For header:
> >
> > wget --header='X-Forwarded-For: 192.168.1.1' -S -O /dev/null www.prefettura.it   # WORKS
> > wget -S -O /dev/null www.prefettura.it                                           # WORKS
> > wget --header='X-Forwarded-For: unknown' -S -O /dev/null www.prefettura.it       # NOT WORKING
> >
> > Maybe they are checking that value. Better yet would be to use a header
> > ACL to remove that header for that specific site...
> >
> > Best regards,
> > Nuno Fernandes


Re: [squid-users] ssl interception causes "zero byte replies" sometimes

2013-06-06 Thread Sean Boran
I've started getting more and more "zero sized reply" messages in the
browser when visiting sslbump'ed sites again.
A reload in the browser usually works, but it's annoying for users.

Last night I got many of these while trying to access
https://drupal.org/user/password
In the logs there is not much:
TCP_MISS/502 4600 POST https://drupal.org/user/password -
PINNED/140.211.10.16 text/html

Are others having such issues? What is the current recommendation? I'm
still running trunk from December with the 19190-001.patch referenced
below.

Sean


On 20 December 2012 08:55, Sean Boran  wrote:
>
> I applied the first patch a few days back, no complaints so far.
>
> Sean
>
>
> On 13 December 2012 16:14, Alex Rousskov
>  wrote:
>>
>> On 12/11/2012 02:40 AM, Sean Boran wrote:
>> > Hi,
>> >
>> > It happens a few times daily that on submitting a login request to
>> > sites like Atlassian Confluence (not just at Atlassian, but elsewhere
>> > too), or Redmine, the user gets a screen "The requested URL could
>> > not be retrieved" with a "zero sized reply".
>> >
>> > It does not happen every time.
>> > If one refreshes the browser it is ok.
>> > If the destination is excluded from SSL interception, it does not
>> > happen.
>>
>> Yes, this is a known issue with bumped requests and persistent
>> connection races. Our patch for this bug is available at
>> http://article.gmane.org/gmane.comp.web.squid.devel/19190
>>
>> and we are also working on a better approach to address the same bug:
>> http://article.gmane.org/gmane.comp.web.squid.devel/19256
>>
>>
>> HTH,
>>
>> Alex.
>>
>>
>>
>


Re: [squid-users] Re: Kerberos load balancer and AD

2013-05-23 Thread Sean Boran
Referencing that "Kerberos-load-balancer-and-AD" thread: yes, it does work :-).
A user is created in AD, and an SPN with the LB FQDN points to that user.
That user is then used to create the keytab on each proxy.
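
In command form, roughly (a sketch distilled from the procedure posted
below in this archive; the account and FQDN names are hypothetical):

  # on a domain controller: map the LB's SPN onto one shared service account
  setspn -S HTTP/proxy.example.com my-kerb
  # on each squid: add that SPN to the keytab using the my-kerb password
  ktutil
  ktutil:  addent -password -p HTTP/proxy.example.com -k 5 -e rc4-hmac
  ktutil:  wkt /etc/krb5.keytab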

Sean

On 22 May 2013 22:41, SPG  wrote:
> Hi,
>
> then, with this option, you don't need to create an account for all squid
> servers and duplicate the SPN in each squid account. You only need an
> account for the load balancer service. I ask because I read this post this
> morning and I have doubts. Is it true?
>
> http://squid-web-proxy-cache.1019090.n4.nabble.com/kerberos-auth-failing-behind-a-load-balancer-td4658773.html
>
> A lot of thanks Markus.
>
>
>


Re: [squid-users] kerberos auth failing behind a load balancer

2013-05-23 Thread Sean Boran
Chiming in here about the Kemps.
I used the Kemps because they were available for this project. They have
worked quite well and are very easy to manage. HA works fine. Troubleshooting
is OK too (it looks like a BSD box under the hood).
We run L7 so that (as noted by Brett) I get to see the client IPs; Squid does
some routing and logging that require that.
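
For reference, a minimal sketch of logging the balancer-inserted client
IP in squid (untested; the balancer address and format name are
hypothetical, and follow_x_forwarded_for needs squid built with
--enable-follow-x-forwarded-for):

  # trust X-Forwarded-For from the balancer so the logged client address
  # is the real client rather than the LB
  acl lb src 192.0.2.10
  follow_x_forwarded_for allow lb
  # or log the raw header via a custom format
  logformat xff %ts.%03tu %>a "%{X-Forwarded-For}>h" %Ss/%03>Hs %<st %rm %ru
  access_log /var/log/squid/access.log xff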

I've not tried HAProxy or TPROXY yet.

Sean


On 23 May 2013 08:11, Eliezer Croitoru  wrote:
>
> On 5/23/2013 8:42 AM, Brett Lymn wrote:
>>
>> One problem with using L2 is that you then lose the ability to log the
>> client IP address, everything appears to come from the load balancer.
>> Using L7 you can, at least on some load balancers, insert a
>> X-FORWARDED-FOR header with the client IP in it so you can log this in
>> squid using a custom log line.
>
> Unless you use TPROXY, which is very simple to use if you understand the
> concepts and ideas.
> There is also the option of using LVS or the PROXY protocol in many cases.
> I don't remember if squid supports the PROXY protocol, but an L2 LB is far
> easier to debug and use than an L7 one, which requires much more CPU, RAM
> and other resources.
>
> Eliezer


Re: [squid-users] kerberos auth does not work for ftp traffic?

2013-04-17 Thread Sean Boran
Hmm.
I'm just using chrome as a client, so the same client as for http/s.
There is no authentication on the ftp server itself (for example
ftp://ftp.epson.com/); it's anonymous.
So that scenario should just work, since, as you note, the client->squid
link should negotiate as usual.

On 17 April 2013 09:53, Amos Jeffries  wrote:
> On 17/04/2013 6:56 p.m., Sean Boran wrote:
>>
>> Hi,
>>
>> Kerberos is authenticating http/s traffic for me from certain client
>> addresses just fine.
>> However ftp is being rejected, does the browser+squid not auth ftp in
>> the same way as http?
>>
>> If ftp does work with kerberos, is there a way (ACL) that ftp traffic
>> can be excluded from kerberos auth?
>>
>> Thanks in advance,
>>
>> Sean
>
>
> The FTP protocol only supports a form of Basic authentication, so Squid
> maps FTP server authentication to www-auth headers as the Basic scheme.
>
> The link between client and Squid is of course HTTP and can use the full
> range of HTTP schemes normally.
>
> The two levels of authentication, client->server and client->squid, are
> completely independent, so the client can log in with Negotiate/Kerberos to
> the proxy and Basic to the FTP server simultaneously. The main problem is
> the lack of proper HTTP support (or just Kerberos support) in FTP clients
> which claim to support HTTP proxies.
>
> Amos
>


[squid-users] Re: kerberos auth does not work for ftp traffic?

2013-04-17 Thread Sean Boran
One partial answer to my own question: in the proxypac, ftp traffic
could be diverted to another proxy:
  if (shExpMatch(url, "ftp:*")) {
return "PROXY otherproxy.mysite.ch:80";
  }
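
Alternatively, squid itself could exempt ftp:// URLs from authentication
with a proto ACL (an untested sketch; the ACL names are hypothetical, and
the allow rule must come before the auth-requiring http_access rules):

  acl ftpurls proto FTP
  http_access allow ftpurls our_networks
  # ... the usual proxy_auth rules follow for everything else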

On 17 April 2013 08:56, Sean Boran  wrote:
> Hi,
>
> Kerberos is authenticating http/s traffic for me from certain client
> addresses just fine.
> However ftp is being rejected, does the browser+squid not auth ftp in
> the same way as http?
>
> If ftp does work with kerberos, is there a way (ACL) that ftp traffic
> can be excluded from kerberos auth?
>
> Thanks in advance,
>
> Sean


[squid-users] kerberos auth does not work for ftp traffic?

2013-04-16 Thread Sean Boran
Hi,

Kerberos is authenticating http/s traffic for me from certain client
addresses just fine.
However ftp is being rejected, does the browser+squid not auth ftp in
the same way as http?

If ftp does work with kerberos, is there a way (ACL) that ftp traffic
can be excluded from kerberos auth?

Thanks in advance,

Sean


Fwd: [squid-users] Re: Re: kerberos auth failing behind a load balancer

2013-03-26 Thread Sean Boran
Hi,

FYI ...  I got the two squids working behind the (Kemp) load balancer
with kerberos auth.

Procedure:
0. myproxy.vptt.ch points to the IP of the load balancer. This is
referenced in wpad.dat or browser settings. Squid runs on port 80, so
the URL of the proxy is http://myproxy.ch:80

1. create an AD service account;
  let's call it my-kerb
2. add an SPN for the LB to that AD account. Did this on windows:
setspn -S http/myproxy.ch my-kerb

3. create a keytab on each squid
rm /etc/krb5.keytab
net ads keytab CREATE HTTP -U my-kerb

ktutil
ktutil:  rkt /etc/krb5.keytab
ktutil:  addent -password -p HTTP/myproxy.ch -k 5 -e rc4-hmac   (use the my-kerb passwd)
ktutil:  wkt /etc/krb5.keytab

chmod 644 /etc/krb5.keytab   (or use a group to allow the squid user
to read it).
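
4. verify (a sketch): on each proxy the keytab should now show the
balanced name, with identical kvno values across the proxies:
klist -ekt /etc/krb5.keytab | grep HTTP/myproxy.ch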


Regards,

Sean Boran


Re: [squid-users] Re: Re: kerberos auth failing behind a load balancer

2013-03-14 Thread Sean Boran
Markus Moeller  wrote:
> Hi Sean,
>
>  Can you do a klist -ekt  on both squid servers and send me
> the output ? I assume you are missing entries.
>
> Markus
>
> "Sean Boran"  wrote in message
> news:CAOnghjtWpc0fPBVVB=yf3beglgfrrf1jqoxlzvbfhuhbvyl...@mail.gmail.com...
>
> (sorry for the slow answer, an over-eager spam filter swallowed this msg).
>
> In wireshark, the server name sent in the ticket is correct
> (proxy.example.com) , encryption is rc4-hmac and knvo=5.
> This is the same kvno as seen in "klist -ekt /etc/krb5.keytab" (with
> des-cbc-crc, des-cbc-md5, arcfour-hmac).
>
> Now there are two squids behind the balancer; one of them will behave
> correctly and accept kerberos authentication to the balanced  proxy
> name. (I had not realised the second one worked before). Comparing the
> quid and kerb config does not explain the difference.
>
> However on a windows client, querying SPN for the balanced name only
> lists the squid proxy that works(proxy2) and no mention of proxy3.
>
> C:\temp>cscript spn_query.vbs http/proxy.example.com example.net
> CN=proxy2,OU=Ubuntu,OU=Server,..
> O,DC=example,DC=net
> Class: computer
> Computer DNS: proxy2.example.com
> -- http/proxy.example.com
> -- HTTP/proxy.example.com/proxy2
> -- HTTP/proxy.example.com/proxy2.example.com
> -- HTTP/proxy2
> -- HTTP/proxy2.example.com
> -- HOST/proxy2.example.com
> -- HOST/PROXY2
>
> Next, I tried to use the windows tool setspn to add an SPN for proxy3:
> setspn -S http/proxy.example.com proxy3
> but it says "Duplicate SPN found, aborting operation!",
> which makes me think I'm misunderstanding. Is it not possible to
> assign the same SPN to the real names of both squids behind the
> balancer?
>
> Thanks,
>
> Sean
>
>
> On 1 March 2013 21:06, Markus Moeller  wrote:
>>
>> That should work. What do you see in Wireshark when you look at the
>> traffic to the proxy?  If you expand the Negotiate header you should see
>> what the principal name and kvno are. Both must match what is in your
>> keytab (check with klist -ekt /etc/keytab).
>>
>> Markus
>>
>>
>> "Sean Boran"  wrote in message
>> news:caonghjuye0oyoomkquwl5frmnyozfrvuekslbnxyao0kel_...@mail.gmail.com...
>>
>> Hi,
>>
>> I’ve received (kemp) load balancers to put in front of squids to
>> provide failover.
>> The failover / balancing  works fine until I enable Kerberos auth on the
>> squid.
>>
>> Test setup:
>> Browser ==> Kemp balancer ==> Squid ==> Internet
>>             proxy.example.com   proxy3.example.com
>>
>> The client is Windows 7 in an Active Directory domain.
>> If the browser proxy is set to proxy3.example.com (bypassing the LB),
>> Kerberos auth works just fine, but via the kemp (proxy.example.com)
>> the browser prompts for a username/password, which is not accepted
>> anyway.
>>
>> Googling on Squid+LBs, the key is apparently to add a principal for the
>> LB,
>> e.g.
>> net ads keytab add HTTP/proxy.example.com
>>
>> In the logs (below), one can see the client sending back a Krb ticket
>> to squid, but it rejects it:
>> "negotiate_wrapper: Return 'BH gss_accept_sec_context() failed:
>> Unspecified GSS failure.  "
>> When I searched on that, one user suggested changing the encryption in
>> /etc/krb5.conf. I tried with the recommended squid settings (see below),
>> and also with none at all. The results were the same. Anyway, if
>> encryption were the issue, it would not work either via the LB or
>> directly.
>>
>>
>> Analysis:
>> -
>> When the client sent a request, squid replies with:
>>
>> HTTP/1.1 407 Proxy Authentication Required
>> Server: squid
>> X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
>> X-Cache: MISS from proxy3.example.com
>> Via: 1.1 proxy3.example.com (squid)
>>
>> OK so far. The client answers with a Kerberos ticket:
>>
>> Proxy-Authorization: Negotiate YIIWpgYGKwYBXXX
>>
>> UserRequest.cc(338) authenticate: header Negotiate
>> YIIWpgYGKwYBXXX
>> UserRequest.cc(360) authenticate: No connection authentication type
>> Config.cc(52) CreateAuthUser: header = 'Negotiate YIIWpgYGKwYBBQUC
>> auth_negotiate.cc(303) decode: decode Negotiate authentication
>> UserRequest.cc(93) valid: Validated. Auth::UserRequest '0x20d68d0'.
>> UserRequest.cc(51) authentic

Re: [squid-users] Re: kerberos auth failing behind a load balancer

2013-03-11 Thread Sean Boran
(sorry for the slow answer, an over-eager spam filter swallowed this msg).

In wireshark, the server name sent in the ticket is correct
(proxy.example.com) , encryption is rc4-hmac and knvo=5.
This is the same kvno as seen in "klist -ekt /etc/krb5.keytab" (with
des-cbc-crc, des-cbc-md5, arcfour-hmac).

Now there are two squids behind the balancer; one of them will behave
correctly and accept kerberos authentication to the balanced  proxy
name. (I had not realised the second one worked before). Comparing the
> squid and kerb configs does not explain the difference.

However on a windows client, querying SPN for the balanced name only
lists the squid proxy that works(proxy2) and no mention of proxy3.

C:\temp>cscript spn_query.vbs http/proxy.example.com example.net
CN=proxy2,OU=Ubuntu,OU=Server,..
O,DC=example,DC=net
Class: computer
Computer DNS: proxy2.example.com
-- http/proxy.example.com
-- HTTP/proxy.example.com/proxy2
-- HTTP/proxy.example.com/proxy2.example.com
-- HTTP/proxy2
-- HTTP/proxy2.example.com
-- HOST/proxy2.example.com
-- HOST/PROXY2

Next, I tried to use the windows tool setspn to add an SPN for proxy3:
setspn -S http/proxy.example.com proxy3
but it says "Duplicate SPN found, aborting operation!",
which makes me think I'm misunderstanding. Is it not possible to
assign the same SPN to the real names of both squids behind the
balancer?

Thanks,

Sean


On 1 March 2013 21:06, Markus Moeller  wrote:
> That should work. What do you see in Wireshark when you look at the traffic
> to the proxy?  If you expand the Negotiate header you should see what the
> principal name and kvno are. Both must match what is in your keytab (check
> with klist -ekt /etc/keytab).
>
> Markus
>
>
> "Sean Boran"  wrote in message
> news:caonghjuye0oyoomkquwl5frmnyozfrvuekslbnxyao0kel_...@mail.gmail.com...
>
> Hi,
>
> I’ve received (kemp) load balancers to put in front of squids to
> provide failover.
> The failover / balancing  works fine until I enable Kerberos auth on the
> squid.
>
> Test setup:
> Browser ==> Kemp balancer ==> Squid ==> Internet
>             proxy.example.com   proxy3.example.com
>
> The client is Windows 7 in an Active Directory domain.
> If the browser proxy is set to proxy3.example.com (bypassing the LB),
> Kerberos auth works just fine, but via the kemp (proxy.example.com)
> the browser prompts for a username/password, which is not accepted
> anyway.
>
> Googling on Squid+LBs, the key is apparently to add a principal for the LB,
> e.g.
> net ads keytab add HTTP/proxy.example.com
>
> In the logs (below), one can see the client sending back a Krb ticket
> to squid, but it rejects it:
> "negotiate_wrapper: Return 'BH gss_accept_sec_context() failed:
> Unspecified GSS failure.  "
> When I searched on that, one user suggested changing the encryption in
> /etc/krb5.conf. I tried with the recommended squid settings (see below),
> and also with none at all. The results were the same. Anyway, if
> encryption were the issue, it would not work either via the LB or
> directly.
>
>
> Analysis:
> -
> When the client sent a request, squid replies with:
>
> HTTP/1.1 407 Proxy Authentication Required
> Server: squid
> X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
> X-Cache: MISS from gsiproxy3.vptt.ch
> Via: 1.1 gsiproxy3.vptt.ch (squid)
>
> OK so far. The client answers with a Kerberos ticket:
>
> Proxy-Authorization: Negotiate YIIWpgYGKwYBXXX
>
> UserRequest.cc(338) authenticate: header Negotiate
> YIIWpgYGKwYBXXX
> UserRequest.cc(360) authenticate: No connection authentication type
> Config.cc(52) CreateAuthUser: header = 'Negotiate YIIWpgYGKwYBBQUC
> auth_negotiate.cc(303) decode: decode Negotiate authentication
> UserRequest.cc(93) valid: Validated. Auth::UserRequest '0x20d68d0'.
> UserRequest.cc(51) authenticated: user not fully authenticated.
> UserRequest.cc(198) authenticate: auth state negotiate none. Received
> blob: 'Negotiate
> YIIWpgYGKwYBBQUCoIIWmjCCFpagMDAuBgkqhkiC9xIBAXX
> ..
> UserRequest.cc(101) module_start: credentials state is '2'
> helper.cc(1407) helperStatefulDispatch: helperStatefulDispatch:
> Request sent to negotiateauthenticator #1, 7740 bytes
> negotiate_wrapper: Got 'YR YIIWpgYGKwYBBQXXX
> negotiate_wrapper: received Kerberos token
> negotiate_wrapper: Return 'BH gss_accept_sec_context() failed:
> Unspecified GSS failure.  Minor code may provide more information.
>
>
> Logs for a (successful) auth without LB:
> .. as above 
> negotiate_wrapper: received Kerberos token
> negotiate_wrapper:

[squid-users] kerberos auth failing behind a load balancer

2013-02-28 Thread Sean Boran
Hi,

I’ve received (kemp) load balancers to put in front of squids to
provide failover.
The failover / balancing  works fine until I enable Kerberos auth on the squid.

Test setup:
Browser ==> Kemp balancer ==> Squid  ==> Internet
 proxy.example.com proxy3.example.com

The client is Windows 7 in an Active Directory domain.
If the browser proxy is set to proxy3.example.com (bypassing the LB),
Kerberos auth works just fine, but via the kemp (proxy.example.com)
the browser prompts for a username/password, which is not accepted
anyway.

Googling on Squid+LBs, the key is apparently to add a principal for the LB, e.g.
net ads keytab add HTTP/proxy.example.com

In the logs (below), one can see the client sending back a Krb ticket
to squid, but it rejects it:
"negotiate_wrapper: Return 'BH gss_accept_sec_context() failed:
Unspecified GSS failure.  "
When I searched on that, one user suggested changing the encryption in
/etc/krb5.conf. I tried with the recommended squid settings (see below),
and also with none at all. The results were the same. Anyway, if
encryption were the issue, it would not work either via the LB or
directly.


Analysis:
-
When the client sent a request, squid replies with:

HTTP/1.1 407 Proxy Authentication Required
Server: squid
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
X-Cache: MISS from gsiproxy3.vptt.ch
Via: 1.1 gsiproxy3.vptt.ch (squid)

OK so far. The client answers with a Kerberos ticket:

Proxy-Authorization: Negotiate YIIWpgYGKwYBXXX

UserRequest.cc(338) authenticate: header Negotiate
YIIWpgYGKwYBXXX
UserRequest.cc(360) authenticate: No connection authentication type
Config.cc(52) CreateAuthUser: header = 'Negotiate YIIWpgYGKwYBBQUC
auth_negotiate.cc(303) decode: decode Negotiate authentication
UserRequest.cc(93) valid: Validated. Auth::UserRequest '0x20d68d0'.
UserRequest.cc(51) authenticated: user not fully authenticated.
UserRequest.cc(198) authenticate: auth state negotiate none. Received
blob: 'Negotiate
YIIWpgYGKwYBBQUCoIIWmjCCFpagMDAuBgkqhkiC9xIBAXX
..
UserRequest.cc(101) module_start: credentials state is '2'
helper.cc(1407) helperStatefulDispatch: helperStatefulDispatch:
Request sent to negotiateauthenticator #1, 7740 bytes
negotiate_wrapper: Got 'YR YIIWpgYGKwYBBQXXX
negotiate_wrapper: received Kerberos token
negotiate_wrapper: Return 'BH gss_accept_sec_context() failed:
Unspecified GSS failure.  Minor code may provide more information.


Logs for a (successful) auth without LB:
 .. as above 
 negotiate_wrapper: received Kerberos token
 negotiate_wrapper: Return 'AF oYGXXA== u...@example.net


- configuration ---
Ubuntu 12.04 + std kerberos. Squid 3.2 bzr head from late Jan.
- squid.conf:
- debug_options ALL,2 29,9 (to catch auth)
auth_param negotiate program
/usr/local/squid/libexec/negotiate_wrapper_auth -d --kerberos
/usr/local/squid/libexec/negotiate_kerberos_auth -s GSS_C_NO_NAME
--ntlm /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param negotiate children 20 startup=20 idle=20
auth_param negotiate keep_alive on

- The LB is configured as a Generic Proxy (it does not try to interpret
the HTTP stream), with Layer 7 transparency
  (it forwards traffic to the squid, the squid sees the real client IP,
and squid traffic is routed back through the LB).
   I've tried playing with the LB Layer 7 settings, to no avail.

Samba:
net ads join -U USER
net ads testjoin
  Join is OK

net ads keytab add HTTP -U USER
net ads keytab add HTTP/proxy.example.com  -U USER
chgrp proxy /etc/krb5.keytab
chmod 640 /etc/krb5.keytab
strings /etc/krb5.keytab   # check contents
net ads keytab list

/etc/krb5.conf
 [libdefaults]
default_realm = EXAMPLE.NET
kdc_timesync = 1
ccache_type = 4
forwardable = true
proxiable = true
fcc-mit-ticketflags = true
default_keytab_name = FILE:/etc/krb5.keytab
dns_lookup_realm = no
ticket_lifetime = 24h

[realms]
EXAMPLE.net = {
kdc = ldap.EXAMPLE.net
master_kdc = ldap.EXAMPLE.net
admin_server = ldap.EXAMPLE.net
default_domain = EXAMPLE.net
}
[domain_realm]
.corproot.net = EXAMPLE.NET
corproot.net = EXAMPLE.NET


Any suggestions on where I could dig further?

Thanks in advance,

Sean Boran


Re: [squid-users] Auth Kerberos and AD Group

2013-01-03 Thread Sean Boran
Well, that looks like the LDAP "BINDUSER" is not being specified
correctly, or it does not have enough permissions.
Read up on OpenLDAP :-)

Sean


> On 2 January 2013 16:17, Noc Phibee Telecom 
> wrote:
>>
>> Thanks,
>>
>> I have an error:
>>
>> # search result
>> search: 2
>> result: 1 Operations error
>> text: 04DC: LdapErr: DSID-0C0906DD, comment: In order to perform this
>> operation a successful bind must be completed on the connection., data 0,
>> v1772
>>
>>
>>
>> do you know this error ?
>>
>>
>>
>> On 27/12/2012 16:28, Sean Boran wrote:
>>
>>> ldapsearch -x -D 'cn=BINDUSER,ou=SOMETHING,ou=SOMETHING,dc=mydomain,dc=net'
>>> -b 'dc=mydomain,dc=net' '(cn=USERTOLOOKFOR)' -h ldap.mydomain.net -W
>>>
>>>
>>> On 26 December 2012 14:56, Kinkie  wrote:
>>>>
>>>> Hi,
>>>>Active Directory exposes LDAP APIs, so you should be able to use any
>>>> LDAP browser, including the command-line ldapsearch utility.
>>>>
>>>> On Wed, Dec 26, 2012 at 2:43 PM, Noc Phibee Telecom
>>>>  wrote:
>>>>>
> >>>>> On 26/12/2012 13:03, Kinkie wrote:
>>>>>
>>>>>> On Dec 24, 2012 4:15 PM, "Noc Phibee Telecom"
>>>>>> 
>>>>>> wrote:
>>>>>>>
>>>>>>> Hi
>>>>>>>
> >>>>>>> If I want to change my authentication process from NTLM/Samba to
> >>>>>>> Kerberos, what is the process to add a group check?
> >>>>>>>
> >>>>>>> Actually I use wbinfo_group.pl, but with kerberos I can't start the
> >>>>>>> winbind process.
> >>>>>>> What is the solution?
>>>>>>
>>>>>> Hi,
>>>>>>  You should be able to use the LDAP-based group authorization
>>>>>> helper
>>>>>> against Active Directory.
>>>>>>
>>>>>>
>>>>> Thanks for your answer.
>>>>>
> >>>>> Do you know how to browse Active Directory on linux?
>>>>>
>>>>> best regards
>>>>> Jerome
>>>>>
>>>>
>>>>
>>>> --
>>>>  /kinkie
>>>
>>>
>>
>


Re: [squid-users] Auth Kerberos and AD Group

2012-12-27 Thread Sean Boran
ldapsearch -x -D 'cn=BINDUSER,ou=SOMETHING,ou=SOMETHING,dc=mydomain,dc=net' \
  -b 'dc=mydomain,dc=net' '(cn=USERTOLOOKFOR)' -h ldap.mydomain.net -W
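
The squid helper can be tested directly on the command line in the same
spirit (a sketch; DNs and paths are hypothetical, modelled on the
ldapsearch above). It should print OK for valid credentials and ERR
otherwise:

echo 'USERTOLOOKFOR somepassword' | /usr/local/squid/libexec/basic_ldap_auth \
  -b 'dc=mydomain,dc=net' -D 'cn=BINDUSER,ou=SOMETHING,dc=mydomain,dc=net' \
  -W /etc/squid/ldappass.txt -f 'sAMAccountName=%s' -h ldap.mydomain.net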


On 26 December 2012 14:56, Kinkie  wrote:
> Hi,
>   Active Directory exposes LDAP APIs, so you should be able to use any
> LDAP browser, including the command-line ldapsearch utility.
>
> On Wed, Dec 26, 2012 at 2:43 PM, Noc Phibee Telecom
>  wrote:
>> On 26/12/2012 13:03, Kinkie wrote:
>>
>>> On Dec 24, 2012 4:15 PM, "Noc Phibee Telecom" 
>>> wrote:

>>>> Hi
>>>>
>>>> If I want to change my authentication process from NTLM/Samba to Kerberos,
>>>> what is the process to add a group check?
>>>>
>>>> Actually I use wbinfo_group.pl, but with kerberos I can't start the winbind
>>>> process.
>>>> What is the solution?
>>>
>>> Hi,
>>> You should be able to use the LDAP-based group authorization helper
>>> against Active Directory.
>>>
>>>
>>
>> Thanks for your answer.
>>
>> Do you know how to browse Active Directory on linux?
>>
>> best regards
>> Jerome
>>
>
>
>
> --
> /kinkie


Re: [squid-users] FW: SSL Bump "Zero Sized Reply"

2012-12-27 Thread Sean Boran
I was pointed to this patch, and it worked for me:
http://article.gmane.org/gmane.comp.web.squid.devel/19190

but apparently it's not the final solution to the problem:
http://article.gmane.org/gmane.comp.web.squid.devel/19256

Sean Boran



On 26 December 2012 21:17, Daniel Niasoff
 wrote:
>
> Just tried latest source from trunk and problem still occurs.
>
> Zero Sized Reply
>
> Squid did not receive any data for this request.
>
> -Original Message-
> From: Daniel Niasoff
> Sent: 26 December 2012 02:42
> To: 'squid-users@squid-cache.org'
> Subject: SSL Bump "Zero Sized Reply"
>
> :SSL Bump "Zero Sized Reply"
>
> Hi,
>
> I am using SSL Bump in 3.3.0.2.
>
> Here is my config.
>
> always_direct allow all
> ssl_bump server-first all
> cache deny ssl
> cache allow all
>
> 99% of the time it works ok but when I try to log in to certain sites or
> make payments on shopping sites I quite often get "zero sized reply" on
> submitting my details.
>
> This occurs whether I use squid as a transparent or explicit proxy.
>
> Paypal is a good example where this occurs.
>
> Any ideas?
>
> Thanks
>
> Daniel
>
>


[squid-users] icap dies on downloading www.gliffy.com plugin

2012-12-20 Thread Sean Boran
Hi,

The URL
http://www.gliffy.com/products/confluence-plugin/download/archive/gliffy-confluence-plugin-5.0.3.jar
consistently gives the error:
ICAP protocol error.
The system returned: [No Error]
This means that some aspect of the ICAP communication failed.
Some possible problems are:
The ICAP server is not reachable.
An Illegal response was received from the ICAP server.

In the squid log:
TCP_MISS/500 3347 GET
http://www.gliffy.com/products/confluence-plugin/download/archive/gliffy-confluence-plugin-5.0.3.jar
- HIER_DIRECT/63.246.25.247 text/html

Increasing icap debug to level2:
Fri Dec 21 06:14:03 2012, general, DEBUG
squidclamav_init_request_data: initializing request data handler.
Fri Dec 21 06:14:03 2012, general, DEBUG
squidclamav_check_preview_handler: processing preview header.
Fri Dec 21 06:14:03 2012, general, DEBUG
squidclamav_check_preview_handler: preview data size is 1024
Fri Dec 21 06:14:03 2012, general, DEBUG
squidclamav_check_preview_handler: X-Client-IP: 2001:918:x
Fri Dec 21 06:14:03 2012, general, DEBUG
squidclamav_check_preview_handler: URL requested:
http://www.gliffy.com/products/confluence-plugin/download/archive/gliffy-confluence-plugin-5.0.3.jar
Fri Dec 21 06:14:03 2012, general, DEBUG
squidclamav_check_preview_handler: Content-Type:
application/x-java-archive
Fri Dec 21 06:14:05 2012, general, Bug in the service. Please report
to the servive author
Fri Dec 21 06:14:05 2012, general, DEBUG
squidclamav_release_request_data: Releasing request data.

grep icap /etc/squid/squid.conf:
icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
icap_service service_req reqmod_precache bypass=0
icap://127.0.0.1:1344/squidclamav
icap_service service_resp respmod_precache bypass=0
icap://127.0.0.1:1344/squidclamav

egrep -v '^#|^ *$' /etc/c-icap/c-icap.conf
PidFile /var/run/c-icap/c-icap.pid
CommandsSocket /var/run/c-icap/c-icap.ctl
Timeout 300
MaxKeepAliveRequests 100
KeepAliveTimeout 600
StartServers 30
MaxServers 200
MinSpareThreads 10
MaxSpareThreads 20
ThreadsPerChild 10
MaxRequestsPerChild  0
Port 1344
User c-icap
Group nogroup
ServerAdmin r...@mydomain.ch
ServerName proxy.mydomain.ch
TmpDir /tmp
MaxMemObject 131072
DebugLevel 2
ModulesDir /usr/lib/c_icap
ServicesDir /usr/lib/c_icap
TemplateDir /usr/share/c_icap/templates/
TemplateDefaultLanguage en
LoadMagicFile /etc/c-icap/c-icap.magic
RemoteProxyUsers off
RemoteProxyUserHeader X-Authenticated-User
RemoteProxyUserHeaderEncoded on
ServerLog /var/log/c-icap/server.log
AccessLog /var/log/c-icap/access.log
Service squidclamav squidclamav.so
Service echo srv_echo.so

egrep -v '^#|^ *$' /etc/c-icap/srv_squidclamav.conf
maxsize 500
redirect http://myproxy.ch:8080/cgi-bin/clwarn.cgi
clamd_local /var/run/clamav/clamd.ctl
timeout 1
logredir 1
dnslookup 0
abort ^.*\.(ico|gif|png|jpg)$
abortcontent ^image\/.*$
abortcontent ^text\/.*$
abortcontent ^application\/x-javascript$
abortcontent ^video\/x-flv$
abortcontent ^video\/mp4$
abort ^.*\.swf$
abortcontent ^application\/x-shockwave-flash$
abortcontent ^.*application\/x-mms-framed.*$
whitelist .*\.clamav.net
whitelist .*sourceforge\.net/.*clamav


Any suggestions please? Can others verify this issue?

I'm running a new compile about a week old.

Thanks.


[squid-users] ssl interception causes "zero byte replies" sometimes

2012-12-11 Thread Sean Boran
Hi,

It happens a few times daily that on submitting a login request to
sites like Atlassian Confluence (not just at Atlassian, but elsewhere
too), or Redmine, the user gets a screen "The requested URL could
not be retrieved" with a "zero sized reply".

It does not happen every time.
If one refreshes the browser it is ok.
If the destination is excluded from SSL interception, it does not happen.

In cache .log:
2012/12/07 15:51:28 kid1| helperOpenServers: Starting 1/40 'ssl_crtd' processes
2012/12/07 15:51:39 kid1| WARNING: HTTP: Invalid Response: No object
data received for https://support.atlassian.com/login.jsp AKA
support.atlassian.com/login.jsp
2012/12/07 15:51:39 kid1| WARNING: HTTP: Invalid Response: No object
data received for https://support.atlassian.com/favicon.ico AKA
support.atlassian.com/favicon.ico
(and 20 or so like this).

Running Squid 3.3 HEAD from Nov 30th.

Thanks,

Sean Boran
..


[squid-users] SSL servers not responding for 3 minutes

2012-12-07 Thread Sean Boran
Hi,

I get these a few times a day in cache.log
(squid-1): SSL servers not responding for 3 minutes

Running HEAD (3.3) from 30th Nov, with SSL interception.

In more detail (some auth strings replaced with A-LONG-STRING)
--
2012/12/07 11:12:47| negotiate_wrapper: Got 'YR A-LONG-STRING==' from squid
(length: 59).
2012/12/07 11:12:47| negotiate_wrapper: Decode 'A-LONG-STRING==' (decoded
length: 40).
2012/12/07 11:12:47| negotiate_wrapper: received type 1 NTLM token
2012/12/07 11:12:47| negotiate_wrapper: Return 'TT
TlRMTVNT-A-LONG-STRING-aAAA
'
2012/12/07 11:12:50 kid1| Closing HTTP port [::]:80
2012/12/07 11:12:50 kid1| storeDirWriteCleanLogs: Starting...
2012/12/07 11:12:50 kid1|   Finished.  Wrote 4179 entries.
2012/12/07 11:12:50 kid1|   Took 0.00 seconds (2538882.14 entries/sec).
FATAL: SSL servers not responding for 3 minutes
Squid Cache (Version 3.HEAD-BZR): Terminated abnormally.
CPU Usage: 185.988 seconds = 122.460 user + 63.528 sys
Maximum Resident Size: 549472 KB
Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
total space in arena:  131480 KB
Ordinary blocks:   128782 KB   1464 blks
Small blocks:   0 KB  1 blks
Holding blocks:  2348 KB  3 blks
Free Small blocks:  0 KB
Free Ordinary blocks:2697 KB
Total in use:  131130 KB 100%
Total free:  2697 KB 2%
2012/12/07 11:12:50 kid1| WARNING: freeing ssl_crtd helper with 80 requests
queued
2012/12/07 11:12:53 kid1| Starting Squid Cache version 3.HEAD-BZR for
x86_64-unknown-linux-gnu...
2012/12/07 11:12:53 kid1| Process ID 6699
2012/12/07 11:12:53 kid1| Process Roles: worker
2012/12/07 11:12:53 kid1| With 4096 file descriptors available
2012/12/07 11:12:53 kid1| Initializing IP Cache...
2012/12/07 11:12:53 kid1| DNS Socket created at [::], FD 7
2012/12/07 11:12:53 kid1| DNS Socket created at 0.0.0.0, FD 8


Any suggestions?

Thanks,

Sean


Re: AW: [squid-users] essential ICAP service is suspended

2012-12-04 Thread Sean Boran
Ah, I had presumed that squid would just retry on the next request.
You don't use bypass for either reqmod_precache or respmod_precache?

Sean

On 4 December 2012 16:08, Eliezer Croitoru  wrote:
> Unless you have problems such as the bug mentioned, or some other reason,
> don't use the bypass.
>
> Be aware that bypass leaves you with no knowledge of an existing problem.
> You can have the ICAP service almost "OFF" for weeks and you will not know
> about it.
>
> Regards,
> Eliezer
>
>
> On 12/4/2012 4:55 PM, Sean Boran wrote:
>>
>> HI,
>>
>> I'm now running HEAD (for the latest SSL bump fixes) at revision 8886 on a
>> new box pulled last Friday, so maybe it has those fixes. The above bug
>> was from HEAD from early August.
>>
>> If that does not help, I'll try "icap_persistent_connections off".
>>
>> I also read that one should have "bypass=1" to ensure squid
>> continues if it cannot talk to icap?
>>
>> icap_service service_req reqmod_precache bypass=1
>> icap://127.0.0.1:1344/squidclamav
>> icap_service service_resp respmod_precache bypass=1
>> icap://127.0.0.1:1344/squidclamav
>>
>> Thanks,
>>
>> Sean
>
>
> --
> Eliezer Croitoru
> https://www1.ngtech.co.il
> sip:ngt...@sip2sip.info
> IT consulting for Nonprofit organizations
> eliezer  ngtech.co.il


Re: [squid-users] Re: how do you deploy after building squid yourself?

2012-12-04 Thread Sean Boran
Yes I build on production.
But I have several boxes, build and test new release on one box, and
only build on production once that works Ok.

Sean


On 4 December 2012 15:05, carteriii  wrote:
> Are you building the source on your production server?
>
> If you're not building the source on the server, yet pointing to /usr/local,
> then it would seem that you'd have to copy up all the individual directories
> of files (e.g. bin/, sbin/, lib/, etc.) rather than being able to just tar
> up one directory and have everything you need.  Or do you configure your
> build with something like /usr/local/squid (rather than just /usr/local)
> such that everything is below that one directory?
>
> I prefer not to build on the production server because it makes a roll-back
> far more involved and problematic, having to rebuild an older version of
> source while being offline until something is ready.  I also prefer not to
> upload separate directories and/or files which makes it more likely that
> I'll miss something and file transfers never complete at the same time so it
> potentially creates a moment in time with a partial install of old & new.
>
> Would you please tell me a bit more about your situation? I'm having
> difficulty creating a self-contained build, but your suggestion gives me the
> idea to consider building to /usr/local/squid rather than just /usr/local.
>
>
>


Re: AW: [squid-users] essential ICAP service is suspended

2012-12-04 Thread Sean Boran
HI,

I'm now running HEAD (for the latest SSL bump fixes) at revision 8886 on a
new box pulled last Friday, so maybe it has those fixes. The above bug
was from HEAD from early August.

If that does not help, I'll try "icap_persistent_connections off".

I also read that one should have "bypass=1" to ensure squid
continues if it cannot talk to icap?

icap_service service_req reqmod_precache bypass=1
icap://127.0.0.1:1344/squidclamav
icap_service service_resp respmod_precache bypass=1
icap://127.0.0.1:1344/squidclamav

Thanks,

Sean


On 4 December 2012 15:33, Eliezer Croitoru  wrote:
> Hey Sean and Knop,
>
> There was a bug in squid's ICAP for a very long time, since 3.1, and a
> patch was tried and applied last week.
>
> If you had problems before you can try the last 3.3 revision 12500.
> I will test it myself in the next month\year.
>
> Eliezer
>
>
> On 12/4/2012 4:22 PM, Knop, Uwe wrote:
>>
>> Hi Sean,
>>
>> During the conversion from squid 3.1 to 3.2, we had the same error.
>> We solved the problem with the parameter "icap_persistent_connections off"
>>
>> Bye
>> UK
>>
>>> -Ursprüngliche Nachricht-
>>> Von: bo...@boran.ch [mailto:bo...@boran.ch] Im Auftrag von Sean Boran
>>> Gesendet: Dienstag, 4. Dezember 2012 10:49
>>> An: squid-users@squid-cache.org
>>> Betreff: [squid-users] essential ICAP service is suspended
>>>
>>> Hi,
>>>
>>> I've been running squid 3.3 live with SSL inspection for over a week;
>>> AV scanning with clamav+c-icap worked fine until now (about 500k GETs
>>> per day).
>>> Then users started seeing icap errors in their browser:
>>>
>>> In the squid logs:
>>> essential ICAP service is suspended: icap://127.0.0.1:1344/squidclamav
>>> [down,susp,fail11]
>>>
>>> c-icap was then tuned a bit:
>>> - increased the number of processes (now have 90)
>>> - set debug=0 (fewer logs: they were massive)
>>> - excluded large files and certain media types from scanning
>>>
>>> The system was not that heavily loaded (load about 0.3, icap getting
>>> maybe 20 requests/sec); the above measures did not seem to make much
>>> difference. Any suggestions for avoiding this?
>>>
>>> Also, when this happens, squid takes a few minutes to talk to icap again:
>>>   15:30:31 kid1| essential ICAP service is suspended:
>>> icap://127.0.0.1:1344/squidclamav [down,susp,fail11]
>>>   15:33:31 kid1| essential ICAP service is up:
>>> icap://127.0.0.1:1344/squidclamav [up]
>>>
>>> Is there a timeout variable to ask squid to talk to icap much quicker
>>> again?
>>>
>>> Squid config:
>>> icap_enable on
>>> icap_send_client_ip on
>>> icap_send_client_username on
>>> icap_client_username_encode off
>>> icap_client_username_header X-Authenticated-User
>>> icap_preview_enable on
>>> icap_preview_size 1024
>>> # scanned via squidclamav service via ICAP
>>> icap_service service_req reqmod_precache bypass=1 icap://127.0.0.1:1344/squidclamav
>>> adaptation_access service_req deny CONNECT
>>> adaptation_access service_req allow all
>>> icap_service service_resp respmod_precache bypass=0 icap://127.0.0.1:1344/squidclamav
>>> adaptation_access service_resp deny CONNECT
>>> adaptation_access service_resp allow all
>>>
>>> Sean
>
>
> --
> Eliezer Croitoru
> https://www1.ngtech.co.il
> sip:ngt...@sip2sip.info
> IT consulting for Nonprofit organizations
> eliezer  ngtech.co.il


[squid-users] essential ICAP service is suspended

2012-12-04 Thread Sean Boran
Hi,

I've been running squid 3.3 live with SSL inspection for over a week; AV
scanning with clamav+c-icap worked fine until now (about 500k GETs per day).
Then users started seeing icap errors in their browser:

In the squid logs:
essential ICAP service is suspended: icap://127.0.0.1:1344/squidclamav
[down,susp,fail11]

c-icap was then tuned a bit:
- increased the number of processes (now have 90)
- set debug=0 (fewer logs: they were massive)
- excluded large files and certain media types from scanning

The system was not that heavily loaded (load about 0.3, icap getting maybe 20
requests/sec); the above measures did not seem to make much difference.
Any suggestions for avoiding this?

Also, when this happens, squid takes a few minutes to talk to icap again:
 15:30:31 kid1| essential ICAP service is suspended:
icap://127.0.0.1:1344/squidclamav [down,susp,fail11]
 15:33:31 kid1| essential ICAP service is up:
icap://127.0.0.1:1344/squidclamav [up]

Is there a timeout variable to ask squid to talk to icap much quicker again?
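
A sketch of the directives that appear to govern this, with hypothetical
values; the 3-minute gap matches the documented 180-second default of
icap_service_revival_delay, and "fail11" suggests
icap_service_failure_limit (default 10) was exceeded:

icap_service_failure_limit 10 in 5 minutes
icap_service_revival_delay 30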

Squid config:
icap_enable on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_encode off
icap_client_username_header X-Authenticated-User
icap_preview_enable on
icap_preview_size 1024
# scanned via squidclamav service via ICAP
icap_service service_req reqmod_precache bypass=1
icap://127.0.0.1:1344/squidclamav
adaptation_access service_req deny CONNECT
adaptation_access service_req allow all
icap_service service_resp respmod_precache bypass=0
icap://127.0.0.1:1344/squidclamav
adaptation_access service_resp deny CONNECT
adaptation_access service_resp allow all

Sean


Re: [squid-users] Re: how do you deploy after building squid yourself?

2012-12-03 Thread Sean Boran
Hi,

On 12.04, I install prerequisite packages/libraries, but not any
squid package (it's cleaner); I just build and run a "make install"
(which puts squid in /usr/local).
I have my own custom /etc/init.d/squid that starts squid with the
config in /etc/squid/squid.conf, and there I specify log file locations,
the proxy user, etc.
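
For the self-contained layout discussed below, something like this should
work (a sketch; the prefix is a choice, not a requirement):

./configure --prefix=/usr/local/squid
make && make install
# everything lands under one directory, so it can ship as a single tarball
tar -C /usr/local -czf squid-build.tar.gz squid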

Sean


On 2 December 2012 23:08, Amos Jeffries  wrote:
>
> On 03.12.2012 09:55, carteriii wrote:
>>
>> I found where the user & group are being set, so I have more confidence in my
>> plan (detailed below), but would still appreciate some feedback.
>>
>> For future reference . . .
>>
>
> There is no need to build with special parameters. Squid will run fine on 
> Ubuntu with any ./configure build options. However as you noticed when 
> building with ones different to Ubuntu you need to do all the scripts, 
> directories and permissions setup yourself.
>
>
> If you want to integrate it with the Ubuntu package scripts then you need to 
> install that package first to get all the special setup, then custom-build 
> the sources with the Debian/Ubuntu build options:
> http://wiki.squid-cache.org/KnowledgeBase/Ubuntu#Compiling
>
> I recommend installing the "squid" package, since the "squid3" package has a 
> lot of patching to file paths adding that '3' which is not configurable in 
> the official Squid sources.
>
>
> Amos
>


Re: [squid-users] How to set /etc/logrotate.d/squid to have good sarg reports?

2012-11-29 Thread Sean Boran
Hi,

I also only do daily around 6h30, all from /etc/logrotate.d/squid:
/var/log/squid/*.log {
    daily
    prerotate
        sarg 2>&1 | logger
        /usr/lib/calamaris/calamaris-cron-script | logger
    endscript
    postrotate
        /etc/init.d/squid restart | logger
    endscript
}
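
As an aside (a sketch; the path is assumed): squid can reopen its own
logs via "squid -k rotate", which may be gentler in postrotate than a
full restart, with logfile_rotate in squid.conf controlling how many
generations squid itself keeps:
    /usr/local/squid/sbin/squid -k rotate | logger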

Sean

On 29 November 2012 14:26, Helmut Hullen  wrote:
> Hello, Bartosz,
>
> you wrote to "[squid-users] How to set /etc/logrotate.d/squid to have good 
> sarg reports?":
>
>>> My system runs the "sarg" reports at the end of the day, as a
>>> separate cronjob, and "logrotate" runs in the very early morning, as
>>> part of "cron.daily".
>>> Helmut
>
>> So how can you create weekly and monthly reports if you create every
>> day new log file?
>
> I create only daily reports.
>
> For quota etc. I use "squish".
>
>> And after rotating you are having only one day in log file, dont you?
>
> That's another problem; I've just seen that rotating doesn't work as
> expected ...
>
> Best regards!
> Helmut


Re: [squid-users] Allowing skype through on an ssl bumped proxy

2012-11-28 Thread Sean Boran
Thanks for the various suggestions.
- Running HEAD from August, I would have thought I'm running
(almost) the newest 3.3; server bumping is in there.
- http://wiki.squid-cache.org/ConfigExamples/Chat/Skype does not help;
it basically says allow 443, and explains how to allow HTTP to
all numeric addresses. I don't want to disable bumping for all numeric
addresses.
- If I run HEAD, am I not allowed to report issues here? :-)

I'll pull the latest HEAD, recompile, and try that.

Sean


On 28 November 2012 00:03, Amos Jeffries  wrote:
> On 28.11.2012 11:32, Marcus Kool wrote:
>>
>> I have seen this issue on 3.1.x and cannot find anything in the Changelog
>> that indicates that this issue is resolved in 3.3.
>>
>> What I observed in 3.1 is that sslbump assumes that all
>> CONNECTs are used for SSL-wrapped HTTP traffic and lets
>> all applications that use port 443 for other protocols hang
>> when the SSL handshake fails.
>>
>> Marcus
>>
>
> "How evil can it be? oh. It's interception. Well then."
>
> In 3.1 and 3.2, as you say, the situation is all-or-nothing. There are
> also not going to be any more feature changes to them.
>
> 3.3 server-first bumping is a large step in the direction of proper
> transparent interception for CONNECT. With server-bump failures it is
> possible to take the bumping out of the transaction and relay the traffic as
> if bumping was not being performed at all.
>  I'm not sure exactly where the testing and operational status of that
> particular failover handling is now, but it was one of several design goals
> behind server-bump.
>
> So, with my maintainer hat on... If you need HTTPS interception please skip
> straight to 3.3. And please report your issues with that one to *bugzilla*
> or *squid-dev*.
>
>
>
> ... back to the question at hand though...
>
>
>
>> On 11/27/2012 11:48 AM, Eliezer Croitoru wrote:
>>>
>>> If it's a linux machine, try using firewall rules to block all traffic
>>> with TCP-RESET except dst ports 80 and 443.
>>>
>>> This will close some of the things for you.
>>> But 3.head 1408 is kind of old;
>>> you can try the latest 3.3.0.1 beta, which has a pretty good chance of
>>> solving it with the new features.
>>>
>>> Regards,
>>> Eliezer
>>>
>>>
>>> On 11/27/2012 3:19 PM, Sean Boran wrote:
>>>>
>>>> Typically one wishes to block Skype, but I'd like to enable it :-)
>>>>
>>>> Looking at the access.log, the following domains were excluded from ssl
>>>> bump:
>>>> .skype.com
>>>> .skypeassets.com
>>>> skype.tt.omtrdc.net
>
>
> Please read: http://wiki.squid-cache.org/ConfigExamples/Chat/Skype
>
> The ACLs should work equally well for ssl_bump_access as for http_access.
>
>
> Amos


[squid-users] Allowing skype through on an ssl bumped proxy

2012-11-27 Thread Sean Boran
Typically one wishes to block Skype, but I'd like to enable it :-)

Looking at the access.log, the following domains were excluded from ssl bump:
.skype.com
.skypeassets.com
skype.tt.omtrdc.net
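
In config terms that exclusion presumably looks something like this
sketch (the ACL name is hypothetical):
acl nobump dstdomain .skype.com .skypeassets.com skype.tt.omtrdc.net
ssl_bump none nobump
ssl_bump server-first all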

But skype still tries for ages to log in and never succeeds.
In skype, despite having configured a proxy, it still tries to make lots of
direct connections too.
I did find a skype admin guide, but nothing useful on how to debug
that opaque tool's traffic:
https://support.skype.com/resources/sites/SKYPE/content/live/DOCUMENTS/0/DO5/en_US/skype-it-administrators-guide.pdf

Running 3.HEAD-20120814-r12282.

Any tips?

Sean


[squid-users] browser authentication: for unknown users (or: difference between access denied pages and browser auth dialog)

2012-10-09 Thread Sean Boran
Hi,

I'm having fun trying to get the browser popup dialog box for entering
authentication details; perhaps someone could explain how the
squid/browser interaction works for denies: when is it a page, and when a
dialog?

Details: Squid is setup to:
1) Allow access from certain IPs with no authentication
2) Authenticate from active directory (using kerberos, with ntlm fallback)
3) And finally ldap.

1) works fine, as does 2) from a Windows machine in the domain
(kerberos/NTLM does its job).
The ldap mechanism on its own also works fine.

3) When (windows) machines not in the domain connect, they are *not*
prompted for (LDAP) credentials; the "Cache Access Denied" page appears.
(This happens in all browsers.)

But squid is sending headers to tell the browser to authenticate:
  HTTP/1.1 407 Proxy Authentication Required
  Server: squid/3.HEAD-20120814-r12282
  X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
  Proxy-Authenticate: Negotiate
  Proxy-Authenticate: Basic realm="Proxy LDAP - Enter credentials"

The browser replies with NTLM:
  Proxy-Authorization: Negotiate
TlRMTVNTUAABB4IIogAFASgKDw==
  2012/10/09 10:20:20| negotiate_wrapper: received type 1 NTLM token

And squid is unhappy:
  HTTP/1.1 407 Proxy Authentication Required
  X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0

Presumably the browser first tries with the local windows logon
credentials, but then it should pop up a dialog and request a
user/password? Hmm, maybe the problem is squid not sending
"Proxy-Authenticate:" in the second reply?

Summary of squid.conf:
auth_param negotiate program
/usr/local/squid/libexec/negotiate_wrapper_auth 
auth_param basic program /usr/local/squid/libexec/basic_ldap_auth  ..
external_acl_type memberof %LOGIN
/usr/local/squid/libexec/ext_ldap_group_acl ..
acl ldapgroups external memberof "/etc/squid/ldapgroups.txt" 

acl our_networks src "/etc/squid/our_networks.list"
http_access allow our_networks
http_access deny !ldapgroups   (also tried "http_access allow
ldapgroups" and "http_access deny !ldapgroups all")
http_access allow localhost
http_access deny all

I did find one related thread:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Squid-LDAP-re-challenges-browser-on-http-access-deny-td1041726.html
but there the focus was on _not_ having a popup :-)

Also read http://www.squid-cache.org/Doc/config/http_access/
After reading http://wiki.squid-cache.org/Features/Authentication, also tried
  http_access deny !ldapgroups all
  http_access allow all

And tried just authentication with no authorisation:
  acl mustlogin proxy_auth REQUIRED
  http_access deny !mustlogin
  http_access allow localnetworks
  http_access deny all

In all cases, the browser does not want to pop up an auth dialog :-(

Thanks in advance,

Sean Boran


Re: [squid-users] Error downloading squid 3.2.2

2012-10-08 Thread Sean Boran
Worked for me right now:

Connecting to www.squid-cache.org
(www.squid-cache.org)|198.186.193.234|:80... connected.
Saving to: `squid-3.2.2.tar.gz'

Sean


On 7 October 2012 22:00, Jose-Marcio Martins da Cruz
 wrote:
>
> H...
>
> $ /usr/sfw/bin/wget
> http://www.squid-cache.org/Versions/v3/3.2/squid-3.2.2.tar.gz
> --21:57:56--  http://www.squid-cache.org/Versions/v3/3.2/squid-3.2.2.tar.gz
>=> `squid-3.2.2.tar.gz'
> Resolving www.squid-cache.org... 198.186.193.234, 209.169.10.131
> Connecting to www.squid-cache.org|198.186.193.234|:80... connected.
> HTTP request sent, awaiting response... 404 Not Found
> 21:57:56 ERROR 404: Not Found.
>
>
>
> --


[squid-users] [squid users] ext_ldap_group_acl: nested groups

2012-10-08 Thread Sean Boran
Hi,

ext_ldap_group_acl is working to authorize users, i.e. check that
authenticated users belong to a specific LDAP group.

However, in the AD backend there are groups within groups, and this script
seems to only check the first level.
Is there a way of authorizing against a nested AD group on linux?
It seems like more of an openldap issue?

There is ext_ad_group_acl, but that's only for Windows servers.
Maybe one needs to do an SQL query based on ext_sql_session_acl?
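
One avenue might be AD's server-side nested-membership matching rule,
LDAP_MATCHING_RULE_IN_CHAIN (an untested sketch; the DNs are
hypothetical). It can be tried with ldapsearch first, and the same filter
might then be usable in the helper's -f search filter:

ldapsearch -x -D 'cn=BINDUSER,ou=SOMETHING,dc=mydomain,dc=net' -W \
  -h ldap.mydomain.net -b 'dc=mydomain,dc=net' \
  '(&(sAMAccountName=USERTOLOOKFOR)(memberOf:1.2.840.113556.1.4.1941:=cn=PROXYGROUP,ou=groups,dc=mydomain,dc=net))'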

Thanks in advance,

Sean Boran


[squid-users] The importance of the proxy name when using kerberos authentication

2012-10-02 Thread Sean Boran
Hi,

This is not a question, but information I wanted to share :-)

Having got kerberos authentication working a few weeks ago with squid
on a test box, I came back to test again and could not get kerberos to
work: the browser(s) kept sending NTLM to squid (resulting in the
ominous 'BH received type 1 NTLM token' log entries).

Now, the proxy in the browser had just been defined by its IP address;
changing that to the FQDN suddenly allowed kerberos to work (klist
showed a ticket for HTTP/FQDN), and squid was once again able to
identify users via Kerberos.

So be careful when defining proxy names in the browser or proxypac!
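
A quick client-side check, as a sketch (the FQDN is hypothetical): on a
domain-joined Windows client the service ticket can be requested and
listed explicitly with
klist get HTTP/proxy.example.com
If the browser was given an IP address instead of the FQDN, no such
HTTP/ ticket appears and the browser falls back to NTLM.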

Sean Boran


[squid-users] Fallback from NTLM to LDAP authentication

2012-10-02 Thread Sean Boran
Hi,

For (windows) machines in the domain, NTLM can be used, as can LDAP, to
authenticate my users.

Next would be NTLM falling back to LDAP, to allow access for Linux users
and Windows machines not in the domain:

auth_param ntlm program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 10 startup=1  idle=5
auth_param basic program /usr/local/squid/libexec/basic_ldap_auth -d -R -b
"dc=mydomain,dc=net" -D  accou...@mydomain.net -W /etc/squid/ldappass.txt -f
sAMAccountName=%s -h ldap.mydomain.net
auth_param basic realm Proxy LDAP - Enter credentials


If machines are not in the domain, LDAP on its own will work, but not the
fallback from NTLM to LDAP.
In the logs there are entries like the following, which would seem to
indicate that it's not falling over to ldap correctly:

Proxy-Authenticate: Basic realm="Proxy LDAP - Enter credentials"
Proxy-Authorization: NTLM DUMMYSTUFFAFASgKDw==
Proxy-Authenticate: NTLM
DUMMYSTUFFIABAAOAHYAcAB0AHQALgBjAGgAAwAoAHMAaQBzAHQAZwBkAGIAbwBzAGUAMQAyAC4AdgBwAHQAdAAuAGMAaAAA

I've been trying with several different browsers, and they behave each a
little differently.

Should it be possible to do ntlm and then fall back to ldap? Is there
perhaps a configuration option I've missed?

Thanks,

Sean


Re: [squid-users] Squid with LDAP digest error

2012-09-14 Thread Sean Boran
Should the input not be user + password separated by a space?
   echo "usuario1 password" |

Sean


On 13 September 2012 14:42, Bijoy Lobo  wrote:
> Hello all,
>
> I am trying to make Squid + LDAP work with MD5 digest. I've tried this command,
>
> echo '"usuario1":"Squid proxy-caching web server"' |
> /usr/lib/squid3/digest_ldap_auth -b "ou=people,dc=paladion,dc=com" -u
> "uid=%s" -A "userPassword" -D "cn=admin,dc=test,dc=com" -w "test@123"
> -e -v 3 -h 127.0.0.1
>
> output is
> ERR No such user
>
>
> LDAP Search Output
>
> root@Proxy:~# ldapsearch -xLLL | grep usuario
> dn: uid=usuario1,ou=people,dc=test,dc=com
> uid: usuario1
>
> --
> Thanks and Regards
> Bijoy Lobo
> Paladion Networks


Re: [squid-users] squid_kerb_auth for AD auth

2012-09-12 Thread Sean Boran
Hi,

Thanks. I actually spent time yesterday building a new machine from
scratch, with a fresh build and associated components, because the
kerberos behaviour (keytab) did not seem right. My test box had been
used for several squid test versions, and thus may have had a mixture
of binaries...

Anyway, after the fresh install, kerberos "just worked"!

- The logging to cache.log by the auth processes is as expected too.
- tested with IE and Chrome on a Windows machine in the domain,
kerberos did its job. Usernames are visible in the access log for
example.

Both of the following worked (for those who search this thread later :-)

   auth_param negotiate program
/usr/local/squid/libexec/negotiate_wrapper_auth -i --kerberos
/usr/local/squid/libexec/negotiate_kerberos_auth -s GSS_C_NO_NAME
--ntlm /usr/bin/ntlm_auth  --helper-protocol=squid-2.5-ntlmssp
--domain=MYDOMAIN

  auth_param negotiate program
/usr/local/squid/libexec/negotiate_kerberos_auth -s GSS_C_NO_NAME


On a windows machine *not* in the domain, access is denied (as
expected), but the user is not prompted for a password.
So I think ldap is needed too?

Tested ldap alone, as follows. Works
auth_param basic program /usr/local/squid/libexec/basic_ldap_auth -d
-R -b "dc=mydomain,dc=net" -D myacco...@mydomain.net -W
/etc/squid/ldappass.txt -f sAMAccountName=%s -h ldap.mydomain.net -p
3268

Then I re-enabled kerberos, with ldap after it.
Kerberos works as before, but on the test PC not in the domain,
entering the username/pw in the browser popup never allows access. I
think kerberos is causing the popup; the ldap realm, for example, is
not shown.

All the doc I found online just indicated adding one after the other.
The auth_param doc (http://www.squid-cache.org/Doc/config/auth_param/)
does not explain how the hand-off between the authentication methods
works.

Any suggestions please?


Sean


---
Sep 11, 2012; 12:14am   Markus Moeller  wrote:
Hi Sean,

  When I said client I meant the Windows client (or do you also have Unix
clients?). On Windows you can install a tool called kerbtray which shows
you the tickets you have. If you don't see any ticket for HTTP/ you
need to use a capture tool like wireshark and look at the traffic on port 88
(the kerberos authentication port). You should see a TGS request from the
client to AD and a TGS reply from AD with either the ticket or an error
message. Let me know what error message you get, as I assume you will have
one.

Markus


Re: [squid-users] squid_kerb_auth for AD auth

2012-09-10 Thread Sean Boran
Markus,

" If you see NTLM tokens in squid_kerb_auth then either you have not
created a keytab for squid--"

   Running on ubuntu, I have the following in the upstart config file
/etc/init/squid.conf
   env KRB5_KTNAME=/etc/krb5.keytab
   And put it into /etc/environment so that the proxy user always has
this setting.
   And file permissions allow squid to read it:
   -rw-r----- 1 root proxy 545 Sep  6 10:15 /etc/krb5.keytab

  The keytab was generated as follows:
net ads keytab CREATE -U myuser
net ads keytab add -U myuser HTTP
chgrp proxy /etc/krb5.keytab
chmod 640 /etc/krb5.keytab

Running ktutil as the proxy user ("rkt /etc/krb5.keytab") shows a list of
9 entries with variations of the proxy hostname. So the proxy user can
read the keytab, and sees the same entries as root.

"... or the client can not get a HTTP/ ticket from AD..."
How can I test that on the command line? Trying "kinit -V
HTTP/MYDOMAIN.NET" as the proxy user gives the error "not found in
Kerberos database while getting initial credentials", but I don't
understand what I'm doing with that command :-)
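
A more direct command-line check (a sketch; kinit/kvno/klist are standard
MIT Kerberos tools, and proxy.mydomain.net stands in for the proxy's FQDN):

   kinit someuser@MYDOMAIN.NET                 # get a TGT as an ordinary user
   kvno HTTP/proxy.mydomain.net@MYDOMAIN.NET   # ask AD for the HTTP/ service ticket
   klist                                       # the HTTP/ ticket should now be listed

If kvno fails with "Server not found in Kerberos database", the HTTP/
principal is missing or wrongly registered in AD, and browsers will fall
back to NTLM exactly as seen above.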


There was no port 88 traffic on the client during a test right now;
maybe the kerberos part is cached.
Using the nice "follow tcp stream" in wireshark, the headers going
back and forth are as follows, starting with the client:
GET http://mysite.ch/foo.html HTTP/1.1
> HTTP/1.1 407 Proxy Authentication Required
> X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
> Proxy-Authenticate: Negotiate
GET http://mysite.ch/foo.html HTTP/1.1
Proxy-Authorization: Negotiate
TlRMTVNTUAABl4II4gAGAbEdDw==
> HTTP/1.1 407 Proxy Authentication Required
> X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
> Proxy-Authenticate: Negotiate

Wireshark is able to interpret that "Proxy-Authorization: Negotiate"
line as "NTLM Secure Service Provider", with a list of flags indicating
degrees of NTLM supported, and "Version 6.1 (Build 7601); NTLM Current
Revision 15".
IE and Chrome sent exactly the same Proxy-Authorization reply.
The above negotiate string is also in the cache.log on the squid side:
 squid_kerb_auth: DEBUG: Got 'YR
TlRMTVNTUAABl4II4gAGAbEdDw==' from squid
(length: 59).
WARNING: received type 1 NTLM token


How can I debug squid_kerb_auth to see what config it is reading, what
exactly it is trying to do, etc?
Hmm, looking at the source files
helpers/negotiate_auth/wrapper/negotiate_wrapper.cc and
helpers/negotiate_auth/kerberos/negotiate_kerberos_auth.cc, I realise
that I was calling squid_kerb_auth instead of negotiate_kerberos_auth (a
squid_kerb_auth file was available, probably due to an older squid
compilation).
=> So, at least one error fixed: the squid.conf line is now:
auth_param negotiate program
/usr/local/squid/libexec/negotiate_wrapper_auth -d --kerberos
/usr/local/squid/libexec/negotiate_kerberos_auth -d -i -s
GSS_C_NO_NAME --ntlm /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp --domain=MYDOMAIN

This gives a little bit more info in cache.log, although
authentication still does not work:
grep negotiate_kerberos_auth cache.log
 negotiate_wrapper: Kerberos command:
/usr/local/squid/libexec/negotiate_kerberos_auth -d -i -s
GSS_C_NO_NAME
grep negotiate_kerberos_auth cache.log
 negotiate_kerberos_auth.cc(271): pid=5725 :2012/09/10 11:04:51|
negotiate_kerberos_auth: INFO: Starting version 3.0.4sq

So at least the correct wrapper is being started now, and some debug
messages are arriving.

Aside: in negotiate_kerberos_auth.cc the string "squid_kerb_auth"
appears in the usage text: minor doc bug?

Looking at negotiate_kerberos_auth.cc,  the "INFO" message above is
printed out at:
debug((char *) "%s| %s: INFO: Starting version %s\n", LogTime(),
PROGRAM, SQUID_KERB_AUTH_VERSION);
but no messages are logged from the child processes in the while(1) loop?

Thanks in advance,

Sean Boran



>>>>>>>>>>
Hi Sean,

   If you see NTLM tokens in squid_kerb_auth then either you have not
created a keytab for squid or the client can not get a HTTP/ ticket
from AD.  Please capture traffic on port 88 for kerberos traffic on the
client and 3128 for squid traffic.

Markus



> For Windows systems in a domain, what is the typical strategy, would
> one usually
> A. Authenticate via Kerberos (only IE browsers, or also chrome/FF?)
> B. else authenticate via NTLM (IE only?)
> C. else use ldap (all other browsers and Linux, or Windows PCs not in
> the domain).
>
> Is it right to say that if kerberos is enabled, but not basic/ldap,
> then non-IE browsers cannot log in?
> Or will kerberos work for all browsers in a Windows system in the domain?
>
> Or have I completely misunderstood? :-)
>
> Starting off with C) squid_ldap_auth, which works fine, the next step
> is kerberos.
>
>

[squid-users] squid_kerb_auth for AD auth

2012-09-07 Thread Sean Boran
For Windows systems in a domain, what is the typical strategy, would
one usually
A. Authenticate via Kerberos (only IE browsers, or also chrome/FF?)
B. else authenticate via NTLM (IE only?)
C. else use ldap (all other browsers and Linux, or Windows PCs not in
the domain).

Is it right to say that if kerberos is enabled, but not basic/ldap,
then non-IE browsers cannot log in?
Or will kerberos work for all browsers in a Windows system in the domain?

Or have I completely misunderstood? :-)

Starting off with C) squid_ldap_auth, which works fine, the next step
is kerberos.

For kerberos, my main reading references are:
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Kerberos
http://klaubert.wordpress.com/2008/01/09/squid-kerberos-authentication-and-ldap-authorization-in-active-directory/
http://wiki.bitbinary.com/index.php/Active_Directory_Integrated_Squid_Proxy

Running squid/3.HEAD-20120814-r12282.

At the linux level, kerberos and samba are installed and configured, the
system is in the domain (wbinfo -t), and "kinit -V username" works
fine. NTLM auth on the command line looks ok too (/usr/bin/ntlm_auth
--domain=MYDOMAIN --username=myuser)
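
To confirm the proxy side has the right service principals, the keytab can
also be listed directly (a sketch; klist ships with the krb5 utilities):

   klist -k /etc/krb5.keytab    # should show HTTP/<proxy fqdn>@<REALM> entries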

In squid, kerberos is configured as follows:
auth_param negotiate program /usr/local/squid/libexec/squid_kerb_auth
-d -i -s GSS_C_NO_NAME
auth_param negotiate children 10 startup=1  idle=5
auth_param negotiate keep_alive on
acl restricted proxy_auth REQUIRED


After restart squid, log entries look good:
Sep  7 09:10:31 proxy squid[26997]: helperOpenServers: Starting 1/10
'squid_kerb_auth' processes

Trying to connect with IE causes a login box to pop up in the browser
and squid to log:
ERROR: Negotiate Authentication validating user. Error returned 'BH
received type 1 NTLM token'

in cache.log:
2012/09/07 09:22:53.421| ACL::checklistMatches: checking 'restricted'
2012/09/07 09:22:53.421| Acl.cc(65) AuthenticateAcl: returning 3
sending authentication challenge.

I can give a valid or invalid username/password to the popup box,
but no access is granted and I don't see any usernames or
squid_kerb_auth lines in the cache.log.

Question: how can one debug in detail what squid_kerb_auth is doing?
The "-d" option does not seem to show much. (debug_options ALL,1 83,5
23,2 26,9 28,9 33,4 84,3: any better suggestions?)

Doing some "tcpdump -A" tracing:
- browser sends: GET http://google.com/ HTTP/1.1
-proxy answers
HTTP/1.1 407 Proxy Authentication Required
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
Proxy-Authenticate: Negotiate
- browser send back:
Proxy-Authorization: Negotiate
TlRMTVNTUAABl4II4gAGAbEdDw==
-proxy answers
HTTP/1.1 407 Proxy Authentication Required
X-Squid-Error: ERR_CACHE_ACCESS_DENIED 0
Proxy-Authenticate: Negotiate

Also tried kerberos with NTLM; in this case access is always denied,
no popup. Tcpdump shows similar handshaking.
auth_param negotiate program
/usr/local/squid/libexec/negotiate_wrapper_auth -d --ntlm
/usr/bin/ntlm_auth  --helper-protocol=squid-2.5-ntlmssp
--domain=MYDOMAIN --kerberos /usr/local/squid/libexec/squid_kerb_auth
-d -i -s GSS_C_NO_NAME
-

Thanks in advance for any tips :-)


Re: [squid-users] How to write an acl that forces authentication only from specific networks?

2012-09-05 Thread Sean Boran
Thanks, "http_access allow client1 password" works.
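
For the archives, the resulting order of rules looks like this (a sketch,
combining the lines from the post quoted below):

   auth_param basic program /usr/local/squid/libexec/basic_ncsa_auth /etc/squid/passwd
   acl password proxy_auth REQUIRED
   acl client1 src 10.90.195.47/32

   http_access allow client1 password   # only this source is challenged
   http_access allow our_networks
   http_access allow localhost
   http_access deny all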

Sean

On 5 September 2012 19:46, Yanier Salazar Sanchez
 wrote:
>
>
>
>
> -Original Message-
> From: bo...@boran.ch [mailto:bo...@boran.ch] On Behalf Of Sean Boran
> Sent: Wednesday, September 05, 2012 9:41 AM
> To: squid-users@squid-cache.org
> Subject: [squid-users] How to write an acl that forces authentication only
> from specific networks?
>
> On my internal network, no user authentication is currently used, so the acl
> is like acl our_networks src "/etc/squid/our_networks.list"
> http_access allow our_networks
> http_access allow localhost
> http_access deny all
>
> Now I'd like to force authentication only from one IP, 10.90.195.47, and
> tried:
> auth_param basic program /usr/local/squid/libexec/basic_ncsa_auth
> /etc/squid/passwd
>
> acl password proxy_auth REQUIRED
> acl client1 src 10.90.195.47/32
>
> I added the following before "http_access allow our_networks":
> http_access allow password src client1
> change your http_access to this: http_access allow client1 password
>
> but that http_access line is wrong, it kills squid :-)
>
> Is there a way of doing this?
>
> Thanks in advance,
> Sean


[squid-users] How to write an acl that forces authentication only from specific networks?

2012-09-05 Thread Sean Boran
On my internal network, no user authentication is currently used, so the acl
is like
acl our_networks src "/etc/squid/our_networks.list"
http_access allow our_networks
http_access allow localhost
http_access deny all

Now I'd like to force authentication only from one IP, 10.90.195.47, and
tried:
auth_param basic program /usr/local/squid/libexec/basic_ncsa_auth
/etc/squid/passwd

acl password proxy_auth REQUIRED
acl client1 src 10.90.195.47/32

I added the following before "http_access allow our_networks":
http_access allow password src client1
but that http_access line is wrong, it kills squid :-)

Is there a way of doing this?

Thanks in advance,
Sean


Re: Fwd: [squid-users] stopping sslbump to domains with invalid or unsigned certs

2011-12-22 Thread Sean Boran
Thanks, after changing ERR_SECURE_CONNECT_FAIL and restarting squid,
I was still seeing the original text; then I noticed that it was actually
reading the /usr/local/squid/share/errors/en/ERR_SECURE_CONNECT_FAIL
messages.
How are the lang specific files regenerated from the template?

Opened a bug to track this: http://bugs.squid-cache.org/show_bug.cgi?id=3457

Sean


On 22 December 2011 14:38, Christos Tsantilas  wrote:
> On 12/22/2011 11:01 AM, Sean Boran wrote:
>> Thanks.
>> I also found a page that explains several things about sslbump that I
>> did not understand yet, e.g. how to ignore domain errors:
>> http://wiki.squid-cache.org/Features/SslBump
>>
>> Error messages:
>> /usr/local/squid/share/errors/templates/error-details.txt is very
>> interesting indeed, thanks.
>> Its make me wonder a bit though.
>> When visiting a site with an invalid cert one sees:
>> -- snip--
>> The following error was encountered while trying to retrieve the URL:
>> https://wiki.squid-cache.org/
>>     Failed to establish a secure connection to 77.93.254.178
>> The system returned: (71) Protocol error
>> This proxy and the remote host failed to negotiate a mutually
>> acceptable security settings for handling your request. It is possible
>> that the remote host does not support secure connections, or the proxy
>> is not satisfied with the host security credentials.
>> -- snip--
>>
>> i.e. the specific cert problems are not transmitted to the end user
>> (expired cert, self signed), nor the cert contents
>> "/C=--/ST=SomeState/L=SomeCity/O=SomeOrganization/OU=SomeOrganizationalUnit/CN=localhost.localdomain/emailAddress=root@localhost.localdomain".
>> Is this something that can be improved already, or does one have to
>> wait for the this first:
>> http://wiki.squid-cache.org/Features/BumpSslServerFirst  ?
>
>
> I see. The ERR_SECURE_CONNECT_FAIL error page does not provide the SSL
> error details. It returns only a system error...
> You may want to open a bug report for this problem.
>
> However it is not so difficult to fix it on your own.
> You can add the following two lines in your ERR_SECURE_CONNECT_FAIL
> error templates:
>  Error Name: %x 
> Error details: %D 
>
> The "%x" formatting code is replaced with the SSL error name and the "%D"
> is replaced with the error details.
>
> The error details for an SSL error can be customized using the
> "error-details.txt" templates.
>
>
>>
>>
>> MimicSslServerCert: I'll followup separately on that, thanks.
>>
>> Regards,
>>
>>
>> Sean
>>
>>
>>
>> On 21 December 2011 18:02, Christos Tsantilas  wrote:
>>>
>>> On 12/20/2011 04:34 PM, Sean Boran wrote:
>>>> Hi,
>>>>
>>>> sslbump allows me to intercept ssl connections and run an AV check on
>>>> them.
>>>> It generates certs for the target domain (via sslcrtd), so that the
>>>> user's browser sees a server cert signed by the proxy.
>>>>
>>>> If the target domain has a certificate that is expired, or is not
>>>> signed by a recognised CA, it's important that the lack of trust is
>>>> communicated to the end user.
>>>>
>>>> Example, on connecting direct (not via a proxy) to
>>>> https://wiki.squid-cache.org the certificate presented expired 2
>>>> years ago and is not signed by a known CA.
>>>> Next, on connecting via a sslbump proxy (v3.2.0.14), the proxy creates
>>>> a valid cert for wiki.squid-cache.org and in the user's browser it
>>>> looks like wiki.squid-cache.org has a valid cert signed by the proxy.
>>>>
>>>> So my question is:
>>>> What ssl_bump settings would allow the proxy to handle such
>>>> destinations with expired or non trusted sites by, for example:
>>>> a) Not bumping the connection but piping it through to the user
>>>> unchanged, so the user browser notices the invalid certs?
>>>
>>> It is not possible yet. This feature is described here:
>>>  http://wiki.squid-cache.org/Features/MimicSslServerCert
>>> But it is not available at this time in squid. If you are interested in
>>> this feature please contact Alex Rousskov and Measurement Factory.
>>>
>>>
>>>> b) Refuses the connection with a message to the user, if the
>>>> destination is not on an allowed ACL of exceptions.
>>>
>>> Yes it is possible.
>>>
>>>>
>>>> Looking at squid.conf, there is sslproxy_flags

Fwd: [squid-users] stopping sslbump to domains with invalid or unsigned certs

2011-12-22 Thread Sean Boran
Thanks.
I also found a page that explains several things about sslbump that I
did not understand yet, e.g. how to ignore domain errors:
http://wiki.squid-cache.org/Features/SslBump

Error messages:
/usr/local/squid/share/errors/templates/error-details.txt is very
interesting indeed, thanks.
It makes me wonder a bit though.
When visiting a site with an invalid cert one sees:
-- snip--
The following error was encountered while trying to retrieve the URL:
https://wiki.squid-cache.org/
    Failed to establish a secure connection to 77.93.254.178
The system returned: (71) Protocol error
This proxy and the remote host failed to negotiate a mutually
acceptable security settings for handling your request. It is possible
that the remote host does not support secure connections, or the proxy
is not satisfied with the host security credentials.
-- snip--

i.e. the specific cert problems are not transmitted to the end user
(expired cert, self signed), nor the cert contents
"/C=--/ST=SomeState/L=SomeCity/O=SomeOrganization/OU=SomeOrganizationalUnit/CN=localhost.localdomain/emailAddress=root@localhost.localdomain".
Is this something that can be improved already, or does one have to
wait for the this first:
http://wiki.squid-cache.org/Features/BumpSslServerFirst  ?


MimicSslServerCert: I'll followup separately on that, thanks.

Regards,


Sean



On 21 December 2011 18:02, Christos Tsantilas  wrote:
>
> On 12/20/2011 04:34 PM, Sean Boran wrote:
> > Hi,
> >
> > sslbump allows me to intercept ssl connections and run an AV check on them.
> > It generates certs for the target domain (via sslcrtd), so that the
> > user's browser sees a server cert signed by the proxy.
> >
> > If the target domain has a certificate that is expired, or is not
> > signed by a recognised CA, it's important that the lack of trust is
> > communicated to the end user.
> >
> > Example, on connecting direct (not via a proxy) to
> > https://wiki.squid-cache.org the certificate presented expired 2
> > years ago and is not signed by a known CA.
> > Next, on connecting via a sslbump proxy (v3.2.0.14), the proxy creates
> > a valid cert for wiki.squid-cache.org and in the user's browser it
> > looks like wiki.squid-cache.org has a valid cert signed by the proxy.
> >
> > So my question is:
> > What ssl_bump settings would allow the proxy to handle such
> > destinations with expired or non trusted sites by, for example:
> > a) Not bumping the connection but piping it through to the user
> > unchanged, so the user browser notices the invalid certs?
>
> It is not possible yet. This feature is described here:
>  http://wiki.squid-cache.org/Features/MimicSslServerCert
> But it is not available at this time in squid. If you are interested in
> this feature please contact Alex Rousskov and Measurement Factory.
>
>
> > b) Refuses the connection with a message to the user, if the
> > destination is not on an allowed ACL of exceptions.
>
> Yes it is possible.
>
> >
> > Looking at squid.conf, there is sslproxy_flags, sslproxy_cert_error
> > #  TAG: sslproxy_flags
> > #           DONT_VERIFY_PEER    Accept certificates that fail verification.
> > #           NO_DEFAULT_CA       Don't use the default CA list built in
> >  to OpenSSL.
> > #  TAG: sslproxy_cert_error
> > #       Use this ACL to bypass server certificate validation errors.
> >
> > So, the following config would then implement scenario b) above?
> >
> > # Verify destinations: yes, but allow exceptions
> > sslproxy_flags DONT_VERIFY_PEER
> > #sslproxy_flags none
> > # ignore certs for certain sites
> > acl TrustedName url_regex ^https://badcerts.example.com/
> > sslproxy_cert_error allow TrustedName
> > sslproxy_cert_error deny all
>
> First comment out the sslproxy_flags configuration parameter. Then you
> can use ssl_error acls to define which ssl errors allowed. An example
> configuration which allows only the self signed certificates is the
> following:
>
> # comment out the sslproxy_flags
> #sslproxy_flags DONT_VERIFY_PEER
> acl SSLERR ssl_error X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT
> X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN
> sslproxy_cert_error allow SSLERR
> sslproxy_cert_error deny all
>
> A good source of the available errors in squid is your:
>  "/your/squid/install/path//share/errors/templates/error-details.txt"
>
> Unfortunately is not well documented in squid.conf...
>
>
> Regards,
>   Christos
>
>
> >
> > ==> But then, why does it not throw an error when connecting to
> > https://wiki.squid-cache.org ?
> >
> > Next I though it might be an idea to delete any cached certs and

Re: [squid-users] stopping sslbump to domains with invalid or unsigned certs

2011-12-21 Thread Sean Boran
On 21 December 2011 12:49, Amos Jeffries  wrote:
> On 21/12/2011 10:16 p.m., Sean Boran wrote:
>>
>> So, as a test I set SSL_FLAG_DONT_VERIFY_PEER, then modified
>> ssl/support.cc sslCreateClientContext() to ignore
>> SSL_FLAG_DONT_VERIFY_PEER so that it would always verify.
>>
>> Then my three test cases work just fine. The unsigned cert is refused
>> (although with a not very precise error message to the user), and the
>> two valid ones work.
>
>
> So you hard-code Squid to behave as if the flag was not set. How is that
> different to not setting it in the first place?

Well there does not seem to be a flag that says "do verify peer".



> sslproxy_* directives configure what Squid does on SSL connections to
> servers.
>
>
>>
>> Another quick hack:
>> ssl_verify_cb(): disabled domain checking (so that amazon.com worked
>> with its insufficient www.amazon.com cert). (Setting
>> sslflags=DONT_VERIFY_DOMAIN on the http_port config line did not
>> work.)
>
>
> Makes sense, I don't believe client certificates contain a domain name.
> Although they may include the TCP level details of the client as
> equivalents.
>
> http_port sslflags= are configuring what to verify on received client
> certificates.

Ah I see.


> SSL has no functionality for the server to tell the remote client what not
> to verify. That would defeat the entire purpose of certificates.

Yes, but we need squid options to tell sslbump how strict it should be
about checking (policy), and what action it should take when policy is
breached (refuse connection, inform, continue etc..)


>> Obviously the above hacking is not the proper solution though, should
>> I move this conversation to the squid-dev list? What would you suggest
>> as the next step Amos?
>
>
> Yes please move this to squid-dev. Christos and Alex who authored the bump
> feature do not often read this mailing list regularly.

OK.
Where can I find information on the work planned by The Measurement
Factory? Maybe I can dig up some sponsorship for them if it's aligned
with my site-specific needs.

Sean


Re: [squid-users] stopping sslbump to domains with invalid or unsigned certs

2011-12-21 Thread Sean Boran
So, as a test I set SSL_FLAG_DONT_VERIFY_PEER, then modified
ssl/support.cc sslCreateClientContext() to ignore
SSL_FLAG_DONT_VERIFY_PEER so that it would always verify.

Then my three test cases work just fine. The unsigned cert is refused
(although with a not very precise error message to the user), and the
two valid ones work.

Another quick hack:
ssl_verify_cb(): disabled domain checking (so that amazon.com worked
with its insufficient www.amazon.com cert). (Setting
sslflags=DONT_VERIFY_DOMAIN on the http_port config line did not
work.)


Obviously the above hacking is not the proper solution though, should
I move this conversation to the squid-dev list? What would you suggest
as the next step Amos?

Sean


On 21 December 2011 08:36, Sean Boran  wrote:
> According to the doc, sslproxy_flags has only one other value,
> NO_DEFAULT_CA.
> That doesn't seem of much use... it does recognise and refuse the
> expired cert though:
>
> 2011/12/21 07:30:01.269| Self signed certificate:
> /C=--/ST=SomeState/L=SomeCity/O=SomeOrganization/OU=SomeOrganizationalUnit/CN=localhost.localdomain/emailAddress=root@localhost.localdomain
> 2011/12/21 07:30:01.269| confirming SSL error 18
> 2011/12/21 07:30:01.269| fwdNegotiateSSL: Error negotiating SSL
> connection on FD 29: error:14090086:SSL
> routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
> (1/-1/0)
>
> But it also refuses a well-known bank:
> Self signed certificate in certificate chain:
> /1.3.6.1.4.1.311.60.2.1.3=CH/2.5.4.15=Private
> Organization/serialNumber=CH-020.3.906.075-9/C=CH/postalCode=8001/ST=Zuerich/L=Zuerich/streetAddress=Paradeplatz
> 8/O=Credit Suisse Group AG/CN=www.credit-suisse.com
> 2011/12/21 07:32:47.859| confirming SSL error 19
>
> And amazon:
> Unable to get local issuer certificate:
> /C=US/ST=Washington/L=Seattle/O=Amazon.com Inc./CN=www.amazon.com
>
> I had expected DONT_VERIFY_PEER to mean "don't verify peer if it is in the
> except acl".
> Hmm.
> Digging in the sources, in ssl/support.cc, there are more than two
> constants defined (I had just looked at the docs so far..).  There is
> no actual VERIFY_PEER though.
>
> Looking at the sources it seems necessary that
> SSL_FLAG_DONT_VERIFY_PEER not be set if this is to be called:
> SSL_CTX_set_verify(sslContext, SSL_VERIFY_PEER ...);
>
> So, compiled the latest HEAD and tried both VERIFY_CRL,
> VERIFY_CRL_ALL which would presumably have done some additional CRL
> checking, but the example sites above fail on that too:
>
> Unable to get certificate CRL:
> /C=US/ST=Washington/L=Seattle/O=Amazon.com Inc./CN=www.amazon.com
>
> Which would look like it requires the existence of a CRL for each
> destination?
> Tried setting capath to an empty directory, but it probably requires
> some standard CRLs.
>
> Squid pulls its standard CA list from openssl (/etc/ssl/certs ?), but
> should just accept empty crl lists if there are none?  Setting
> capath=/etc/ssl/certs and crlfile=/emptyfile does not help.
>
> I must still be missing something..
>
>
> As regards The Measurement Factory, their website looks interesting,
> but I don't see any relevant references. Is there a discussion or
> ticket on what they are planning and how to contact them ? Should I
> ask on squid-dev?
>
> Thanks,
>
> Sean
>
>
> On 21 December 2011 01:02, Amos Jeffries  wrote:
>> On 21/12/2011 3:34 a.m., Sean Boran wrote:
>>>
>>> Hi,
>>>
>>> sslbump allows me to intercept ssl connections and run an AV check on
>>> them.
>>> It generates certs for the target domain (via sslcrtd), so that the
>>> user's browser sees a server cert signed by the proxy.
>>>
>>> If the target domain has a certificate that is expired, or is not
>>> signed by a recognised CA, it's important that the lack of trust is
>>> communicated to the end user.
>>>
>>> Example, on connecting direct (not via a proxy) to
>>> https://wiki.squid-cache.org the certificate presented expired 2
>>> years ago and is not signed by a known CA.
>>> Next, on connecting via a sslbump proxy (v3.2.0.14), the proxy creates
>>> a valid cert for wiki.squid-cache.org and in the user's browser it
>>> looks like wiki.squid-cache.org has a valid cert signed by the proxy.
>>>
>>> So my question is:
>>> What ssl_bump settings would allow the proxy to handle such
>>> destinations with expired or non trusted sites by, for example:
>>> a) Not bumping the connection but piping it through to the user
>>> unchanged, so the user browser notices the invalid certs?
>>> b) Refuses the connection with a message to the user, if the
>>> dest

Re: [squid-users] stopping sslbump to domains with invalid or unsigned certs

2011-12-20 Thread Sean Boran
According to the doc, sslproxy_flags has only one other value,
NO_DEFAULT_CA.
That doesn't seem of much use... it does recognise and refuse the
expired cert though:

2011/12/21 07:30:01.269| Self signed certificate:
/C=--/ST=SomeState/L=SomeCity/O=SomeOrganization/OU=SomeOrganizationalUnit/CN=localhost.localdomain/emailAddress=root@localhost.localdomain
2011/12/21 07:30:01.269| confirming SSL error 18
2011/12/21 07:30:01.269| fwdNegotiateSSL: Error negotiating SSL
connection on FD 29: error:14090086:SSL
routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
(1/-1/0)

But it also refuses a well-known bank:
Self signed certificate in certificate chain:
/1.3.6.1.4.1.311.60.2.1.3=CH/2.5.4.15=Private
Organization/serialNumber=CH-020.3.906.075-9/C=CH/postalCode=8001/ST=Zuerich/L=Zuerich/streetAddress=Paradeplatz
8/O=Credit Suisse Group AG/CN=www.credit-suisse.com
2011/12/21 07:32:47.859| confirming SSL error 19

And amazon:
Unable to get local issuer certificate:
/C=US/ST=Washington/L=Seattle/O=Amazon.com Inc./CN=www.amazon.com

I had expected DONT_VERIFY_PEER to mean "don't verify peer if it is in the
except acl".
Hmm.
Digging in the sources, in ssl/support.cc, there are more than two
constants defined (I had just looked at the docs so far..).  There is
no actual VERIFY_PEER though.

Looking at the sources it seems necessary that
SSL_FLAG_DONT_VERIFY_PEER not be set if this is to be called:
SSL_CTX_set_verify(sslContext, SSL_VERIFY_PEER ...);

So, compiled the latest HEAD and tried both VERIFY_CRL,
VERIFY_CRL_ALL which would presumably have done some additional CRL
checking, but the example sites above fail on that too:

Unable to get certificate CRL:
/C=US/ST=Washington/L=Seattle/O=Amazon.com Inc./CN=www.amazon.com

Which would look like it requires the existence of a CRL for each destination?
Tried setting capath to an empty directory, but it probably requires
some standard CRLs.

Squid pulls its standard CA list from openssl (/etc/ssl/certs ?), but
should just accept empty crl lists if there are none?  Setting
capath=/etc/ssl/certs and crlfile=/emptyfile does not help.

I must still be missing something..


As regards The Measurement Factory, their website looks interesting,
but I don't see any relevant references. Is there a discussion or
ticket on what they are planning and how to contact them ? Should I
ask on squid-dev?

Thanks,

Sean


On 21 December 2011 01:02, Amos Jeffries  wrote:
> On 21/12/2011 3:34 a.m., Sean Boran wrote:
>>
>> Hi,
>>
>> sslbump allows me to intercept ssl connections and run an AV check on
>> them.
>> It generates certs for the target domain (via sslcrtd), so that the
>> user's browser sees a server cert signed by the proxy.
>>
>> If the target domain has a certificate that is expired, or is not
>> signed by a recognised CA, it's important that the lack of trust is
>> communicated to the end user.
>>
>> Example, on connecting direct (not via a proxy) to
>> https://wiki.squid-cache.org the certificate presented expired 2
>> years ago and is not signed by a known CA.
>> Next, on connecting via a sslbump proxy (v3.2.0.14), the proxy creates
>> a valid cert for wiki.squid-cache.org and in the user's browser it
>> looks like wiki.squid-cache.org has a valid cert signed by the proxy.
>>
>> So my question is:
>> What ssl_bump settings would allow the proxy to handle such
>> destinations with expired or non trusted sites by, for example:
>> a) Not bumping the connection but piping it through to the user
>> unchanged, so the user browser notices the invalid certs?
>> b) Refuses the connection with a message to the user, if the
>> destination is not on an allowed ACL of exceptions.
>
>
> Pretty much. The Measurement Factory has a project underway to fix this
> limitation.
> Please contact Alex about sponsoring their work to make it happen faster, or
> get access to the experimental code.
>
>
>>
>> Looking at squid.conf, there is sslproxy_flags, sslproxy_cert_error
>> #  TAG: sslproxy_flags
>> #           DONT_VERIFY_PEER    Accept certificates that fail
>> verification.
>> #           NO_DEFAULT_CA       Don't use the default CA list built in
>>  to OpenSSL.
>> #  TAG: sslproxy_cert_error
>> #       Use this ACL to bypass server certificate validation errors.
>>
>> So, the following config would then implement scenario b) above?
>>
>> # Verify destinations: yes, but allow exceptions
>> sslproxy_flags DONT_VERIFY_PEER
>> #sslproxy_flags none
>> # ignore certs for certain sites
>> acl TrustedName url_regex ^https://badcerts.example.com/
>> sslproxy_cert_error allow TrustedName
>> sslproxy_cert_error deny all
>>
>> ==>  But then, why does it not thro

Re: [squid-users] After reloading squid3, takes about 2 minutes to serve pages?

2011-12-20 Thread Sean Boran
How do you "reload", by doing a restart or "-k reconfigure" (much faster)?

Sean
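
A sketch of the faster variant, assuming the Debian/Ubuntu squid3 binary
name:

   squid3 -k reconfigure    # re-reads squid.conf without a full restart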

On 20 December 2011 16:48, Terry Dobbs  wrote:
> Thanks.
>
> After looking into it more, it appears squidGuard takes a
> while to initialize the blacklists. The only reason I have to reload
> squid3 is for squidGuard to recognize the new blacklist entries.
>
> I am using Berkeley DB for the first time, perhaps that's why it takes
> longer? Although, I don't really see what Berkeley DB is doing for me as
> I am still using flat files for my domains/urls? Guess I should take
> this to the squidGuard list!
>
> -Original Message-
> From: Eliezer Croitoru [mailto:elie...@ec.hadorhabaac.com]
> Sent: Monday, December 19, 2011 1:04 PM
> To: squid-users@squid-cache.org
> Subject: Re: [squid-users] After reloading squid3, takes about 2 minutes
> to serve pages?
>
> On 19/12/2011 19:12, Terry Dobbs wrote:
it's an old issue from squid 3.1 to 3.2; there is nothing yet, as far as I
know, that solves this issue.
>
> Regards
> Eliezer
>> Hi All.
>>
>> I just installed squid3 after running squid2.5 for a number of years.
> I
>> find after reloading squid3 and trying to access the internet on a
> proxy
>> client it takes about 2 minutes until pages load. For example, if I
>> reload squid3 and try to access a page, such as www.tsn.ca it will try
>> to load for a minute or 2 until it finally displays. I understand I
>> shouldn't need to reload squid3 too much, but is there something I am
>> missing to make this happen? I am not using it for cacheing just for
>> monitoring/website control. Here is the log from when I was trying to
>> access the mentioned site:
>>
>> 1324310991.377      2 192.168.70.97 TCP_DENIED/407 2868 GET
>> http://www.tsn.ca/ - NONE/- text/html [Accept: image/gif, image/jpeg,
>> image/pjpeg, image/pjpeg, application/x-shockwave-flash,
>> application/xaml+xml, application/vnd.ms-xpsdocument,
>> application/x-ms-xbap, application/x-ms-application,
>> application/vnd.ms-excel, application/vnd.ms-powerpoint,
>> application/msword, */*\r\nAccept-Language: en-us\r\nUser-Agent:
>> Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET
> CLR
>> 2.0.50727; InfoPath.1)\r\nAccept-Encoding: gzip,
>> deflate\r\nProxy-Connection: Keep-Alive\r\nHost: www.tsn.ca\r\nCookie:
>> TSN=NameKey={ffc1186b-54bb-47ef-b072-097f5fafc5f2};
>> __utma=54771374.1383136889.1323806167.1324305925.1324309890.7;
>>
> __utmz=54771374.1323806167.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(n
>> one); __utmb=54771374.1.10.1324309890\r\n] [HTTP/1.0 407 Proxy
>> Authentication Required\r\nServer: squid/3.0.STABLE19\r\nMime-Version:
>> 1.0\r\nDate: Mon, 19 Dec 2011 16:09:51 GMT\r\nContent-Type:
>> text/html\r\nContent-Length: 2485\r\nX-Squid-Error:
>> ERR_CACHE_ACCESS_DENIED 0\r\nProxy-Authenticate: NTLM\r\n\r]
>> 1324310991.447      5 192.168.70.97 TCP_DENIED/407 3244 GET
>> http://www.tsn.ca/ - NONE/- text/html [Accept: image/gif, image/jpeg,
>> image/pjpeg, image/pjpeg, application/x-shockwave-flash,
>> application/xaml+xml, application/vnd.ms-xpsdocument,
>> application/x-ms-xbap, application/x-ms-application,
>> application/vnd.ms-excel, application/vnd.ms-powerpoint,
>> application/msword, */*\r\nAccept-Language: en-us\r\nUser-Agent:
>> Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET
> CLR
>> 2.0.50727; InfoPath.1)\r\nAccept-Encoding: gzip,
>> deflate\r\nProxy-Connection: Keep-Alive\r\nCookie:
>> TSN=NameKey={ffc1186b-54bb-47ef-b072-097f5fafc5f2};
>> __utma=54771374.1383136889.1323806167.1324305925.1324309890.7;
>>
> __utmz=54771374.1323806167.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(n
>> one); __utmb=54771374.1.10.1324309890\r\nProxy-Authorization: NTLM
>> TlRMTVNTUAABB4IIogAFASgKDw==\r\nHost:
>> www.tsn.ca\r\n] [HTTP/1.0 407 Proxy Authentication Required\r\nServer:
>> squid/3.0.STABLE19\r\nMime-Version: 1.0\r\nDate: Mon, 19 Dec 2011
>> 16:09:51 GMT\r\nContent-Type: text/html\r\nContent-Length:
>> 2583\r\nX-Squid-Error: ERR_CACHE_ACCESS_DENIED
> 0\r\nProxy-Authenticate:
>> NTLM
>>
> TlRMTVNTUAACEgASADAFgomid3FHZLqI7WsAAIoAigBCQwBPAE4A
>>
> VgBFAEMAVABPAFIAAgASAEMATwBOAFYARQBDAFQATwBSAAEACgBTAFEAVQBJAEQABAAmAGEA
>>
> cwBzAG8AYwBpAGEAdABlAGQAYgByAGEAbgBkAHMALgBjAGEAAwA0AHUAYgB1AG4AdAB1AC4A
>> YQBzAHMAbwBjAGkAYQB0AGUAZABiAHIAYQBuAGQAcwAuAGMAYQAA\r\n\r]
>


Re: [squid-users] integrating with wlc

2011-12-20 Thread Sean Boran
It might be possible to send the WLC logs to a syslog server, where
one could pipe them into a parser to extract the pairs needed and from
there create an ACL for squid?

Sean

2011/12/20 Henrik Nordström :
> tis 2011-12-20 klockan 14:09 +0200 skrev E.S. Rosenberg:
>
>> About the wlc I don't know for sure yet, I can probably create a
>> script/program that when presented with an IP can convert it to a
>> username on the Radius server...
>> But I don't know how that would then interact with squid...
>> Thanks,
>
> You can then plug that into Squid via the extenal acl interface. See
> external_acl_type.
>
>  http://www.squid-cache.org/Doc/config/external_acl_type/
>
> Regards
> Henrik
>
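
A minimal sketch of that wiring, assuming a hypothetical ip_to_user helper
that reads one source IP per line on stdin and answers "OK user=NAME" or
"ERR":

   external_acl_type ip2user ttl=300 %SRC /usr/local/bin/ip_to_user
   acl wlc_users external ip2user
   http_access allow wlc_users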


[squid-users] stopping sslbump to domains with invalid or unsigned certs

2011-12-20 Thread Sean Boran
Hi,

sslbump allows me to interrupts ssl connections and run an AV check on them.
It generates a certs for the target domain (via sslcrtd), so that the
users browser sees a server cert signed by the proxy.

If the target domain has a certificate that is expired, or it not
signed by a recognised CA, its important that the lack of trust is
communicated to the end user.

Example, on connecting direct (not via a proxy) to
https://wiki.squid-cache.org the certificated presented is expired 2
years ago and not signed by known CA  .
Noext on connecting via a sslbump proxy (v3.2.0.14), the proxy creates
a valid cert for wiki.squid-cache.org and in the user's browsers it
looks like wiki.squid-cache.org has a valid cert signed by the proxy.

So my question is:
What ssl_bump settings would allow the proxy to handle such
destinations with expired or non trusted sites by, for example:
a) Not bumping the connection but piping it through to the user
unchanged, so the user browser notices the invalid certs?
b) Refuses the connection with a message to the user, if the
destination is not on an allowed ACL of exceptions.

Looking at squid.conf, there is sslproxy_flags, sslproxy_cert_error
#  TAG: sslproxy_flags
#   DONT_VERIFY_PEERAccept certificates that fail verification.
#   NO_DEFAULT_CA   Don't use the default CA list built in
 to OpenSSL.
#  TAG: sslproxy_cert_error
#   Use this ACL to bypass server certificate validation errors.

So, the following config would then implement scenario b) above?

# Verify destinations: yes, but allow exceptions
sslproxy_flags DONT_VERIFY_PEER
#sslproxy_flags none
# ignore Certs with certain cites
acl TrustedName url_regex ^https://badcerts.example.com/
sslproxy_cert_error allow TrustedName
sslproxy_cert_error deny all

==> But then, why does it not throw an error when connecting to
https://wiki.squid-cache.org ?

Next I thought it might be an idea to delete any cached certs and try again.
Looking in /var/lib/squid_ssl_db/index.txt, there is an entry for the
destination:
V   121107103058Z   0757348Eunknown /CN=www.squid-cache.org
So, then I deleted 0757348E.pem to force a new cert to be generated,
and restarted squid.

Connecting to https://wiki.squid-cache.org/ resulted in a new cert
being silently generated, stored in 075734AD.pem and the https
connection signed.

What am I doing wrong?

Finally had a look at the sources:
sslproxy_flags  led to Config.ssl_client.flags in cf_parser.cci which
led to ssl_client.sslContext in cache_cf.cc to initiateSSL() in
forward.cc and finally ssl_verify_cb in ssl/support.cc.

There one finds nice debugs prefixed with "83", so I enabled high
debugging for section 83:
   debug_options ALL,1 83,20 23,2 26,10 33,4 84,3
Restarted squid, and watched with
   tail -f cache.log|egrep -i "SSL|certificate"
but don't see certificate errors.

Any suggestions?


Thanks,
Sean


[squid-users] sslBump + signed proxy (hierarchical CA) cert

2011-12-13 Thread Sean Boran
Hi,


The problem:
after successful tests with a self-signed cert for sslbump, the idea
is to use a "real" cert signed by a CA known in common browsers. Such a
cert has a hierarchy "chain", i.e. the proxy cert is signed by an
official CA, which is signed by a CA whose key is in browsers.

Support for such cert chaining was introduced in squid 3.2, I
understand, but I've not had luck in getting it running so far :-(
See also http://bugs.squid-cache.org/show_bug.cgi?id=3426

Perhaps someone on squid-users has a few tips to help me understand if
the issue is with my config, or the sslbump code.

The Test environment:
-
Running the recent squid-3.2.0.14 tarball, on Ubuntu 10.04
A few debug options to try and see useful logs:
  debug_options ALL,1 83,8 23,2 84,5
  sslcrtd_program /usr/local/squid/libexec/ssl_crtd -d -s
/var/lib/squid_ssl_db -M 4MB

The proxy's cert was generated by:
- openssl genrsa -out proxy.vptt.ch.key 2048
- send to CA and get back a .crt file
- create a file containing the private keys, signed public key, and
public keys of the CA chain:
cat proxy.cer proxy.pem proxy.key  CA_1_pem.crt Root_CA_1_pem.crt > proxy.chain

http_port 80 ssl-bump cert=/etc/squid/ssl/proxy.chain
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
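
One thing worth double-checking with chained certs: the file given to cert=
is read in order, so the proxy's own certificate must come first, followed
by the intermediate and then the root CA certificate; the private key can
also be supplied separately via key=. A sketch using the file names from
above:

   cat proxy.cer CA_1_pem.crt Root_CA_1_pem.crt > proxy.chain
   http_port 80 ssl-bump cert=/etc/squid/ssl/proxy.chain \
       key=/etc/squid/ssl/proxy.vptt.ch.key \
       generate-host-certificates=on dynamic_cert_mem_cache_size=4MB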

Before starting, wipe all cached certs:
  /etc/init.d/squid stop
  \rm -rf /var/lib/squid_ssl_db
  /usr/local/squid/libexec/ssl_crtd -c -s /var/lib/squid_ssl_db
  chown -R proxy /var/lib/squid_ssl_db
  /etc/init.d/squid start


Starting squid:

Having started squid, visit https://www.squid-cache.org,
the browser (FF 8.0.1 on Windows) complains "www.squid-cache.org uses
an invalid security certificate".

Asking the browser to show the cert details, one sees that the
certificate hierarchy only has just one level, www.squid-cache.org
signed by the proxy (i.e. no sign of the intermediate CAs).


Analysis:
---
Details logs are listed below,
- sslbump is being activated, a new cert is generated for the
destination website, and signed. Two public certs are visible in the
logs:
  a) the proxy's cert, which when analysed (pasted to a file.crt and
viewed under windows) contains a correct hierarchy proxy > CA1 > root_CA
  b) a cert for www.squid-cache.org, which is issued by the proxy, but
does not contain hierarchy information.

- the browser replies finally with "unknown ca"


Any suggestions as to what I'm doing wrong, or what measures to take
to debug in more detail?

Thanks in advance,

Sean



-- snip  logs 

2011/12/13 13:08:55.564| Accepting SSL bumped HTTP Socket connections
at local=[::]:80 remote=[::] FD 22 flags=9
2011/12/13 13:08:56| storeLateRelease: released 0 objects
..
2011/12/13 13:09:06.961| client_side_request.cc(1469) doCallouts:
Doing calloutContext->hostHeaderVerify()
2011/12/13 13:09:06.962| client_side_request.cc(1476) doCallouts:
Doing calloutContext->clientAccessCheck()
2011/12/13 13:09:06.963| urlParse: URI has whitespace:
{icap://127.0.0.1:1344/squidclamav ICAP/1.0
}
2011/12/13 13:09:06.963| urlParse: URI has whitespace:
{icap://127.0.0.1:1344/squidclamav ICAP/1.0
}
2011/12/13 13:09:06.967| client_side_request.cc(1505) doCallouts:
Doing calloutContext->clientAccessCheck2()
2011/12/13 13:09:06.967| client_side_request.cc(1512) doCallouts:
Doing clientInterpretRequestHeaders()
2011/12/13 13:09:06.967| client_side_request.cc(1344) sslBumpNeeded:
sslBump required: Yes
2011/12/13 13:09:06.967| client_side_request.cc(1568) doCallouts:
calling processRequest()
2011/12/13 13:09:06.967| GetFirstAvailable: Running servers 5
2011/12/13 13:09:06.967| helperDispatch: Request sent to ssl_crtd #1, 3739 bytes
2011/12/13 13:09:06.967| helperSubmit: new_certificate 3717
host=www.squid-cache.org
-BEGIN CERTIFICATE-

-END CERTIFICATE-
-BEGIN RSA PRIVATE KEY-
..
-END RSA PRIVATE KEY-
2011/12/13 13:09:07.034| helperHandleRead: 1885 bytes from ssl_crtd #1
2011/12/13 13:09:07.034| helperHandleRead: 'OK 1876
-BEGIN CERTIFICATE-
MIICrDCCAZQCBAdgtFYwDQYJKoZIhvcNAQEFBQAwgZoxCzAJBgNVBAYTAkNIMQ0w
CwYDVQQIEwRCZXJuMQ0wCwYDVQQHEwRCZXJuMRUwEwYDVQQKEwxTd2lzc2NvbSBM
dGQxITAfBgNVBAsTGFN0cmF0ZWd5IGFuZCBJbm5vdmF0aW9uczEWMBQGA1UEAxMN
cHJveHkudnB0dC5jaDEbMBkGCSqGSIb3DQEJARYMcm9vdEB2cHR0LmNoMB4XDTEx
MTIxMTEyMDkwN1oXDTE0MTEyMzEzMjIzM1owHjEcMBoGA1UEAxMTd3d3LnNxdWlk
LWNhY2hlLm9yZzCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAoNjuzSwl4Ri7
M1h7QiWAEWVGMUkPTxFP4Nl4+X6JvoZ6+dQ+Dprd/ng+o01j2ckq9Y7hKfjWpugd
MthuRDGAbkd4alzmQwfEcoXoXr5wAkofBkxonXAwgHtpVXeDDkBpRxnpgYkxc2Jk
Dkz0xvHRzxWTLZBM+LvTl9Yppyt9bUMCAwEAATANBgkqhkiG9w0BAQUFAAOCAQEA
nGlozvURpAhxk6S6zYqCryq3MZqHR6uJufzB5YVQW1VIGKTkmwlp3nb5zB3D54S6
jNHJq6enPNzxd9XiNM8NIukmgwacYWKiyPaPILTofASM/FezszGbBpZe0fOPzl78
CHG7s6g7tv9oSgjRZJuzEjaXqmxxcVo99rApnjeBB75atCh1RTPtikC2Y/paeGzO
Dq1+ItQ9oVljd5D4DP13Kx9Tj/Y+OvVgAyOVyQcW7vi3pa9AKN1yOpieAe55AH71
hvCjlewZioAFJvFzX97ZsB9qi2gVsZin9BCmuUCeXHK91T8RvnXmpCF2W3qk4UHi
QJHkll9Yv+GwNnNoJcXNVw==
-END CERTIFICATE

Re: [squid-users] increasing file descriptors in Ubuntu 10.04/2.7.STABLE9

2011-12-06 Thread Sean Boran
squid starts as root, but runs as the proxy user, or rather it changes
its uid to that after starting (cache_effective_user).

So I'm not sure if max_filedesc is effective before or after the chuid.
I'll report back if I get the descriptor warning again.

Thanks,

Sean


On 6 December 2011 10:59, Amos Jeffries  wrote:
> On 6/12/2011 7:54 p.m., Sean Boran wrote:
>>
>> Hi,
>>
>> On  squid proxy using the stock Ubuntu squid packages, the file
>> descriptors need to be increased.
>>
>> I found two suggestions:
>>
>> http://chrischan.blog-city.com/ubuntu_804_lts_increasing_squid_file_descriptors.htm
>> but ulimit -n was still 1024 after rebooting.
>> (and it also talks about recompiling squid with
> >> --with-filedescriptors=8192, but I'd prefer to keep the stock ubuntu
>> package if possible).
>>
>> This link:
>>
>> http://www.cyberciti.biz/faq/squid-proxy-server-running-out-filedescriptors/
>> suggests alternative settings in /etc/security/limits.conf
>> but "ulimit -a | grep 'open files'" still says 1024
>>
>> There was also a suggestion found to set a value in
>> /proc/sys/fs/file-max, but the current value was already 392877
>>
>> Finally, the second article suggests (for red hat) just setting
>> max_filedesc 4096
>> in squid.conf
>> and this actually works, i.e.
>> "squidclient -p 80  mgr:info | grep 'file descri'"
>> reports 4096
>>
>> So my question: is the squid.conf sufficient? How is the squid setting
>> related to ulimit, if at all?
>
>
> They are related. ulimit sets the OS limits squid can use,
> max_filedescriptors (with its alias for RHEL) sets how many Squid tries to
> use.
> When Squid is run as root or with the right libcap security privileges it
> should not need the ulimit, but if in doubt it won't hurt.
>
> Amos
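
For the archives, the two knobs side by side (a sketch; the values are
illustrative, and limits.conf assumes squid runs as user "proxy"):

   # /etc/security/limits.conf -- raise the OS ceiling for the squid user
   proxy  soft  nofile  4096
   proxy  hard  nofile  4096

   # squid.conf -- how many descriptors squid itself will try to use
   max_filedesc 4096

Note that limits.conf is applied by PAM at login, so a squid started from an
init/upstart script may never see it; that would explain why "ulimit -n"
stayed at 1024 above. Adding "ulimit -n 4096" to the init script before
starting squid is a common workaround.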


Re: [squid-users] squid dies: ssl_crtd helpers are crashing too rapidly

2011-12-06 Thread Sean Boran
Hi,

Hmm. Is that negotiation between browser and squid or between squid
and the destination site?

Openssl is 0.9.8k (standard with Ubuntu Lucid 10.04)

I wiped /var/lib/squid_ssl_db/certs, and re-ran
"/usr/local/squid/libexec/ssl_crtd -c -s /var/lib/squid_ssl_db"
so that new certs would be generated.

... and so far, no crashes.

If this resolves the issue, then perhaps the problem was that I changed
the proxy's CA key several times during tests, so certs for some target
sites would have been generated with different CA keys, and would still
be cached in /var/lib/squid_ssl_db/certs.

The lesson would then be to empty /var/lib/squid_ssl_db/certs if one
changes the CA key :-)
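
For later reference, the full wipe-and-regenerate sequence (a sketch, using
the paths from this thread):

   /etc/init.d/squid stop
   rm -rf /var/lib/squid_ssl_db
   /usr/local/squid/libexec/ssl_crtd -c -s /var/lib/squid_ssl_db
   chown -R proxy /var/lib/squid_ssl_db
   /etc/init.d/squid start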

Thanks,

Sean





On 2 December 2011 17:48, Amos Jeffries  wrote:
> On 3/12/2011 4:44 a.m., Sean Boran wrote:
>>
>> With squid running sslbump in routing mode, and used by a handful of
>> users, squid is crashing regularly, linked to visiting SSL sites.
>>
>> Logs
>> --
>> 2011/11/29 11:39:36| clientNegotiateSSL: Error negotiating SSL connection
>> on FD
>> 45: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number
>> (1/-1)
>
>
> Something in your OpenSSL library is incompatible with the SSL or TLS
> version being used by one of the certificates.
>
> Given your helper problems I would not put it past being a corrupted local
> certificate file in the helper's database.
>
>
>> 2011/11/29 11:39:43| WARNING: ssl_crtd #2 (FD 11) exited
>> 2011/11/29 11:39:43| Too few ssl_crtd processes are running (need 1/50)
>> 2011/11/29 11:39:43| Starting new helpers
>> 2011/11/29 11:39:43| helperOpenServers: Starting 1/50 'ssl_crtd' processes
>> 2011/11/29 11:39:43| client_side.cc(3462) sslCrtdHandleReply: "ssl_crtd"
>> helper
>> return  reply
>
>
> Major problem. Why is the helper dying on startup?
>
>
>> 2011/11/29 11:39:44| WARNING: ssl_crtd #1 (FD 9) exited
>> 2011/11/29 11:39:44| Too few ssl_crtd processes are running (need 1/50)
>> 2011/11/29 11:39:44| storeDirWriteCleanLogs: Starting...
>> 2011/11/29 11:39:44|   Finished.  Wrote 0 entries.
>> 2011/11/29 11:39:44|   Took 0.00 seconds (  0.00 entries/sec).
>> FATAL: The ssl_crtd helpers are crashing too rapidly, need help!
>> --
>>
>> So ssl_crtd is dying which is one issue, but its also killing squid which
>> is
>> even worse.
>
>
> As designed. These helpers dying is not as trivial as you seem to think. It
> is happening immediately on starting the helper. Ignoring the crash abort in
> Squid only works if the helpers get some work done between dying. Ignoring
> startup crashes will lead to the machine CPU(s) being overloaded.
>
>
> Amos


[squid-users] increasing file descriptors in Ubuntu 10.04/2.7.STABLE9

2011-12-05 Thread Sean Boran
Hi,

On  squid proxy using the stock Ubuntu squid packages, the file
descriptors need to be increased.

I found two suggestions:
http://chrischan.blog-city.com/ubuntu_804_lts_increasing_squid_file_descriptors.htm
but ulimit -n was still 1024 after rebooting.
(and it also talks about recompiling squid with
--with-filedescriptors=8192, but I'd prefer to keep the stock ubuntu
package if possible).

This link:
http://www.cyberciti.biz/faq/squid-proxy-server-running-out-filedescriptors/
suggests alternative settings in /etc/security/limits.conf
but "ulimit -a | grep 'open files'" still says 1024

There was also a suggestion found to set a value in
/proc/sys/fs/file-max, but the current value was already 392877

Finally, the second article suggests (for red hat) just setting
max_filedesc 4096
in squid.conf
and this actually works, i.e.
"squidclient -p 80  mgr:info | grep 'file descri'"
reports 4096

So my question: is the squid.conf setting sufficient? How is the squid setting
related to ulimit, if at all?

Thanks in advance,

Sean


Re: [squid-users] squid/sslbump + IE9

2011-12-04 Thread Sean Boran
Yes it is classical forgery as you say, but that is how SSL interception works.
And yes, I created a self signed CA cert for the proxy and manually
installed it into FF and IE browsers.

Firefox: Open 'Options' > 'Advanced' > 'Encryption' > 'View
Certificates' > 'Authorities' > 'Import' button, select the .der file
attached, press 'OK'
IE: Tools > Options > Content > Certificates > Trusted Root
Certification Authorities

Sean


On 3 December 2011 04:11, Amos Jeffries  wrote:
>
> On 3/12/2011 6:22 a.m., Sean Boran wrote:
>>
>> Well yes, we are trying to intercept...
>> I dont see where the "forgery" is, if my proxy CA is trusted and a
>> cert is generated for that target, signed by that CA, why should the
>> browser complain?
>
>
> The "forgery" is that you are creating a certificate claiming to be fetched 
> from that website and authorizing you to act as their intermediary with 
> complete security clearance. When it is not. Exactly like me presenting 
> someone with a cheque against your bank account signed by myself. Forgery, by 
> the plain and simple definition of the word. This is why the browser 
> complains unless it has explicitly been made to trust the CA you use to sign.
>
> I missed the part where you had your signing CA already in the browser and 
> read that as the browser not complaining when only presented with the plain 
> cert.
>
>
>> And why would FF not complain but IE9 does?
>
>
> The one complaining does not trust the certificate or some part of its CA 
> chain. As others have said, each of the three browser engines uses their own 
> CA collections.
>
> Amos


Re: [squid-users] squid/sslbump + IE9

2011-12-02 Thread Sean Boran
Well yes, we are trying to intercept...
I dont see where the "forgery" is, if my proxy CA is trusted and a
cert is generated for that target, signed by that CA, why should the
browser complain?

And why would FF not complain but IE9 does?

Sean


On 2 December 2011 17:29, Amos Jeffries  wrote:
> On 3/12/2011 4:16 a.m., Sean Boran wrote:
>>
>> Yes it was added to the Windows cert store.  (Tools >  Options >  Content
>>>
>>> Certificates >  Trusted Root Certification Authorities).
>>
>> Not all HTTPS websites cause errors either, e.g.
>> https://www.credit-suisse.com is fine.
>
>
> Ouch. Their certificate is permitting any third-party (including your Squid)
> to forge their site credentials.
>
>
> Amos


[squid-users] squid dies: ssl_crtd helpers are crashing too rapidly

2011-12-02 Thread Sean Boran
With squid running sslbump in routing mode, and used by a handful of
users, squid is crashing regularly, linked to visiting SSL sites.

Logs
--
2011/11/29 11:39:36| clientNegotiateSSL: Error negotiating SSL connection on FD
45: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number (1/-1)
2011/11/29 11:39:43| WARNING: ssl_crtd #2 (FD 11) exited
2011/11/29 11:39:43| Too few ssl_crtd processes are running (need 1/50)
2011/11/29 11:39:43| Starting new helpers
2011/11/29 11:39:43| helperOpenServers: Starting 1/50 'ssl_crtd' processes
2011/11/29 11:39:43| client_side.cc(3462) sslCrtdHandleReply: "ssl_crtd" helper
return  reply
2011/11/29 11:39:44| WARNING: ssl_crtd #1 (FD 9) exited
2011/11/29 11:39:44| Too few ssl_crtd processes are running (need 1/50)
2011/11/29 11:39:44| storeDirWriteCleanLogs: Starting...
2011/11/29 11:39:44|   Finished.  Wrote 0 entries.
2011/11/29 11:39:44|   Took 0.00 seconds (  0.00 entries/sec).
FATAL: The ssl_crtd helpers are crashing too rapidly, need help!
--

So ssl_crtd is dying which is one issue, but its also killing squid which is
even worse.

Initially I thought it might be a lack of ssl_crtd resources, so the
process count was
increased from 5 to 50, but that didn't help

Some config settings:
--
http_port 80 ssl-bump cert=/etc/squid/ssl/www.sample.com.pem
generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
sslproxy_flags DONT_VERIFY_PEER
sslcrtd_program /usr/local/squid/libexec/ssl_crtd -s /var/lib/squid_ssl_db -M
4MB
sslcrtd_children 50
--

This has happened with squid 3.1 and currently on 3.2 HEAD.
A bug report has been opened http://bugs.squid-cache.org/show_bug.cgi?id=3436

Has anyone a workaround to keep squid running and somehow reset its
run away ssl children?

Sean


Re: [squid-users] Transparent HTTP Proxy and SSL-BUMP feature

2011-12-02 Thread Sean Boran
I'm not sure you can use sslbump in transparent mode.
I remember reading something to that effect.
There are also articles like this that might help:
https://dvas0004.wordpress.com/2011/03/22/squid-transparent-ssl-interception/

Sean


On 2 December 2011 13:02, Maret Ludovic  wrote:
> Hi there !
>
> I want to configure a transparent proxy for HTTP and SSL. HTTP works
> pretty well but i'm stuck with SSL even if i use the ssl-bump feature.
>
> Right now, it almost works if i use 2 different ports for the http_port
> & https_port :
>
> http_port 3129 transparent
> https_port 3130 ssl-bump cert=/etc/squid/ssl_cert/partproxy01-test.pem
> key=/etc/squid/ssl_cert/private/partproxy01-key-test.pem
>
> HTTP is ok, i get the warning about a probable man-in-the-middle attack
> when i tried to access a SSL web site. I did just add an exception. And
> i get an error : Invalid URL
>
> In the logs, i found :
>
> 1322820580.454 0 10.194.2.63 NONE/400 3625 GET /pki - NONE/- text/html
>
> When i tried to access https://www.switch.ch/pki
> Apparently, squid cut the URL and remove the host.domain part…
>
> When i tried to use CONNECT method and ssl-bump on http_port. I get an
> error in the browser “ssl_error_rx_record_too_long” or
> “ERR_SSL_PROTOCOL_ERROR”
>
> Any clues ?
>
> Many Thanks
>
> Ludovic


Re: [squid-users] squid/sslbump + IE9

2011-12-02 Thread Sean Boran
Yes it was added to the Windows cert store.  (Tools > Options > Content
> Certificates > Trusted Root Certification Authorities).

Not all HTTPS websites cause errors either, e.g.
https://www.credit-suisse.com is fine.

Sean

On 2 December 2011 15:03, Guy Helmer  wrote:
>
> On Dec 2, 2011, at 3:52 AM, Sean Boran wrote:
>
> > Hi,
> >
> > I'm testing squid v3 with SSL interception  (the interception is to do
> > AV checking with icap) in routing mode.
> > Sslbump/dynamic certs are configured. A self-signed cert is used on
> > the proxy, and installed as a ca on browsers.
> >
> > https to several sites (such as Gmail.com, boi.com) works with FF
> > (although FF is initially much slower); but gives errors in IE9
> > "Internet Explorer blocked this website from displaying content with
> > security certificate errors"
> >
> > Clicking on the lock icon shows the certificate with name
> > accounts.google.com and signed by myproxy.com, which is fine. So why
> > is IE not happy?
> >
> > In the squid logs:
> > NONE/000 0 CONNECT accounts.google.com:443 - HIER_NONE/- -
> > TCP_MISS/200 9497 GET https://accounts.google.com/ServiceLogin? -
> > HIER_DIRECT/209.85.148.84 text/html
> > NONE/000 0 CONNECT ssl.google-analytics.com:443 - HIER_NONE/- -
> > NONE/000 0 CONNECT mail.google.com:443 - HIER_NONE/- -
> > NONE/000 0 CONNECT ssl.gstatic.com:443 - HIER_NONE/- -
> > TCP_MISS/200 1301 POST
> > http://safebrowsing.clients.google.com/safebrowsing/downloads
> >
> > Is IE9 fussier than other browsers regarding SSL?
> >
> >
> > Any tips/best practices to get SSL interception running smoothly ? :-)
> >
> > Thanks,
> >
> > Sean
>
> I believe Firefox uses its own certificate store while IE uses the Windows 
> certificate store. Was the self-signed cert added to the Windows cert store?
>
> Guy


[squid-users] Re: squid/sslbump + IE9

2011-12-02 Thread Sean Boran
Hi,

I'm testing squid v3 with SSL interception  (the interception is to do
AV checking with icap) in routing mode.
Sslbump/dynamic certs are configured. A self-signed cert is used on
the proxy, and installed as a ca on browsers.

https to several sites (such as Gmail.com, boi.com) works with FF
(although FF is initially much slower); but gives errors in IE9
"Internet Explorer blocked this website from displaying content with
security certificate errors"

Clicking on the lock icon shows the certificate with name
accounts.google.com and signed by myproxy.com, which is fine. So why
is IE not happy?

In the squid logs:
 NONE/000 0 CONNECT accounts.google.com:443 - HIER_NONE/- -
TCP_MISS/200 9497 GET https://accounts.google.com/ServiceLogin? -
HIER_DIRECT/209.85.148.84 text/html
NONE/000 0 CONNECT ssl.google-analytics.com:443 - HIER_NONE/- -
 NONE/000 0 CONNECT mail.google.com:443 - HIER_NONE/- -
NONE/000 0 CONNECT ssl.gstatic.com:443 - HIER_NONE/- -
TCP_MISS/200 1301 POST
http://safebrowsing.clients.google.com/safebrowsing/downloads

Is IE9 fussier than other browsers regarding SSL?


Any tips/best practices to get SSL interception running smoothly ? :-)

Thanks,

Sean