Re: [squid-users] External ACL Auth Session DB for 100+ clients behind NAT

2012-06-01 Thread Amos Jeffries

On 31/05/2012 3:40 a.m., Nishant Sharma wrote:

On Sun, May 27, 2012 at 5:28 PM, Amos Jeffries wrote:

If you could send in sample strings - received and final expected
result, I can help with hacking Perl code.

Thank you. Expected input is strings like:

1 foo bar   ->  channel-ID=1, UUID=foo bar
2 hello     ->  channel-ID=2, UUID=hello

Only numerics in the channel-ID, followed by one SP to separate them, then
anything including more SP characters in the UUID portion.


my $string = "1 foo bar";
$string =~ m/^(\d+)\s(.*)$/;
my ($cid, $uuid) = ($1, $2);

The above code will give values for $cid and $uuid as:

$cid = 1
$uuid = "foo bar"

Let me know if that's as expected.


Thank you. Works perfectly.

Amos


Re: [squid-users] External ACL Auth Session DB for 100+ clients behind NAT

2012-05-30 Thread Nishant Sharma
On Sun, May 27, 2012 at 5:28 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 If you could send in sample strings - received and final expected
 result, I can help with hacking Perl code.

 Thank you. Expected input is strings like:

 1 foo bar   ->  channel-ID=1, UUID=foo bar
 2 hello     ->  channel-ID=2, UUID=hello

 Only numerics in the channel-ID, followed by one SP to separate them, then
 anything including more SP characters in the UUID portion.


my $string = "1 foo bar";
$string =~ m/^(\d+)\s(.*)$/;
my ($cid, $uuid) = ($1, $2);

The above code will give values for $cid and $uuid as:

$cid = 1
$uuid = "foo bar"

Let me know if that's as expected.
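For completeness, here is a minimal sketch of how that parse might sit inside a concurrent external ACL helper's read loop. The `parse_line` name is mine and purely illustrative, and the unconditional OK reply is a placeholder where a real helper would consult the session DB:

```perl
#!/usr/bin/perl
# Hypothetical sketch of a concurrent Squid external ACL helper loop.
use strict;
use warnings;

$| = 1;    # autoflush: Squid expects each reply immediately

# Split a helper input line into channel-ID and UUID.
# channel-ID is numeric, one SP separates it, and the rest of the
# line (further SP characters included) is the UUID.
sub parse_line {
    my ($line) = @_;
    chomp $line;
    return unless $line =~ m/^(\d+)\s(.*)$/;
    return ($1, $2);
}

while (my $line = <STDIN>) {
    my ($cid, $uuid) = parse_line($line);
    next unless defined $cid;    # ignore malformed input
    # A real helper would look $uuid up in the session DB here
    # and answer "$cid OK" or "$cid ERR" accordingly.
    print "$cid OK\n";
}
```

With concurrency enabled, the channel-ID must be echoed back at the start of each reply so Squid can match answers to pending requests.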


 code submission and auditing procedures are detailed at:
 http://wiki.squid-cache.org/MergeProcedure


Thanks for this. I will submit it soon.

regards,
Nishant


Re: [squid-users] External ACL Auth Session DB for 100+ clients behind NAT

2012-05-27 Thread Amos Jeffries

On 22/05/2012 6:36 p.m., Nishant Sharma wrote:

Hi Amos,

Thanks for your detailed response.

On Tue, May 22, 2012 at 4:56 AM, Amos Jeffries wrote:

external_acl_type hosted_auth ttl=0 %SRC  /etc/squid/auth.pl
acl loggedin external hosted_auth
deny_info https://hostedserver/auth.html loggedin
http_access deny !loggedin
http_access allow all


Please be aware there is no authentication in this setup, despite the login
on your portal page.
What you have is session-based *authorization*.
It is a razor-thin line, but critical to be aware of, since NAT erases and
plays with the %SRC key which you are using to identify clients. 1) NAT
hides unwanted visitors on the POP networks. 2) The XFF workaround to undo
the NAT is header-based, with risks of header forgery. So NAT introduces
multiple edge cases where attackers can leak through and hijack sessions.

I understand the difference between Authentication and Authorization,
but here the prime motive is to enforce user based access rules and
perform AuthN / AuthZ over a secured channel against IMAP.

If we segregate the zones as Trusted and Non-Trusted where the
trusted zone is our HO and a proxy forwards the requests to our
publicly hosted squid with XFF header while Non-Trusted zones are
our spokes and roadwarrior users who are behind a simple NAT. Trusted
zone users are allowed to access the proxy with just authorization
(session / form based) and Non-Trusted zone users need to authenticate
compulsorily (explicit proxy-auth). This way, we could enforce the
policies based on users instead of IPs.

Again, the problem is the secured authentication against IMAPS. Mail
is hosted on google and we can't use DIGEST that we receive from
browsers. BASIC auth is ruled out again due to security reasons. VPN /
Stunnel is not considered due to user credential / machine management.


  While the HTML file displays a login
form over HTTPS and sends request to a CGI script which authenticates
against IMAPS and populates the DB with session information. I
understand that I can not use cookies for authentication as browser
will not include cookie set by our authentication page for request to
other domains.

Correct.

On some more googling, I found something called Surrogate Cookies here:
https://kb.bluecoat.com/index?page=content&id=KB3407
https://kb.bluecoat.com/index?page=content&id=KB2877

From what I could understand, their primary use is with a reverse proxy
in front of webservers with a limited set of domains behind them, but they
are also used for surrogate authentication in normal proxy deployments by
forcing proxies to accept cookies for any domain? Even the commercial
proxies advise against using surrogate credentials wherever possible. The
major disadvantage I can see is that they can't be used with wget, lynx,
elinks, java applets etc., which expect usual proxy authentication.


My perl skills are a bit lacking in how to merge the format %SRC
%{X-Forwarded-For} into one UUID token. There is the space between the two
tokens, and the XFF header is likely to contain spaces internally, which the
script as published can't handle.
HINT: If anyone has a fix for that *please* let me know. I know it's
possible; I stumbled on a perl trick ages back that would do it, then lost
the script it was in :(

The following snippet should help if you just want to strip spaces from the
$token string:

my $token = "%SRC %{X-Forwarded-For}";
$token =~ s/ //;   # This removes only the first space
$token =~ s/ //g;  # This removes all the spaces in the string

If you could send in sample strings - received and final expected
result, I can help with hacking Perl code.


Thank you. Expected input is strings like:

1 foo bar   ->  channel-ID=1, UUID=foo bar
2 hello     ->  channel-ID=2, UUID=hello

Only numerics in the channel-ID, followed by one SP to separate them, 
then anything including more SP characters in the UUID portion.


I think my initial idea was something nasty like splitting on SP, then 
stripping the channel-ID followed by the space from the original and calling 
the remainder the UUID. Any improvements on that would be great.





I have also written an auth helper based on the existing POP3 auth
helper. It authenticates against IMAP and IMAPS depending on the
arguments provided e.g.:

## IMAPS against google but return ERR if user tries to authenticate
with @gmail.com
imap_auth imaps://imap.google.com mygooglehostedmail.com

## IMAP auth against my own IMAP server
imap_auth imap://imap.mydomain.com mydomain.com

Where should I submit that as contribution to Squid?


code submission and auditing procedures are detailed at:
http://wiki.squid-cache.org/MergeProcedure

Essentially, email a patch or the helper sub-folder to squid-dev at 
squid-cache.org with a description of what it's for. Under our naming 
scheme this would be basic_imap_auth.


I'm also asking helper contributors to be willing to support their 
helper for a reasonable period (a year or so) here on squid-users, to 
reduce workload for everyone and get issues fixed faster.



Amos


Re: [squid-users] External ACL Auth Session DB for 100+ clients behind NAT

2012-05-22 Thread Nishant Sharma
Hi Amos,

Thanks for your detailed response.

On Tue, May 22, 2012 at 4:56 AM, Amos Jeffries squ...@treenet.co.nz wrote:
 external_acl_type hosted_auth ttl=0 %SRC  /etc/squid/auth.pl
 acl loggedin external hosted_auth
 deny_info https://hostedserver/auth.html loggedin
 http_access deny !loggedin
 http_access allow all

 Please be aware there is no authentication in this setup, despite the login
 on your portal page.
 What you have is session-based *authorization*.
 It is a razor-thin line, but critical to be aware of, since NAT erases and
 plays with the %SRC key which you are using to identify clients. 1) NAT
 hides unwanted visitors on the POP networks. 2) The XFF workaround to undo
 the NAT is header-based, with risks of header forgery. So NAT introduces
 multiple edge cases where attackers can leak through and hijack sessions.

I understand the difference between Authentication and Authorization,
but here the prime motive is to enforce user based access rules and
perform AuthN / AuthZ over a secured channel against IMAP.

If we segregate the zones as Trusted and Non-Trusted where the
trusted zone is our HO and a proxy forwards the requests to our
publicly hosted squid with XFF header while Non-Trusted zones are
our spokes and roadwarrior users who are behind a simple NAT. Trusted
zone users are allowed to access the proxy with just authorization
(session / form based) and Non-Trusted zone users need to authenticate
compulsorily (explicit proxy-auth). This way, we could enforce the
policies based on users instead of IPs.

Again, the problem is the secured authentication against IMAPS. Mail
is hosted on google and we can't use DIGEST that we receive from
browsers. BASIC auth is ruled out again due to security reasons. VPN /
Stunnel is not considered due to user credential / machine management.

  While the HTML file displays a login
 form over HTTPS and sends request to a CGI script which authenticates
 against IMAPS and populates the DB with session information. I
 understand that I can not use cookies for authentication as browser
 will not include cookie set by our authentication page for request to
 other domains.

 Correct.

On some more googling, I found something called Surrogate Cookies here:
https://kb.bluecoat.com/index?page=content&id=KB3407
https://kb.bluecoat.com/index?page=content&id=KB2877

From what I could understand, their primary use is with a reverse proxy
in front of webservers with a limited set of domains behind them, but they
are also used for surrogate authentication in normal proxy deployments by
forcing proxies to accept cookies for any domain? Even the commercial
proxies advise against using surrogate credentials wherever possible. The
major disadvantage I can see is that they can't be used with wget, lynx,
elinks, java applets etc., which expect usual proxy authentication.

 My perl skills are a bit lacking in how to merge the format %SRC
 %{X-Forwarded-For} into one UUID token. There is the space between the two
 tokens, and the XFF header is likely to contain spaces internally, which the
 script as published can't handle.
 HINT: If anyone has a fix for that *please* let me know. I know it's
 possible; I stumbled on a perl trick ages back that would do it, then lost
 the script it was in :(

The following snippet should help if you just want to strip spaces from the
$token string:

my $token = "%SRC %{X-Forwarded-For}";
$token =~ s/ //;   # This removes only the first space
$token =~ s/ //g;  # This removes all the spaces in the string

If you could send in sample strings - received and final expected
result, I can help with hacking Perl code.

I have also written an auth helper based on the existing POP3 auth
helper. It authenticates against IMAP and IMAPS depending on the
arguments provided e.g.:

## IMAPS against google but return ERR if user tries to authenticate
with @gmail.com
imap_auth imaps://imap.google.com mygooglehostedmail.com

## IMAP auth against my own IMAP server
imap_auth imap://imap.mydomain.com mydomain.com
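Roughly, the decision logic behind those two arguments can be sketched as below. The function names are mine and purely illustrative; the real helper of course speaks the IMAP protocol over a (TLS) socket (a tagged "a1 LOGIN user pass" followed by checking for the tagged "a1 OK" reply) rather than matching strings locally:

```perl
#!/usr/bin/perl
# Hypothetical sketch of the two checks an imap_auth helper would make.
use strict;
use warnings;

# Reject up front if the login is not in the allowed hosted domain,
# e.g. refuse an @gmail.com login when only the hosted domain is allowed.
sub domain_allowed {
    my ($user, $domain) = @_;
    return $user =~ /\@\Q$domain\E$/i ? 1 : 0;
}

# Interpret the tagged reply line an IMAP server sends after LOGIN:
# "<tag> OK ..." means the credentials were accepted.
sub login_succeeded {
    my ($tag, $reply) = @_;
    return $reply =~ /^\Q$tag\E OK/i ? 1 : 0;
}
```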

Where should I submit that as contribution to Squid?

 Having edge proxies in the POP also enables you to set up the workaround for
 NAT which XFF was designed for:
 * The edge proxies add client (pre-NAT) IP address to XFF header, and
 forward to the central proxy.
 * The central proxy only trusts traffic from the edge proxies (eliminating
 WAN attacks).
 * The central proxy trusts *only* the edge proxies in an ACL used by
 follow_x_forwarded_for allow directive. Doing so alters Squid %SRC parameter
 to be the client the POP edge proxy received.
 This setup also allows you to encrypt the TCP links between POP edge proxies
 and central if you want, or to bypass the central proxy for specific
 requests if you need to, and/or to offload some of the access control to
 site-specific controls into the POP edge proxies.

Thanks for the detailed setup guidance. I have actually already put
the proxy in place as you have suggested and follow_x_forwarded_for is
working great as expected for the HO 

[squid-users] External ACL Auth Session DB for 100+ clients behind NAT

2012-05-21 Thread Nishant Sharma
Hi,

Greetings to all from a new user to the list.

A little background on my implementation scenario:

* There are around 60 site offices
* Each site has around 5-6 users
* Head Office has 100+ users
* Currently we are back-hauling all the traffic to HO and using squid
for access control

The obvious drawback is that site offices are not able to utilise
their full bandwidth (DSL 512kbps - 1Mbps) as HO is the bottleneck
with 4Mbps of 1:1 line. The alternative solution that we are working
on is to:

1. Configure squid on a hosted server
2. Ask all the users to configure the hosted proxy
3. Squid will be configured for Authentication
4. Authentication has to be done against IMAPS server

Now, the problem is, we can not use BASIC auth over public Internet
and if we use DIGEST auth, we can not authenticate against IMAP. I had
a look at external_acl_type authentication mechanism discussed in the
list and have configured something like:

external_acl_type hosted_auth ttl=0 %SRC  /etc/squid/auth.pl
acl loggedin external hosted_auth
deny_info https://hostedserver/auth.html loggedin
http_access deny !loggedin
http_access allow all

This auth.pl will check against a session DB (probably MySql) if user
is already authenticated or not.  While the HTML file displays a login
form over HTTPS and sends request to a CGI script which authenticates
against IMAPS and populates the DB with session information. I
understand that I can not use cookies for authentication as browser
will not include cookie set by our authentication page for request to
other domains.
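As a rough illustration of the check auth.pl would perform (a plain hash stands in for the MySQL session table here, and the names and TTL are mine, not from a real implementation):

```perl
#!/usr/bin/perl
# Hypothetical sketch of the session check in auth.pl. A real deployment
# would query MySQL via DBI instead of this in-memory hash.
use strict;
use warnings;

my $TTL = 3600;    # session lifetime in seconds (illustrative)
my %sessions;      # key => epoch time of last successful login

# Called by the CGI script after a successful IMAPS authentication.
sub login {
    my ($key) = @_;
    $sessions{$key} = time();
}

# Called by the helper for every request: is there a live session?
sub is_logged_in {
    my ($key) = @_;
    my $t = $sessions{$key};
    return 0 unless defined $t;
    return (time() - $t) <= $TTL ? 1 : 0;
}

# The helper loop would then read one key per line and answer:
#   print is_logged_in($key) ? "OK\n" : "ERR\n";
```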

I went through Amos' ext_sql_session_acl.pl, which I am planning to use
in place of auth.pl. But here's another catch: since there is more than
one user behind the NAT, what parameter like %SRC could be used to
identify a user uniquely in the session database, one that is
persistently present in every request to Squid?

I see a mention of the UUID tokens in the script as well, but was not
able to understand how to use them.

Any pointers would be of great help.

Thanks & regards,
Nishant


Re: [squid-users] External ACL Auth Session DB for 100+ clients behind NAT

2012-05-21 Thread Amos Jeffries

On 22.05.2012 00:58, Nishant Sharma wrote:

Hi,

Greetings to all from a new user to the list.

A little background on my implementation scenario:

* There are around 60 site offices
* Each site has around 5-6 users
* Head Office has 100+ users
* Currently we are back-hauling all the traffic to HO and using squid
for access control

The obvious drawback is that site offices are not able to utilise
their full bandwidth (DSL 512kbps - 1Mbps) as HO is the bottleneck
with 4Mbps of 1:1 line. The alternative solution that we are working
on is to:

1. Configure squid on a hosted server
2. Ask all the users to configure the hosted proxy
3. Squid will be configured for Authentication
4. Authentication has to be done against IMAPS server

Now, the problem is, we cannot use BASIC auth over the public Internet,
and if we use DIGEST auth, we cannot authenticate against IMAP. I had
a look at the external_acl_type authentication mechanism discussed on the
list and have configured something like:

external_acl_type hosted_auth ttl=0 %SRC  /etc/squid/auth.pl
acl loggedin external hosted_auth
deny_info https://hostedserver/auth.html loggedin
http_access deny !loggedin
http_access allow all

This auth.pl will check against a session DB (probably MySql) if user
is already authenticated or not.


Please be aware there is no authentication in this setup, despite the 
login on your portal page.


What you have is session-based *authorization*.

The difference is that in real auth the client has to be who they 
claim. With sessions, any attacker who can copy or generate a client's 
session details can get access through the proxy; the client details are 
checked, but not validated beyond the request where the session was created.


It is a razor-thin line, but critical to be aware of, since NAT erases 
and plays with the %SRC key which you are using to identify clients. 1) 
NAT hides unwanted visitors on the POP networks. 2) The XFF workaround 
to undo the NAT is header-based, with risks of header forgery. So NAT 
introduces multiple edge cases where attackers can leak through and hijack 
sessions.




 While the HTML file displays a login
form over HTTPS and sends request to a CGI script which authenticates
against IMAPS and populates the DB with session information. I
understand that I can not use cookies for authentication as browser
will not include cookie set by our authentication page for request to
other domains.


Correct.



I went through Amos' ext_sql_session_acl.pl, which I am planning to use
in place of auth.pl. But here's another catch: since there is more than
one user behind the NAT, what parameter like %SRC could be used to
identify a user uniquely in the session database, one that is
persistently present in every request to Squid?


I suggest the %{X-Forwarded-For} as well. In its entirety the XFF 
header *should* contain the whole path from the client to your 
proxy. It is unsafe to trust every entry individually, but the whole 
thing can be hashed to a unique value for each path to an end-client.
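One hedged way to do that hashing in Perl is with the core Digest::MD5 module; the path_uuid name below is illustrative, not from the published script:

```perl
#!/usr/bin/perl
# Hypothetical sketch: collapse "%SRC %{X-Forwarded-For}" into one
# space-free token by hashing the whole client->proxy path.
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);    # core Perl module

sub path_uuid {
    my ($src, $xff) = @_;
    $xff = '' unless defined $xff;
    # The full path hashes to one fixed-length hex token, so internal
    # spaces and commas in the XFF header no longer matter.
    return md5_hex("$src $xff");
}
```

The same (%SRC, XFF) pair always hashes to the same 32-character token, so it can serve directly as the session key.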




I see a mention of the UUID tokens in the script as well, but was not
able to understand how to use them.


The UUID is the %SRC parameter passed in.

As I noted with publication, the script is not perfect. My perl skills 
are a bit lacking in how to merge the format %SRC %{X-Forwarded-For} 
into one UUID token. There is the space between the two tokens, and the 
XFF header is likely to contain spaces internally, which the script as 
published can't handle.
HINT: If anyone has a fix for that *please* let me know. I know it's 
possible; I stumbled on a perl trick ages back that would do it, then 
lost the script it was in :(



The script is designed for Captive Portal use, where the clients are 
connecting directly to the proxy. To use it in a hierarchy I recommend 
having a local proxy at each POP which forwards to your central proxy. 
The edge proxies set XFF header for your central proxy to use.



Having edge proxies in the POP also enables you to set up the workaround 
for NAT which XFF was designed for:


* The edge proxies add client (pre-NAT) IP address to XFF header, and 
forward to the central proxy.
* The central proxy only trusts traffic from the edge proxies 
(eliminating WAN attacks).
* The central proxy trusts *only* the edge proxies in an ACL used by the 
follow_x_forwarded_for allow directive. Doing so alters Squid's %SRC 
parameter to be the client address the POP edge proxy received.


This setup also allows you to encrypt the TCP links between POP edge 
proxies and central if you want, to bypass the central proxy for 
specific requests if you need to, and/or to offload some of the access 
control into site-specific controls in the POP edge proxies.
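On the central proxy that arrangement might look roughly like this (the addresses and ACL names are illustrative, not from this thread):

```conf
# central squid.conf -- hypothetical sketch
acl edge_proxies src 203.0.113.10 203.0.113.11   # POP edge proxy addresses
http_access deny !edge_proxies                   # only the edges may connect
follow_x_forwarded_for allow edge_proxies        # trust XFF only from edges
follow_x_forwarded_for deny all
```

With that in place, %SRC becomes the pre-NAT client address the edge proxy recorded in the XFF header.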



Depending on how complex and specific your access control is, it may be 
worth pushing much of it into the POPs and having database links back to 
HQ for the smaller traffic load of details checking, rather than the 
full HTTP workload all going through HQ.



Amos