[squid-users] Strange problem with squid

2014-02-03 Thread P K
Hi,

I've got a mysterious problem with Squid as a reverse proxy and I would
be grateful if someone could help me out. Basically, I use an external
acl to validate the session id when someone accesses my site that is
reverse proxied.

..snip..

external_acl_type ext_session_page ttl=180 negative_ttl=0 %SRC
%{Cookie:;MYSESSIONID} /usr/bin/php /path/to/myvalidator.php
acl user_session external ext_session_page
http_access deny !user_session
deny_info https://logon.domain.com/logon.php?url=%u user_session

..snip..

My logon page logon.php creates a new session id and stores a cookie.
When a user has successfully logged on, I redirect to his chosen site.
Squid then validates the cookie using my external acl (OK or ERR).
This works fine 99.5% of the time.
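
For context, the helper side of this follows squid's external ACL
protocol: squid writes one "%SRC %{Cookie:;MYSESSIONID}" line per
request (with "-" standing in for a missing value) and expects an OK
or ERR reply per line. A stripped-down sketch of what a validator like
myvalidator.php might look like - the actual session lookup here is a
hypothetical placeholder:

<?php
// Hypothetical lookup - replace with the real check against the
// session store that logon.php writes to.
function isValidSession($src, $sessionId) {
    return $sessionId !== '' && $sessionId !== '-';  // placeholder only
}

while (($line = fgets(STDIN)) !== false) {
    // Without concurrency=N, each line is "%SRC %{Cookie:;MYSESSIONID}".
    $fields = explode(' ', trim($line), 2);
    $src    = isset($fields[0]) ? $fields[0] : '';
    $cookie = isset($fields[1]) ? $fields[1] : '-';

    echo isValidSession($src, $cookie) ? "OK\n" : "ERR\n";
    fflush(STDOUT);  // squid expects one unbuffered reply per line
}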

But sometimes squid gets confused and sends the old session id (the
one issued before the current session id set via the deny_info logon
page) to my external acl, which is really weird. As a result, the
external acl keeps returning ERR. Then it sorts itself out, or
restarting squid sorts it out. What could be causing this? Is this a
bug in squid? I've also noticed that it seems to happen at night,
around 9 PM.

The logic is simple:

1. User visits the reverse proxied site (config not shown).
2. Squid checks the external acl to see if the cookie is valid.
3. If OK it lets it go to the site.
4. If ERR, logon.php is presented, which creates a new session id and
stores a cookie.
5. User logs on.
6. If successful, logon.php redirects to the reverse proxied site. (At
this point the external acl will be checked again, i.e. step 2, and
will reply OK as the cookie is valid; a sketch of this follows the
list.)
7. If failed, logon.php does not redirect, i.e. the user stays on the
deny page.
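
As promised above, a sketch of what logon.php might do for steps 4-6.
The credential check and session store are hypothetical placeholders,
and the cookie is scoped to the parent domain so squid sees it on
every reverse proxied vhost:

<?php
// Hypothetical helpers - stand-ins for the real logic.
function checkCredentials($user, $pass) {
    return false;  // placeholder: the real credential check goes here
}
function recordSession($id) {
    // placeholder: write $id to the store the validator helper reads
}

session_name('MYSESSIONID');
// Step 4: scope the cookie to .domain.com so x.domain.com,
// y.domain.com etc. all present it to squid.
session_set_cookie_params(0, '/', '.domain.com', true);
session_start();

if ($_SERVER['REQUEST_METHOD'] === 'POST'
    && checkCredentials(
           isset($_POST['user']) ? $_POST['user'] : '',
           isset($_POST['pass']) ? $_POST['pass'] : '')) {
    session_regenerate_id(true);   // fresh session id after logon
    recordSession(session_id());
    // Step 6: redirect back to the ?url=%u target. A real script
    // should restrict this to *.domain.com to avoid an open redirect.
    header('Location: ' . $_GET['url']);
    exit;
}
// Steps 5/7: render the logon form; on failure we simply stay here.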


Thanks


Re: [squid-users] Strange problem with squid

2014-02-03 Thread P K
Hi Amos,


squid -v

Squid Cache: Version 3.4.2
configure options:  '--enable-ssl' '--prefix=/usr/local/squid'


I store the cookie on the parent domain (say domain.com). The reverse
proxied sites are x.domain.com, y.domain.com, etc., so the cookie is
always made available by the browser. It works 99.5% of the time but
starts to play up at night around 9 PM. I don't know if it provides
any clues, but it used to happen in the morning around 8 AM; then I
changed the TTL values and the problem switched to night time.

8 AM problem (negative ttl defaults to ttl):
ttl=3


9 PM problem:  (current config)
ttl=180 negative_ttl=0


Thanks

On 3 February 2014 09:35, Amos Jeffries squ...@treenet.co.nz wrote:
 On 3/02/2014 10:00 p.m., P K wrote:
 Hi,

 I've got a mysterious problem with Squid as a reverse proxy and I would
 be grateful if someone could help me out. Basically, I use an external
 acl to validate the session id when someone accesses my site that is
 reverse proxied.

 ..snip..

 external_acl_type ext_session_page ttl=180 negative_ttl=0 %SRC
 %{Cookie:;MYSESSIONID} /usr/bin/php /path/to/myvalidator.php
 acl user_session external ext_session_page
 http_access deny !user_session
 deny_info https://logon.domain.com/logon.php?url=%u user_session

 ..snip..

 My logon page logon.php creates a new session id and stores a cookie.
 When a user has successfully logged on, I redirect to his chosen site.
 Squid then validates the cookie using my external acl (OK or ERR).
 This works fine 99.5% of the time.

 But sometimes squid gets confused and sends the old session id (the
 one issued before the current session id set via the deny_info logon
 page) to my external acl, which is really weird. As a result, the
 external acl keeps returning ERR. Then it sorts itself out, or
 restarting squid sorts it out. What could be causing this? Is this a
 bug in squid? I've also noticed that it seems to happen at night,
 around 9 PM.

 The logic is simple:

 1. User visits the reverse proxied site (config not shown).
 2. Squid checks the external acl to see if the cookie is valid.
 3. If OK it lets it go to the site.
 4. If ERR, logon.php is presented, which creates a new session id and
 stores a cookie.
 5. User logs on.
 6. If successful, logon.php redirects to the reverse proxied site. (At
 this point the external acl will be checked again, i.e. step 2, and
 will reply OK as the cookie is valid.)
 7. If failed, logon.php does not redirect, i.e. the user stays on the
 deny page.


 What is the output of squid -v please?


 How are you fooling the browser into sending the same Cookie for all
 requests no matter what domain is being fetched?


 Squid sends the ACL helper the Cookie header sub-string starting with
 MYSESSIONID= in the request it receives. I suspect the browser is
 sending stale Cookies.


 Amos


Re: [squid-users] Strange problem with squid

2014-02-03 Thread P K
Forgot to mention - I rotate squid logs daily using -k rotate, which
is not related, as the problem happens even if I don't rotate. I've
noticed that squid -k rotate causes the 5 helper processes to be
reduced to 1 (as seen in ps -ef). I suspect this is a known issue.

Just thought I'd mention it, although it's not related to my mysterious problem.

On 3 February 2014 10:21, P K getp...@gmail.com wrote:
 Hi Amos,


 squid -v

 Squid Cache: Version 3.4.2
 configure options:  '--enable-ssl' '--prefix=/usr/local/squid'


 I store the cookie on the parent domain (say domain.com). The reverse
 proxied sites are x.domain.com, y.domain.com, etc., so the cookie is
 always made available by the browser. It works 99.5% of the time but
 starts to play up at night around 9 PM. I don't know if it provides
 any clues, but it used to happen in the morning around 8 AM; then I
 changed the TTL values and the problem switched to night time.

 8 AM problem (negative ttl defaults to ttl):
 ttl=3


 9 PM problem:  (current config)
 ttl=180 negative_ttl=0


 Thanks

 On 3 February 2014 09:35, Amos Jeffries squ...@treenet.co.nz wrote:
 On 3/02/2014 10:00 p.m., P K wrote:
 Hi,

 I've got a mysterious problem with Squid as a reverse proxy and I would
 be grateful if someone could help me out. Basically, I use an external
 acl to validate the session id when someone accesses my site that is
 reverse proxied.

 ..snip..

 external_acl_type ext_session_page ttl=180 negative_ttl=0 %SRC
 %{Cookie:;MYSESSIONID} /usr/bin/php /path/to/myvalidator.php
 acl user_session external ext_session_page
 http_access deny !user_session
 deny_info https://logon.domain.com/logon.php?url=%u user_session

 ..snip..

 My logon page logon.php creates a new session id and stores a cookie.
 When a user has successfully logged on, I redirect to his chosen site.
 Squid then validates the cookie using my external acl (OK or ERR).
 This works fine 99.5% of the time.

 But sometimes squid gets confused and sends the old session id (the
 one issued before the current session id set via the deny_info logon
 page) to my external acl, which is really weird. As a result, the
 external acl keeps returning ERR. Then it sorts itself out, or
 restarting squid sorts it out. What could be causing this? Is this a
 bug in squid? I've also noticed that it seems to happen at night,
 around 9 PM.

 The logic is simple:

 1. User visits the reverse proxied site (config not shown).
 2. Squid checks the external acl to see if the cookie is valid.
 3. If OK it lets it go to the site.
 4. If ERR, logon.php is presented, which creates a new session id and
 stores a cookie.
 5. User logs on.
 6. If successful, logon.php redirects to the reverse proxied site. (At
 this point the external acl will be checked again, i.e. step 2, and
 will reply OK as the cookie is valid.)
 7. If failed, logon.php does not redirect, i.e. the user stays on the
 deny page.


 What is the output of squid -v please?


 How are you fooling the browser into sending the same Cookie for all
 requests no matter what domain is being fetched?


 Squid sends the ACL helper the Cookie header sub-string starting with
 MYSESSIONID= in the request it receives. I suspect the browser is
 sending stale Cookies.


 Amos


Re: [squid-users] Squid accel only after logon

2013-11-29 Thread P K
Hi Amos,

Thanks a lot for your reply. It gave me clues on how to go about
finding a solution. I wasn't confused as such between proxy mode and
reverse proxy mode; basically, I had access to squid and an apache
server to build an authentication mechanism for my target websites.

I used the splash page mechanism but wrote my own external acl type in
PHP. I used deny_info to present a PHP logon page, which stored the
session info (user, last accessed, etc.) in a database table and set a
cookie on the client. I then configured squid to pass the Cookie
header field (%{Cookie:;PHPSESSID}) to my external acl PHP script,
which validated the session, refreshing or destroying it if it was no
longer valid.

It works great.
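
For the archives, the validation half can be sketched roughly as
below. The table and column names, the DSN, and the hard-coded 3-hour
lifetime are assumptions based on this thread, not my exact script:

<?php
// External ACL validator sketch: squid passes "%SRC %{Cookie:;PHPSESSID}".
$pdo = new PDO('mysql:host=localhost;dbname=portal', 'user', 'pass');

while (($line = fgets(STDIN)) !== false) {
    list($src, $sid) = array_pad(explode(' ', trim($line), 2), 2, '-');

    $stmt = $pdo->prepare('SELECT last_accessed FROM session WHERE id = ?');
    $stmt->execute(array($sid));
    $last = $stmt->fetchColumn();

    if ($last !== false && time() - strtotime($last) < 3 * 3600) {
        // Seen within the session lifetime: refresh the timestamp and allow.
        $pdo->prepare('UPDATE session SET last_accessed = NOW() WHERE id = ?')
            ->execute(array($sid));
        echo "OK\n";
    } else {
        // Unknown or expired: destroy the record and deny, which sends
        // the user back to the deny_info logon page.
        $pdo->prepare('DELETE FROM session WHERE id = ?')->execute(array($sid));
        echo "ERR\n";
    }
    fflush(STDOUT);
}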



On 27 November 2013 11:19, Amos Jeffries squ...@treenet.co.nz wrote:
 On 27/11/2013 8:58 p.m., P K wrote:
 Hi,

 I want to use Squid as a reverse proxy (accel) to my main website but
 only if they've authenticated - something like a captive portal (not
 sure if that's the right phrase). By authenticated, I don't mean
 basic or digest etc. I want to provide my own logon page (say php) - I
 can host a separate authentication website for that.

 How do I go about achieving that? Splash page functionality is
 something that looks promising in squid but I can't get my head around
 how to force squid to reverse proxy my site only after users have
 authenticated on my php splash page. Also I need to terminate their
 session after 3 hours.


 Okay. I think you misunderstand what a reverse proxy does and how it
 operates in relation to the main web server.

 A reverse proxy is simply a gateway to the main server which is used to
 offload serving of static files, do server-side caching, route between
 different backends, apply certain types of access control, and reduce
 impact from DoS attacks.



 It is better to simply pass all traffic through the proxy on its way to
 the main web server.

 The type of authentication you are describing is called
 application-layer authentication and exists outside of HTTP and thus
 outside of the normal capabilities of an HTTP reverse proxy. It can be
 done but with great complexity and difficulty.


 Once again it is better to leave the authentication/non-authentication
 decisions to the main web server and have it send back appropriate HTTP
 headers to inform the proxy how to handle the different responses.



 http://wiki.squid-cache.org/ConfigExamples/Portal/Splash


 No, your requirements do not match the limits or capabilities of a
 captive portal. Captive portal uses an *implicit* session. Your system
 uses an *explicit* session.

 Please also note that captive portal *does not* do authentication in any
 reliable way. The splash page can have application-layer authentication
 built in, BUT what the HTTP layer is doing is assuming / guessing that
 any request with a similar fingerprint as the authenticated one is
 authorized to access the resource.
  Being an assumption, this authorization has a relatively high rate of
 failure and vulnerability to a large number of attacks.

 For example, the captive portal works mostly okay in situations where
 the portal device is itself allocating the IP address or has access to
 the client's MAC address information.
  Doing it on a reverse proxy will immediately have trouble from NAT,
 relay routers, and ISP-based proxies - all of which obfuscate the IP
 address details.


 I can do something like this:

 #Show auth.php
 external_acl_type splash_page ttl=60 concurrency=100 %SRC
 /usr/local/sbin/squid/ext_session_acl -t 7200 -b
 /var/lib/squid/session.db

 acl existing_users external splash_page

 http_access deny !existing_users

 # Deny page to display
 deny_info 511:https://myauthserver/auth.php?url=%s existing_users
 #end authphp

 #reverse proxy

 https_port 443 cert=/path/to/x_domain_com.pem
 key=/path/to/x_domain_com.pem accel

 cache_peer 1.1.1.1 parent 443 0 no-query originserver ssl
 sslflags=DONT_VERIFY_PEER name=x_domain_com
 acl sites_server_x_domain_com dstdomain x.domain.com
 cache_peer_access x_domain_com allow sites_server_x_domain_com
 http_access allow sites_server_x_domain_com
 # end reverse proxy


 But how is this going to work? I can present a username/password on my
 auth.php and present a submit button to validate. But how do I tell
 squid that it is OK to serve x.domain.com?

 The external_acl_type helper is recording past visits and needs to
 determine its response from whatever database records your login page
 created to record the login.


 Also is there a better way of achieving my purpose?

 Yes. Set up the proxy as a basic reverse proxy and leave the
 application-layer authentication decisions to the main web server.

 Application layer auth is usually done with session Cookies on the main
 server. You can check for the Cookie header in the proxy and bounce with
 that same deny_info redirect if you like, to help reduce the main server
 load. It won't be perfect due to other uses of Cookie.

[squid-users] Squid accel only after logon

2013-11-26 Thread P K
Hi,

I want to use Squid as a reverse proxy (accel) to my main website but
only if they've authenticated - something like a captive portal (not
sure if that's the right phrase). By authenticated, I don't mean
basic or digest etc. I want to provide my own logon page (say php) - I
can host a separate authentication website for that.

How do I go about achieving that? Splash page functionality is
something that looks promising in squid but I can't get my head around
how to force squid to reverse proxy my site only after users have
authenticated on my php splash page. Also I need to terminate their
session after 3 hours.

http://wiki.squid-cache.org/ConfigExamples/Portal/Splash

I can do something like this:

#Show auth.php
external_acl_type splash_page ttl=60 concurrency=100 %SRC
/usr/local/sbin/squid/ext_session_acl -t 7200 -b
/var/lib/squid/session.db

acl existing_users external splash_page

http_access deny !existing_users

# Deny page to display
deny_info 511:https://myauthserver/auth.php?url=%s existing_users
#end authphp

#reverse proxy

https_port 443 cert=/path/to/x_domain_com.pem
key=/path/to/x_domain_com.pem accel

cache_peer 1.1.1.1 parent 443 0 no-query originserver ssl
sslflags=DONT_VERIFY_PEER name=x_domain_com
acl sites_server_x_domain_com dstdomain x.domain.com
cache_peer_access x_domain_com allow sites_server_x_domain_com
http_access allow sites_server_x_domain_com
# end reverse proxy


But how is this going to work? I can present a username/password on my
auth.php and present a submit button to validate. But how do I tell
squid that it is OK to serve x.domain.com?

Also is there a better way of achieving my purpose?

Thanks.

Please help.


Re: [squid-users] Reverse Proxy multiple sites with basic auth

2013-11-23 Thread P K
I appreciate your help, Amos. Please bear with me.

I can solve the basic auth problem by bringing https://x.domain.com
and https://y.domain.com under one roof, i.e.
https://common.domain.com/x and https://common.domain.com/y. But
that's a much bigger piece of work for our organization, so I'm trying
to avoid that.


 Answer this then: How is the browser to know they are the same when the
 domain name presented tells it that a *different server* is being contacted?

 https://svn.tools.ietf.org/svn/wg/httpbis/draft-ietf-httpbis/latest/p7-auth.html#rfc.section.6.2

 The browser will not broadcast your users' credentials to every website
 they connect to. They will instead send one request without credentials on
 first contact to a domain and only send what it believes to be the
 correct credentials after the challenge comes back telling it what
 domain+realm needs login.

The browser does not need to know. It is squid that asks the
browser to supply authentication creds first, right? I'm thinking of
something like this:

1. Browser requests https://x.domain.com
2. Squid checks if this browser has authenticated on y.domain.com or
x.domain.com in the past 4 hours. If not, it sends a 401 Unauthorized
asking for credentials for x.domain.com. The browser will keep sending
credentials in the Authorization header from now on, and squid caches
the auth creds.
3. Browser requests https://y.domain.com. Squid already has creds for
x.domain.com cached, so it lets the browser in.
4. 4 hours have passed and the creds have expired. Browser requests
https://x.domain.com. Squid sends the challenge again.

Is this possible with external acls or something? I may be talking
complete nonsense here, so please bear with me.
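
For concreteness, the checking half of step 2 could be prototyped as
an external acl helper along these lines. The store path and format
are made up, and something else would still have to record successful
logons into the store - which may be exactly the hard part:

<?php
// Prototype only: answer OK if this client address was recorded as
// authenticated within the last 4 hours. It tracks the address, not
// the credentials, so NAT and shared proxies would break it.
$store = '/var/lib/squid/authed_clients.db';  // serialized array(src => time)

while (($line = fgets(STDIN)) !== false) {
    $src  = trim($line);  // external_acl_type configured with %SRC
    $seen = is_file($store)
          ? unserialize(file_get_contents($store))
          : array();
    if (!is_array($seen)) {
        $seen = array();
    }

    if (isset($seen[$src]) && time() - $seen[$src] < 4 * 3600) {
        echo "OK\n";   // step 3: recently authenticated, let it through
    } else {
        echo "ERR\n";  // step 2: deny, triggering the 401 challenge
    }
    fflush(STDOUT);
}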

Thanks for your help.

On 23 November 2013 05:27, Amos Jeffries squ...@treenet.co.nz wrote:
 On 23/11/2013 12:57 p.m., P K wrote:
 Thanks Amos.

 That causes a big problem for me if basic authentication cannot be
 shared across domains. Is there any way I can configure squid so that
 the authentication challenge is sent for one or the other but not both?
 E.g. if a user is authenticated (basic) on siteA, then don't ask for
 authentication on siteB. Is this possible with squid in my
 configuration?


 Answer this then: How is the browser to know they are the same when the
 domain name presented tells it that a *different server* is being contacted?

 https://svn.tools.ietf.org/svn/wg/httpbis/draft-ietf-httpbis/latest/p7-auth.html#rfc.section.6.2

 The browser will not broadcast your users' credentials to every website
 they connect to. They will instead send one request without credentials on
 first contact to a domain and only send what it believes to be the
 correct credentials after the challenge comes back telling it what
 domain+realm needs login.



 For the other problem about authentication being asked twice - No the
 target server does not need any basic authentication.

 Then WTF are you bothering with it? See below.

 It is running
 tomcat. Squid causes the browser to prompt for authentication when I type
 https://x.domain.com. Then the url changes to include
 /something;jsession=.. and then I get prompted again.



 !! your users have logged into no fewer than three different security
 systems by the time that paragraph's description is over:
 * HTTP authentication
 * TLS
 * Java Cookie session

 THE PROBLEM you have is that all that bouncing is crossing between
 different zones of security. If you have to bounce people around at
 all, do it without requiring authentication at the point of first
 contact and only on the final service itself.

 For example:
 1) http://x.domain.com bounces without auth to https://x.domain.com
 2) http://y.domain.com bounces without auth to https://x.domain.com

 3) https://x.domain.com does the type of auth the server requires and
 keeps the user there under its protection while their browsing session happens.


 Squid can even do the (1) and (2) redirects for you to save load on the
 origin server.

 Also, please read this
 https://randomcoder.org/articles/jsessionid-considered-harmful

 Amos


[squid-users] Reverse Proxy multiple sites with basic auth

2013-11-22 Thread P K
Hi,

I can't get the reverse proxy to work properly. Basically I want squid
to serve as a reverse proxy to two of my domains - x.example.com and
y.example.com. I also want squid to perform basic authentication
against my own radius server which should be common for both the
sites. I mean I want users to authenticate once and it should work for
both x.example.com and y.example.com. Here's my config:


auth_param basic program /path/to/basic_radius_auth -f
/path/to/squid_rad_auth.conf
auth_param basic children 5
auth_param basic realm PRIVATE
auth_param basic credentialsttl 4 hours
auth_param basic casesensitive on


https_port 443 cert=/path/to/x_domain_com.pem
key=/path/to/x_domain_com.pem accel

cache_peer 1.1.1.X parent 443 0 no-query originserver ssl
sslflags=DONT_VERIFY_PEER name=x_domain_com
cache_peer 1.1.1.Y parent 443 0 no-query originserver ssl
sslflags=DONT_VERIFY_PEER name=y_domain_com

acl sites_server_x_domain_com dstdomain x.domain.com
acl sites_server_y_domain_com dstdomain y.domain.com
acl radius-auth proxy_auth REQUIRED


cache_peer_access x_domain_com allow sites_server_x_domain_com
cache_peer_access y_domain_com allow sites_server_y_domain_com
cache_peer_access x_domain_com deny all
cache_peer_access y_domain_com deny all


http_access allow radius-auth
http_access allow sites_server_x_domain_com
http_access allow sites_server_y_domain_com


...snip ...



With this config:

1. I launch https://x.domain.com on a browser. It prompts for
user/pass. I enter it and then it prompts again. I enter it and then
it lets me in. Not sure why this is happening. Is it because the
target site has IPTables NAT from 443 to 6443? If so how can I get
around that?

2. I launch https://x.domain.com and authenticate. It lets me in. Now
I change the URL on the same browser to https://y.domain.com. It asks
for authentication again. Why? How can I get around this?

PS: I know it is not possible to virtual host SSL and I need a
wildcard cert. But I don't care if I get a certificate warning with
y.domain.com.

Please could someone have a look and tell me what I'm doing wrong?

Thanks.


Re: [squid-users] Reverse Proxy multiple sites with basic auth

2013-11-22 Thread P K
Thanks Amos.

That causes a big problem for me if basic authentication cannot be
shared across domains. Is there any way I can configure squid so that
the authentication challenge is sent for one or the other but not both?
E.g. if a user is authenticated (basic) on siteA, then don't ask for
authentication on siteB. Is this possible with squid in my
configuration?

For the other problem about authentication being asked twice - No the
target server does not need any basic authentication. It is running
tomcat. Squid causes the browser to prompt for authentication when I type
https://x.domain.com. Then the url changes to include
/something;jsession=.. and then I get prompted again.

On 22 November 2013 11:53, Amos Jeffries squ...@treenet.co.nz wrote:
 On 22/11/2013 11:16 p.m., P K wrote:
 Hi,

 I can't get the reverse proxy to work properly. Basically I want squid
 to serve as a reverse proxy to two of my domains - x.example.com and
 y.example.com. I also want squid to perform basic authentication
 against my own radius server which should be common for both the
 sites. I mean I want users to authenticate once and it should work for
 both x.example.com and y.example.com. Here's my config:


 auth_param basic program /path/to/basic_radius_auth -f
 /path/to/squid_rad_auth.conf
 auth_param basic children 5
 auth_param basic realm PRIVATE
 auth_param basic credentialsttl 4 hours
 auth_param basic casesensitive on


 https_port 443 cert=/path/to/x_domain_com.pem
 key=/path/to/x_domain_com.pem accel

 cache_peer 1.1.1.X parent 443 0 no-query originserver ssl
 sslflags=DONT_VERIFY_PEER name=x_domain_com
 cache_peer 1.1.1.Y parent 443 0 no-query originserver ssl
 sslflags=DONT_VERIFY_PEER name=y_domain_com

 acl sites_server_x_domain_com dstdomain x.domain.com
 acl sites_server_y_domain_com dstdomain y.domain.com
 acl radius-auth proxy_auth REQUIRED


 cache_peer_access x_domain_com allow sites_server_x_domain_com
 cache_peer_access y_domain_com allow sites_server_y_domain_com
 cache_peer_access x_domain_com deny all
 cache_peer_access y_domain_com deny all


 http_access allow radius-auth
 http_access allow sites_server_x_domain_com
 http_access allow sites_server_y_domain_com


 ...snip ...



 With this config:

 1. I launch https://x.domain.com on a browser. It prompts for
 user/pass. I enter it and then it prompts again. I enter it and then
 it lets me in. Not sure why this is happening. Is it because the
 target site has IPTables NAT from 443 to 6443? If so how can I get
 around that?

 Does the web server require the auth credentials as well?
  If so try adding login=PASSTHRU to the cache_peer lines. That will
 send the users' credentials to it. Otherwise login= can be used with an
 explicit user:password pair to be sent to the peer server.



 2. I launch https://x.domain.com and authenticate. It lets me in. Now
 I change the URL on the same browser to https://y.domain.com. It asks
 for authentication again. Why? How can I get around this?

 Why? Because they are different domains. And no, there is no way to get
 around that. It is a requirement of web security that login credentials
 are scoped by domain and are not permitted to be delivered to any other.

 There is no reason to expect that any two differently named domains use the
 same authentication backend even if they are contacted through the same
 proxy or even hosted on the same IP:port.

 PS. We already have requests from people wanting different backends on a
 *path prefix*. Yuck.


 PS: I know it is not possible to virtual host SSL and I need a
 wildcard cert. But I don't care if I get a certificate warning with
 y.domain.com.

 The latest Squid versions have gained the infrastructure to generate
 SSL certificates. You may want to try using that.


 Please could someone have a look and tell me what I'm doing wrong?


 Firefox and Chrome are getting rather pedantic about some of those errors
 nowadays, to the point where user override is no longer possible on
 certain warnings. They just refuse to connect to the server.

 Amos