RE: [squid-users] 24h trusted IP

2011-08-10 Thread David Parks
I have a similar but different requirement in which we need to be able to deny 
access to a user at any time. 

Your challenge is going to be that squid caches the user's login (as does the 
browser), and there's no good way to expire a user's basic/digest auth 
credentials in squid - the user must close their browser for the credentials to 
expire (or perhaps there's some timeout period, but I'm not familiar with it as 
it wouldn't affect my situation).

My solution to the problem is an authentication helper paired with a url 
re-write helper. The authentication helper handles the initial 
authentication, and then passes some information over to the url redirect 
helper to indicate that the user did authenticate properly (I just use a simple 
named pipe for this, and a couple of shell scripts for the helpers).

The URL re-write helper gets called for each URL, so it needs to be fast. As 
such I keep all authenticated users in memory in a hash table. For each URL the 
url re-write helper checks the authenticated username against its hash table. 
If the user is blocked it will re-direct the user to a notification page.
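
To make that concrete, here is a standalone bash sketch of the blocked-user
check (my real setup keeps the table in a long-running app behind named pipes;
the file path and notification URL below are made-up stand-ins):

#!/bin/bash
# Sketch of a url_rewrite helper that redirects blocked users.
# Squid sends one request per line: URL client_ip/fqdn username method ...
BLOCKED_FILE=/tmp/blocked_users        # assumed: one blocked username per line
NOTIFY_URL=http://127.0.0.1/blocked    # assumed notification page
while read -r url ipfqdn user rest; do
    if grep -qx "$user" "$BLOCKED_FILE" 2>/dev/null; then
        echo "302:$NOTIFY_URL"         # ask squid to redirect this request
    else
        echo ""                        # empty reply = leave the URL unchanged
    fi
done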

Here's where our implementations deviate a bit. For me, the notification page 
provides the user a way to unblock their account and my script polls a 
webservice for this status and updates appropriately. You will have to figure 
some way of re-authenticating the user at this point. Keep in mind that you 
can't easily wipe the basic/digest authentication credentials in squid. 

Off the top of my head I would suggest providing a web page that handles the 
re-authentication. That web page would then need to communicate the username 
back to the authentication helper so subsequent pages succeed.

Since you have to do this extra web-based authentication step at the 24h mark, 
you might even skip the basic/digest authentication and just perform all 
authentication through this web based approach using only a url-rewrite helper 
(if the user isn't authenticated it forwards them to the website for 
authentication). Integration with single sign on might be a thought in your 
mind at this point too.
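
One more squid-native building block worth a look for the original 24h ask is
the external ACL interface, which caches each helper answer for a configurable
TTL. A rough squid.conf sketch (helper path illustrative; the helper answers
OK/ERR per source IP):

external_acl_type trusted_ip ttl=86400 %SRC /usr/local/bin/check_trusted_ip
acl trusted external trusted_ip
http_access allow trusted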

Anyway, that should be some food for thought for you.

Good luck,
David

-Original Message-
From: alexus [mailto:ale...@gmail.com] 
Sent: Wednesday, August 10, 2011 9:56 AM
To: squid-users@squid-cache.org
Subject: [squid-users] 24h trusted IP

I need squid to do the following:

1) user authenticates against squid
2) add the IP to a trusted list for 24h, so it will not prompt for
userid/password until the 24h has expired


--
http://alexus.org/



RE: [squid-users] Authentication infinite loop

2011-08-10 Thread David Parks
I just verified that 3.2.0.10 exhibits this digest authentication problem, and 
I've updated the bug report you (Amos) referenced accordingly.

I also verified that 3.1.14 does *NOT* have this problem (and noted it in the 
same bug report).

Thanks for the response, that's good enough for me for now.

Dave

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, July 26, 2011 3:41 PM
To: squid-users@squid-cache.org
Subject: RE: [squid-users] Authentication infinite loop

 On Tue, 26 Jul 2011 15:05:22 -0700, David Parks wrote:
 After some more testing I'm finding more cause for concern here. I'm 
 using 3.2.0.9 in this test.

 Please use 3.2.0.10. .9 has some big issues.


 Digest authentication is configured. I am now just using a simple 
 auth helper script which sits in a loop and outputs ERR (as per the 
 docs, this output indicates user not found, though in another test I 
 found that outputting an incorrect password hash has the same effect).
 Nothing interesting shows up in cache.log during any of this.

 Here is the behavior I see:

 - Run squid
 - Open the browser w/ squid instance configured as proxy
 - Browser indicates that it's trying to make a connection to the 
 default home page (google in this case), waiting
 - Squid auth helper receives nothing (I've got it copying output to a 
 debug file for viewing)

 - Timeout in around 75 seconds

 - Logs show user "-" received TCP_DENIED status (I believe this means 
 a 407 went back to the browser, but I wasn't monitoring for this 
 specifically)

 Don't assume. Unless the log shows 407 as the status (i.e. 
 TCP_DENIED/407) there are other things (explicit ACLs, too-big 
 headers and bodies, mangled credentials, or unparsable header values) 
 which can cause DENIED.

 - Still auth helper log shows that it received nothing
 - Browser requests user/pass popup

 - Entering user/pass sends the entry to the auth helper which replies 
 with ERR
 - Browser pops up the authentication dialogue again
 - Entering the same user/pass again causes the logs to spam user 
 "username" with status TCP_DENIED as quickly as possible (notice that 
 the log now shows the username, not "-")


 Example auth helper script used:
 #!/bin/bash
 while read LINE; do
 echo $LINE >> /tmp/output
 echo ERR
 done


 Sounds like http://bugs.squid-cache.org/show_bug.cgi?id=3186

 There is a workaround posted, but it is not a nice one.

 We need to ensure that unchecked is ONLY set if the browser actually 
 sent whole new details. If the TTL has expired a background check needs 
 to be kicked without altering the existing ok/err state of the 
 credentials. There is a grace period where the old value may be used 
 while a background revalidate with the helper is done.

 Amos


 -Original Message-
 From: David Parks

 In doing some dev work I see a situation where squid gets into an 
 infinite loop with the browser. The situation:

 1) Browser attempts digest authentication against squid (running with 
 a custom auth helper)
 2) auth helper fails user authentication
 3) I believe squid caches the authentication failure
 4) Browser requests a page using the above authentication
 5) Squid replies with 407 - authentication required
 6) INFINITE LOOP: (Browser retries request : squid replies with 407)

 The above loop running locally can rack up a meg of data transfer in 
 just seconds.

 I remember dealing with this issue some time back in some other work 
 and just don't recall what I did about it.

 I'm running a custom auth helper, log daemon, and url rewrite helper.




RE: [squid-users] Authentication infinite loop

2011-07-26 Thread David Parks
After some more testing I'm finding more cause for concern here. I'm using
3.2.0.9 in this test.

Digest authentication is configured. I am now just using a simple auth
helper script which sits in a loop and outputs ERR (as per the docs, this
output indicates user not found, though in another test I found that
outputting an incorrect password hash has the same effect).
Nothing interesting shows up in cache.log during any of this.

Here is the behavior I see:

- Run squid
- Open the browser w/ squid instance configured as proxy
- Browser indicates that it's trying to make a connection to the default
home page (google in this case), waiting
- Squid auth helper receives nothing (I've got it copying output to a debug
file for viewing)

- Timeout in around 75 seconds

- Logs show user "-" received TCP_DENIED status (I believe this means a 407
went back to the browser, but I wasn't monitoring for this specifically)
- Still auth helper log shows that it received nothing
- Browser requests user/pass popup

- Entering user/pass sends the entry to the auth helper which replies with
ERR
- Browser pops up the authentication dialogue again
- Entering the same user/pass again causes the logs to spam user "username"
with status TCP_DENIED as quickly as possible (notice that the log now shows
the username, not "-")


Example auth helper script used:
#!/bin/bash
while read LINE; do
echo $LINE >> /tmp/output
echo ERR
done


-Original Message-
From: David Parks [mailto:davidpark...@yahoo.com] 
Sent: Monday, July 25, 2011 7:11 PM
To: squid-users@squid-cache.org
Subject: [squid-users] Authentication infinite loop

In doing some dev work I see a situation where squid gets into an infinite
loop with the browser. The situation:

1) Browser attempts digest authentication against squid (running with a
custom auth helper)
2) auth helper fails user authentication
3) I believe squid caches the authentication failure
4) Browser requests a page using the above authentication
5) Squid replies with 407 - authentication required
6) INFINITE LOOP: (Browser retries request : squid replies with 407)

The above loop running locally can rack up a meg of data transfer in just
seconds.

I remember dealing with this issue some time back in some other work and
just don't recall what I did about it.

I'm running a custom auth helper, log daemon, and url rewrite helper.




[squid-users] Authentication infinite loop

2011-07-25 Thread David Parks
In doing some dev work I see a situation where squid gets into an infinite
loop with the browser. The situation:

1) Browser attempts digest authentication against squid (running with a
custom auth helper)
2) auth helper fails user authentication
3) I believe squid caches the authentication failure
4) Browser requests a page using the above authentication
5) Squid replies with 407 - authentication required
6) INFINITE LOOP: (Browser retries request : squid replies with 407)

The above loop running locally can rack up a meg of data transfer in just
seconds.

I remember dealing with this issue some time back in some other work and
just don't recall what I did about it.

I'm running a custom auth helper, log daemon, and url rewrite helper.



[squid-users] Logging packet bytes vs. http size bytes?

2011-07-12 Thread David Parks
Is there any way to log the actual packet sizes rather than just the size of
the http request+headers that are found in the access log configuration?



[squid-users] Segmentation fault - 3.2.0.8

2011-06-13 Thread David Parks
I'm getting a segmentation fault error that I can't figure out.
If I remove the [auth_param digest realm Squid proxy-caching web server]
line it parses out just fine.

Squid 3.2.0.8  (and 3.2.0.7)

#
#
# Command:
#
#
./squid -X -d 9 -k parse

#
#
# Log (note segmentation fault at the end)
#
#
 ~~~ cut beginning of log ~~~
2011/06/13 04:08:15.483| aclParseAclList: looking for ACL name 'all'
2011/06/13 04:08:15.483| ACL::FindByName 'all'
2011/06/13 04:08:15.483| Processing Configuration File:
/usr/local/squid/etc/squid.conf (depth 0)
2011/06/13 04:08:15.484| Processing: 'auth_param digest program
/usr/local/proxycommandcenter/bin/helper
/usr/local/proxycommandcenter/pipes/proxy_auth_client-to-cmd
/usr/local/proxycommandcenter/pipes/proxy_auth_cmd-to-client'
2011/06/13 04:08:15.484| Processing: 'auth_param digest children 1 startup=1
idle=1 concurrency=2'
2011/06/13 04:08:15.484| Processing: 'auth_param digest realm Squid
proxy-caching web server'
Segmentation fault


#
#
# etc/squid.conf (it only appears to get to auth_param digest realm, I've
tried various realm strings to no avail)
#
#
auth_param digest program /usr/local/proxycommandcenter/bin/helper
/usr/local/proxycommandcenter/pipes/proxy_auth_client-to-cmd
/usr/local/proxycommandcenter/pipes/proxy_auth_cmd-to-client
auth_param digest children 1 startup=1 idle=1 concurrency=2
auth_param digest realm Squid proxy-caching web server
auth_param digest nonce_garbage_interval 5 minutes
auth_param digest nonce_max_duration 30 minutes
auth_param digest nonce_max_count 50

url_rewrite_program /usr/local/proxycommandcenter/bin/helper
/usr/local/proxycommandcenter/pipes/url_redirect_client-to-cmd
/usr/local/proxycommandcenter/pipes/url_redirect_cmd-to-client
url_rewrite_children 1 startup=1 idle=1 concurrency=2

acl homesite url_regex http[s]{0,1}://[^/]*proxyandvpn\.com.*
acl authenticated proxy_auth REQUIRED

#acl from_localhost src 127.0.0.1/32 ::1
acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1
acl localnet dst 10.0.0.0/8 # RFC1918 possible internal network
acl localnet dst 172.16.0.0/12  # RFC1918 possible internal network
acl localnet dst 192.168.0.0/16 # RFC1918 possible internal network
acl localnet dst fc00::/7   # RFC 4193 local private network range
acl localnet dst fe80::/10  # RFC 4291 link-local (directly plugged)
machines

http_access deny to_localhost
http_access deny localnet
http_access allow homesite
http_access allow authenticated
http_access deny all

adapted_http_access allow authenticated
adapted_http_access deny all

http_port 80

# No local caching
maximum_object_size 0 KB
minimum_object_size 0 KB
# email use for ftp anonymous access
ftp_user anonymous@
# check ftp data connection
ftp_sanitycheck on
icp_access deny all
ident_lookup_access deny all


# We recommend you to use at least the following line.
hierarchy_stoplist cgi-bin ?

logformat custom_verbose User[%un] TotalBytes[%st] ClientIP[%a]
LocalPort[%lp] SquidStatus[%Ss] URL[%ru] Time[%{%Y-%m-%d %H}tg:00:00]
HttpStatus[%Hs]
access_log
stdio:/usr/local/proxycommandcenter/pipes/log_daemon_client-to-cmd
custom_verbose
cache_store_log /usr/local/squid/var/logs/store.log
pid_filename /usr/local/squid/var/logs/squid.pid
cache_log /usr/local/squid/var/logs/cache.log
coredump_dir /usr/local/squid/var/logs/core-dumps

cache_effective_user squid
forwarded_for delete
client_db off


# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320



RE: [squid-users] 2 NCSA password files

2010-11-22 Thread David Parks
I have a ballpark-similar requirement. What I did is write a custom 
authentication application and an ACL helper which work in tandem to 
authenticate and monitor user activity. Here's what it looks like (at least the 
parts that are relevant to your question):

I run 1 java based application which handles both user authentication (we 
integrate with a custom web-based application), and handles ACL requests. 

The login doesn't need to be fast, but the ACL checks are called for each URL, 
so they need to respond immediately. Once a user authenticates we store the 
username in a hash table which is later accessed by the ACL helper quickly (and 
updated as needed by external events). In your case you could put the fast 
users into a bucket and have an ACL helper that checks that the user was added 
to the table (otherwise they can default to the slow group).

I found it easiest to write one application to do all of the above. The ACL 
helper and Authentication helper are implemented simply with a linux shell 
script that redirects standard in/out to and from a unix pipe, then in the java 
app I can simply read/write to the relevant pipe (just a file to java).
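
As a sketch, such a bridging script can be as small as this (the relay logic
here is illustrative; squid passes the two pipe paths in as helper arguments):

#!/bin/bash
# Bridge squid's helper stdin/stdout to the two named pipes the
# long-running app uses. Sketch only.
TO_APP="$1"          # pipe the app reads requests from
FROM_APP="$2"        # pipe the app writes replies to
cat "$FROM_APP" &    # relay the app's replies back to squid on stdout
exec cat > "$TO_APP" # relay squid's requests into the app's pipe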

So the workflow looks like:
 1) User accesses a site through squid
 2) squid sends Digest authentication request
 3) User credentials are sent to the authentication helper
 4) Authentication helper script redirects to a named pipe
 5) Custom app reads the login info from the pipe, writes success/fail response 
to another named pipe, and caches the username as appropriate to identify the 
user as in the fast group or not
 6) Authentication helper script relays the reply back to squid; the user now 
is authenticated
 7) A custom ACL exists to match users to the fast group (or default them to 
the slow group)
 8) Squid calls the custom ACL helper for each URL request
 9) The custom ACL helper script redirects to another named pipe to be picked 
up by our Custom app
10) Custom app checks if the user is in the fast group and responds to the 
named pipe with a success/fail for the ACL match
11) The ACL helper script redirects the named pipe to STDOUT of the ACL helper 
12) Squid matches the user to the fast group or defaults them to the slow group.

The custom app in this case could be as simple as calling the existing 
authentication helpers (twice if necessary to find the user in the fast or slow 
group). But it will need to handle caching the username as fast or slow so that 
the custom ACL requests can happen immediately.

If you go this route let me know and I'll post the shell scripts I use (and a 
few other pointers), unless you're a shell script guru (I wasn't) it'll take a 
good few hours to figure out.

David


-Original Message-
From: J Webster [mailto:webster_j...@hotmail.com] 
Sent: Sunday, November 21, 2010 4:43 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] 2 NCSA password files

So, if my users change on a daily basis (sometimes hourly), can I update the 
acl file on the fly?
So, I'd have 1 ncsa file with the username and passwords for all users.
Then 2 acl files with high speed users and low speed users?


--
From: David Parks davidpark...@yahoo.com
Sent: Sunday, November 21, 2010 10:02 AM
To: 'J Webster' webster_j...@hotmail.com; squid-users@squid-cache.org
Subject: RE: [squid-users] 2 NCSA password files

 If you write a custom ACL helper you can match users against any criteria 
 you define, then implement the delay pools for users that matched your 
 custom ACL helper.
 



RE: [squid-users] 2 NCSA password files

2010-11-21 Thread David Parks
If you write a custom ACL helper you can match users against any criteria you 
define, then implement the delay pools for users that matched your custom ACL 
helper.
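
In squid.conf terms the combination looks roughly like this (helper path and
rate are illustrative):

external_acl_type speedgroup %LOGIN /usr/local/bin/fast_user_check
acl fastusers external speedgroup
delay_pools 1
delay_class 1 1
# throttle everyone the helper didn't put in the fast group
delay_access 1 allow !fastusers
delay_parameters 1 16000/16000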

-Original Message-
From: J Webster [mailto:webster_j...@hotmail.com] 
Sent: Saturday, November 20, 2010 9:01 PM
To: squid-users@squid-cache.org
Subject: [squid-users] 2 NCSA password files

Is it possible to have 2 NCSA password auth files and then have different 
download speeds per each NCSA file/user group? 



RE: [squid-users] optimize squid for video streaming

2010-11-13 Thread David Parks
Are you hosting squid on your home internet connection? Your upload bandwidth 
is probably limited (25 KB/s upstream is common) and too low to forward the 
video traffic on to you.


-Original Message-
From: Héctor Andrés Urbina Saavedra [mailto:hau...@mail.usask.ca] 
Sent: Sunday, November 14, 2010 11:31 AM
To: squid-users@squid-cache.org
Subject: [squid-users] optimize squid for video streaming

Hello,

I am out of my country and I am trying to use squid to watch TV from some tv 
channels that don't stream to other countries.  My squid server shows good 
enough bandwidth to the channels' streaming and to myself in Canada, however 
when I try to see online tv the image and sound get stopped every few seconds.  
I suppose the main reason could be some buffer size configuration.

I have cache_mem = 8MB.  Should I increase it? or do something else?

I would appreciate any insight on this.

Cheers,
Héctor.



RE: [squid-users] First post

2010-11-06 Thread David Parks
Hi Luke, Squid is a proxy server: it relays traffic much like a broker
handles a transaction for a client, so the client doesn't deal directly with
the seller.

It can cache data like images so that when, for example, UserA goes to a
website and UserB later goes to that same website, the images and such don't
need to be downloaded again; they are served from the local squid. But for a
single user your browser does this caching already.
There are other uses, but far more technical.

Try this google search, I think it will get you going in the direction you
want to follow:
http://www.google.com/search?q=download+webpages+for+offline+viewing&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a

David


-Original Message-
From: Luke [mailto:luke...@gmail.com] 
Sent: Saturday, November 06, 2010 12:34 PM
To: squid-users@squid-cache.org
Subject: [squid-users] First post

So I have never set up a Squid web cache before but I think it is what I
need.  Let me explain:
My father and mother live out in the middle of nowhere and currently get
their internet access through satellite service.  It reminds me of the old
days of dial up.  My dad uses the internet to get most of his current event
and sporting news.  He is very patient but when I go there to visit it
drives me nuts.  I was wondering if Squid could do the
following:
Download during the night his favorite news sites and their linked articles
so when he gets up in the morning to read the morning news it is lightning
fast.  Can this be done?

Thanks

Luke Brown



RE: [squid-users] SSL between squid and client possible?

2010-09-25 Thread David Parks
I've added myself to that bug and given it my interest.
Do you know of any browsers with which you can connect to squid over a secure 
https_port?

I tried setting it up to learn about the limitations, but can't connect to the 
https_port using firefox, ie or safari.

Thanks,
Dave


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, September 22, 2010 10:34 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] SSL between squid and client possible?

On Tue, 21 Sep 2010 16:39:53 -0700, David Parks davidpark...@yahoo.com
wrote:
 Can SSL be enabled between client and squid?
 Example: An HTTP request to http://yahoo.com goes over SSL from client
to
 squid proxy, then standard HTTP from squid to yahoo and again secured
from
 squid to client on the way back?
 It seems like this is only possible with reverse proxy setups, not
typical
 proxy forward traffic.
 Just wanted to verify my understanding here.
 Thanks,
 David

Squid will do this happily. https_port is the same as http_port but requires 
SSL/TLS on the link.
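
A minimal listener looks something like this (port and certificate paths
illustrative):

https_port 3129 cert=/usr/local/squid/etc/proxy.crt key=/usr/local/squid/etc/proxy.key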

The problem is that most web browsers won't do the SSL/TLS when talking to an 
HTTP proxy. Please assist with bugging the browser devs about this.
https://bugzilla.mozilla.org/show_bug.cgi?id=378637.  There are implications 
that they might do HTTP-over-SSL to SSL proxies, but certainly will send 
non-HTTP there and break those protocols instead.

Amos



RE: [squid-users] Intermittent TCP_DENIED

2010-09-21 Thread David Parks
Hmm, now that I look again, the docs appear to have magically changed on me; I 
must have accidentally looked at a 2.7 doc. 

Ok, great, the bug report's there and I've been able to work around any 
non-ported features in 3.2 now (urlgroups as one, but that wasn't a stopper for me).

I'm going to start running 3.2 beta through some testing using digest 
authentication (w/ custom helper), logdaemon, and a url_rewriter. Let me know 
if there are any particular areas that I should focus on that might help flush 
things out?

And thanks for all the help and great work!

David



-Original Message-
From: Henrik Nordström [mailto:hen...@henriknordstrom.net] 
Sent: Tuesday, September 21, 2010 1:50 AM
To: David Parks
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Intermittent TCP_DENIED

Mon 2010-09-20 at 18:27 -0700, David Parks wrote:

 I was not able to reproduce the intermittent 407 problem in this version as 
 predicted by Amos.
 
 However I did run into some other issues:
 1) A bug with digest authentication -
Open a browser and authenticate. Now restart squid (don't close the 
 browser)
Try browsing to another page. This crashes squid with the following error 
 in squid.out:
FATAL: Received Segment Violation...dying.

PLease file a bug report on this if you have not already.

   http://bugs.squid-cache.org/

 2) Question: Is url_rewrite_concurrency gone? I get a config file warning 
 that it's not recognized.
But it's in the squid.conf.documented docs as valid.

are you sure the squid.conf.documented you are reading is from the right 
version? My 3.2 copy does not mention url_rewrite_concurrency

The concurrency level is set in url_rewrite_children these days.

Regards
Henrik



[squid-users] SSL between squid and client possible?

2010-09-21 Thread David Parks
Can SSL be enabled between client and squid?
Example: An HTTP request to http://yahoo.com goes over SSL from client to
squid proxy, then standard HTTP from squid to yahoo and again secured from
squid to client on the way back?
It seems like this is only possible with reverse proxy setups, not typical
proxy forward traffic.
Just wanted to verify my understanding here.
Thanks,
David




RE: [squid-users] Intermittent TCP_DENIED

2010-09-20 Thread David Parks
 request 0x85bcba0
2010/09/20 17:23:02| authenticateAuthUserRequestFree: freeing request 0x8564e50
2010/09/20 17:23:02| authenticateAuthUserRequestFree: freeing request 0x85ee538
2010/09/20 17:23:02| authenticateAuthUserRequestFree: freeing request 0x85ea0a8
2010/09/20 17:23:02| authenticateValidateUser: Auth_user_request was NULL!
2010/09/20 17:23:02| authenticateAuthenticate: broken auth or no proxy_auth 
header. Requesting auth header.
2010/09/20 17:23:02| authenticateDigestNonceNew: created nonce 0x85ea3a8 at 
1285017782
2010/09/20 17:23:02| authenticateValidateUser: Auth_user_request was NULL!
2010/09/20 17:23:02| authenticateAuthenticate: broken auth or no proxy_auth 
header. Requesting auth header.
2010/09/20 17:23:02| authenticateDigestNonceNew: created nonce 0x85f2f10 at 
1285017782
2010/09/20 17:23:02| authenticateAuthenticate: no connection authentication type
2010/09/20 17:23:02| authenticateValidateUser: Validated Auth_user request 
'0x85ea0a8'.
2010/09/20 17:23:02| authenticateValidateUser: Validated Auth_user request 
'0x85ea0a8'.
2010/09/20 17:23:02| authDigestNonceIsValid: Nonce count doesn't match
2010/09/20 17:23:02| authenticateDigestAuthenticateuser: user 'test' validated 
OK but nonce stale
2010/09/20 17:23:02| authenticateValidateUser: Validated Auth_user request 
'0x85ea0a8'.
2010/09/20 17:23:02| authenticateValidateUser: Validated Auth_user request 
'0x85ea0a8'.
2010/09/20 17:23:02| authenticateDigestNonceNew: created nonce 0x85ec798 at 
1285017782
2010/09/20 17:23:02| authenticateAuthUserRequestFree: freeing request 0x85ea0a8
2010/09/20 17:23:03| authenticateAuthUserRequestFree: freeing request 0x858a958
2010/09/20 17:23:03| authenticateAuthUserRequestFree: freeing request 0x857ffd0
2010/09/20 17:23:03| authenticateAuthUserRequestFree: freeing request 0x857fda0
2010/09/20 17:23:03| authenticateAuthUserRequestFree: freeing request 0x8585ae8
2010/09/20 17:23:03| authenticateAuthUserRequestFree: freeing request 0x858b940
2010/09/20 17:23:03| authenticateValidateUser: Auth_user_request was NULL!
2010/09/20 17:23:03| authenticateAuthenticate: broken auth or no proxy_auth 
header. Requesting auth header.
2010/09/20 17:23:03| authenticateDigestNonceNew: created nonce 0x856b938 at 
1285017783
2010/09/20 17:23:03| authenticateValidateUser: Auth_user_request was NULL!
2010/09/20 17:23:03| authenticateAuthenticate: broken auth or no proxy_auth 
header. Requesting auth header.
2010/09/20 17:23:03| authenticateDigestNonceNew: created nonce 0x8585da8 at 
1285017783
2010/09/20 17:23:05| authenticateValidateUser: Auth_user_request was NULL!
2010/09/20 17:23:05| authenticateAuthenticate: broken auth or no proxy_auth 
header. Requesting auth header.
2010/09/20 17:23:05| authenticateDigestNonceNew: created nonce 0x8581090 at 
1285017785
2010/09/20 17:23:05| authenticateValidateUser: Auth_user_request was NULL!
2010/09/20 17:23:05| authenticateAuthenticate: broken auth or no proxy_auth 
header. Requesting auth header.
2010/09/20 17:23:05| authenticateDigestNonceNew: created nonce 0x857fe58 at 
1285017785




-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Sunday, September 19, 2010 4:33 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Intermittent TCP_DENIED

On Sun, 19 Sep 2010 12:37:38 -0700, David Parks davidpark...@yahoo.com
wrote:
 I've simplified things as far as I can think to and still get what 
 appear to be random TCP_DENIED/407 errors after I've been authenticated.
 
 Using Squid 2.7 STABLE 9, I'm now just using the digest_pw_auth 
 authenticator with a single user pw file of test:test.
 
 If I turn off authentication there's no problem. But with 
 authentication on I can't get much further than a page or two of sites 
 like Yahoo.com or LATimes.com (sites with many resources) before I get 
 a 407.
 
 I've run some wireshark captures and could post the http header 
 request/responses if that helps any. I don't know the digest 
 authentication protocol well enough to follow all the nonce 
 transitions and all of that to see if it's a problem.
 
 Here is my squid.conf in hopes that someone might have some ideas on 
 direction I could take in debugging this.
 
 Is there any way to get more info from Squid about why it's throwing 
 407's?

debug_options 29,6

Squid has a few strange things going on with ref-counting of the credentials. 
Particularly relevant would be race conditions erasing the past credentials if 
a new validation re-check fails.

NP: 3.2 has had an overhaul in the credentials management to remove such bugs. 
But the digest side has not yet had strong testing. If you are able to help out 
with the testing and fixing any found issues there it may prove more reliable.

Amos



RE: [squid-users] Intermittent TCP_DENIED

2010-09-20 Thread David Parks
So I fired up 3.2.0.2 today.

I was not able to reproduce the intermittent 407 problem in this version as 
predicted by Amos.

However I did run into some other issues:
1) A bug with digest authentication -
   Open a browser and authenticate. Now restart squid (don't close the browser)
   Try browsing to another page. This crashes squid with the following error in 
squid.out:
   FATAL: Received Segment Violation...dying.
   It probably doesn't like receiving auth headers without following the 
typical challenge/response process.

2) Question: Is url_rewrite_concurrency gone? I get a config file warning that 
it's not recognized.
   But it's in the squid.conf.documented docs as valid.

I tried testing in 3.2.0.2-20100920 but make install fails with:
   forward.cc: In member function 'void FwdState::doneWithRetries()':
   forward.cc:562: error: 'class BodyPipe' has no member named 
'expectNoConsumption'

Would you like me to post #1 in bugzilla?




[squid-users] Intermittent TCP_DENIED

2010-09-19 Thread David Parks
I've simplified things as far as I can think to and still get what appear to
be random TCP_DENIED/407 errors after I've been authenticated.

Using Squid 2.7 STABLE 9, I'm now just using the digest_pw_auth
authenticator with a single user pw file of test:test. 

If I turn off authentication there's no problem. But with authentication on
I can't get much further than a page or two of sites like Yahoo.com or
LATimes.com (sites with many resources) before I get a 407.

I've run some wireshark captures and could post the http header
request/responses if that helps any. I don't know the digest authentication
protocol well enough to follow all the nonce transitions and all of that to
see if it's a problem.

Here is my squid.conf in hopes that someone might have some ideas on
direction I could take in debugging this.

Is there any way to get more info from Squid about why it's throwing 407's?

_
auth_param digest realm US Proxy
auth_param digest program /usr/local/squid/libexec/digest_pw_auth
/tmp/pwfile
auth_param digest children 5
auth_param digest nonce_garbage_interval 5 minutes
auth_param digest nonce_max_duration 30 minutes
auth_param digest nonce_max_count 50
acl all src all
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl authenticated proxy_auth REQUIRED

http_access allow authenticated
http_access deny all
icp_access allow localnet
icp_access deny all

http_port 80
hierarchy_stoplist cgi-bin ?

cache_dir ufs /mnt/sda2/cache-squid 100 16 256
logformat custom_verbose User[%un] TotalBytes[%st] ClientIP[%a]
LocalPort[%lp] SquidStatus[%Ss] URL[%ru] Time[%{%Y-%m-%d %H}tg:00:00]
HttpStatus[%Hs]
access_log /mnt/sda2/logs-squid/accesslog/access.log custom_verbose
cache_store_log /mnt/sda2/logs-squid/store.log
pid_filename /mnt/sda2/logs-squid/squid.pid
cache_log /mnt/sda2/logs-squid/cache.log
coredump_dir /mnt/sda2/logs-squid/core-dumps

cache_effective_user squid

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9]
upgrade_http0.9 deny shoutcast
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache




[squid-users] Intermittent TCP_DENIED after authentication

2010-09-17 Thread David Parks
I'm trying to debug a problem in dev:

 - After performing digest authentication (using a custom authentication
helper), pages will load as expected.
 - But when I hit large pages which load many resources (example yahoo or
latimes.com) sometimes they will load, but if I hit them a few times I'll
get TCP_DENIED/407 errors and have to re-authenticate.
 - My authentication helper is not called after the initial authentication
request.

I'm trying to track down why the requests are denied when, by my rationale,
they should continue to succeed. And why it only seems to happen when
requests are put through in rapid succession (though there's only 1 user on
the system, I only notice it on pages with many resources, I've never seen
it happen on google.com for example).

Any thoughts are most appreciated.

David




RE: [squid-users] When is the url_rewrite_program called?

2010-09-16 Thread David Parks
Aha, just came across another discussion that included this info (would be
great to include this in the documented config file under
url_rewrite_program). So for anyone who might search this in the future:

Sequence is approximately

* Request accepted
* http_access Access controls
* URL rewriting, replacing Squid's idea of the URL
* http_access2 Access controls
* Cache lookup
* Forwarding on cache miss
* http_reply_access Reply access controls


From discussion:
http://squid-web-proxy-cache.1019090.n4.nabble.com/url-rewrite-and-cache-whi
ch-URL-should-be-cached-td1023682.html



-Original Message-
From: David Parks [mailto:davidpark...@yahoo.com] 
Sent: Wednesday, September 15, 2010 8:39 PM
To: squid-users@squid-cache.org
Subject: [squid-users] When is the url_rewrite_program called?

When is the url_rewrite_program called?

Is it before ACL matches occur? Or after the http_access tag is matched?

I'm just trying to figure out the flow of events that occur.

Looking for an answer like:
1) http_access is matched, if denied end
2) url_rewrite_program called
3) acls are matched a second time
4) http_access is matched a second time

Thanks,
David




[squid-users] When is the url_rewrite_program called?

2010-09-15 Thread David Parks
When is the url_rewrite_program called?

Is it before ACL matches occur? Or after the http_access tag is matched?

I'm just trying to figure out the flow of events that occur.

Looking for an answer like:
1) http_access is matched, if denied end
2) url_rewrite_program called
3) acls are matched a second time
4) http_access is matched a second time

Thanks,
David




[squid-users] ACL blocks, browser retries constantly

2010-07-02 Thread David Parks
I have a simple ACL helper that fails whenever a user should no longer have
access (I need a way of dynamically blocking access to the proxy on a
per-user basis).

But when the ACL fails the request, the browser goes into a vicious cycle of
continuing to re-try the same request indefinitely and just hammering the
proxy.

Bad for the proxy, and looks bad to the user (it's not clear why the browser
is going crazy). Any thoughts on how I should deal with this problem?

Thanks,
David





RE: [squid-users] Rotating logs restarts authentication/acl helpers?

2010-06-10 Thread David Parks
I understand, thank you. So, I'm mucking with log modules in 3.HEAD now, but 
not understanding the process 100% from the LogModules docs page. 

There are modules (udp, tcp, etc) that I configure for each log file, such as:
   access_log udp://localhost:1000
   cache_store_log udp://localhost:1001

Seems easy enough. But what is this log_file_daemon?
Is that a helper akin to an auth/acl helper that reads info from STDIN? 

If so, the best approach seems to be a log helper, started by squid, which 
could cache the logs to disk if the external logfile processing app is down. 
UDP/TCP makes me nervous in case the log helper process is ever down or started 
in the wrong order by human error (it's just an extra dependency to manage).

If this is the scheme in place here, can you give me a couple of sentences 
describing the creation of a log helper? What is its input/output protocol & 
method? What log files is it applicable to? Does squid start the process and 
manage it?

Thanks!
David


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Wednesday, June 09, 2010 8:41 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Rotating logs restarts authentication/acl helpers?

On Wed, 9 Jun 2010 18:49:22 -0600, David Parks davidpark...@yahoo.com
wrote:
 Using 3.1.4, when I call "squid -k rotate" to rotate the logs, it 
 restarts all the authentication and acl helpers. 
 Why is this? I have an ACL helper running for every request (very 
 quick), and the reload of logs is causing it to be down for ~10 
 seconds. I would like to be able to parse logs every 30 seconds for 
 near-real-time reporting.

This is because the helpers are attached to the cache.log for debugging and 
error reporting.
This has always been the case AFAIK.

Use the log daemon: feature instead for real-time access to log data.
It lets you easily create a daemon script to receive and do anything with the 
log lines.
 http://wiki.squid-cache.org/Features/LogDaemon

Amos



RE: [squid-users] Rotating logs restarts authentication/acl helpers?

2010-06-10 Thread David Parks
Got it working easily enough. Exactly what I was looking for! Thanks again for 
the great help!!

On a side note, it might be nice to copy the comments in the .c file you 
mentioned to the squid.conf.documented file under the logfile_daemon 
directive; the one-line description there now is a bit cryptic for those 
wanting to extend the functionality.

Thanks,
David


-Original Message-
From: Henrik Nordström [mailto:hen...@henriknordstrom.net] 
Sent: Thursday, June 10, 2010 1:57 PM
To: David Parks
Cc: squid-users@squid-cache.org
Subject: RE: [squid-users] Rotating logs restarts authentication/acl helpers?

Thu 2010-06-10 at 07:38 -0600, David Parks wrote:

 Seems easy enough. But what is this log_file_daemon?
 Is that a helper akin to a auth/acl helper that reads info from STDIN? 

Yes, kind of. It's using a special format with some commands for rotation etc.

See helpers/log_daemon/file/ for the default daemon which writes log data to 
files (log_file_daemon). This (log_file_daemon.c) also contains an explanation 
of the log data format.
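
A toy daemon along those lines (the command handling below is a sketch of the
format log_file_daemon.c describes, where each line squid sends starts with a
one-byte command; the output path is made up):

#!/bin/bash
# Toy log daemon: 'L' = write this log line, 'R' = rotate.
OUT=/var/log/squid/realtime.log    # made-up output path
while IFS= read -r line; do
    case "${line:0:1}" in
        L) printf '%s\n' "${line:1}" >> "$OUT" ;;  # record a log line
        R) mv "$OUT" "$OUT.0" 2>/dev/null ;;       # rotate on squid -k rotate
        *) : ;;                                    # ignore other commands here
    esac
done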

Regards
Henrik




[squid-users] Rotating logs restarts authentication/acl helpers?

2010-06-09 Thread David Parks
Using 3.1.4, when I call "squid -k rotate" to rotate the logs, it restarts
all the authentication and acl helpers. 
Why is this? I have an ACL helper running for every request (very quick),
and the reload of logs is causing it to be down for ~10 seconds.
I would like to be able to parse logs every 30 seconds for near-real-time
reporting.


2010/06/09 20:39:19| storeDirWriteCleanLogs: Starting...
2010/06/09 20:39:19|   Finished.  Wrote 275 entries.
2010/06/09 20:39:19|   Took 0.00 seconds (1646706.59 entries/sec).
2010/06/09 20:39:19| logfileRotate: /mnt/sda2/squidlogs/store.log
2010/06/09 20:39:19| logfileRotate: /mnt/sda2/squidlogs/accesslog/access.log
2010/06/09 20:39:19| helperOpenServers: Starting 20/20 'java' processes
<- Authentication helpers
AuthenticationModuleStarted
AuthenticationModuleStarted
AuthenticationModuleStarted
AuthenticationModuleStarted
AuthenticationModuleStarted
2010/06/09 20:39:22| helperOpenServers: Starting 1/1 'java' processes
<- External ACL helper
.
.
.
.




RE: [squid-users] Digest authentication change from previous version?

2010-06-08 Thread David Parks
I could not reproduce the behavior (calling the auth helper on each request) on 
a vanilla install. Also tested 5 different versions and didn't reproduce it.
Must be some odd configuration I have, I'll just wipe it and rebuild.
Thanks!
David


-Original Message-
From: Henrik Nordström [mailto:hen...@henriknordstrom.net] 
Sent: Sunday, June 06, 2010 12:28 PM
To: David Parks
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Digest authentication change from previous version?

Sun 2010-06-06 at 09:35 -0600, David Parks wrote:
 
 But since there's a change from what I originally found, I now want 
 to validate that this is indeed the _expected behavior_. Anyone 
 familiar with such a change?

It's not intentional.

Regards
Henrik



[squid-users] Digest authentication change from previous version?

2010-06-06 Thread David Parks
A while back I tested out squid with a custom Digest authenticator.

I found that squid was caching the authentication requests and not
re-requesting them from the auth-helper.

I don't recall what version I did the test on, but it might have been 2.7.

I am now using 3.0.25 and I see that my auth-helper is receiving a new
authentication request every request (or possibly every connection).

In either case I'm much happier with the current setup because I can disable
user access dynamically using the auth-helper.

But since there's a change from what I originally found, I now want to
validate that this is indeed the _expected behavior_. Anyone familiar with
such a change?

Thanks,
David





[squid-users] Digest authentication helper question

2010-06-05 Thread David Parks
Hi, the digest authentication helper protocol requires that the helper
return the encrypted digest authentication hash given the username and
realm.
 
The problem is: if I have 2 different realms which authenticate against the
same user credentials, and I store the credentials in a one-way encrypted
format (obviously preferable), I have to store them with the realm included
in the encryption, because I have to pass this back to squid via the helper.
In this case I would have to store a password for each realm, and could
never change the realm. Otherwise I'm going to have to store the passwords
unencrypted so I can encrypt them with the realm in the helper.
 
Why not just use the same OK/ERR scheme that basic auth uses? This way the
helper can do the validation its own way without tying our hands when it
comes to situations like this?
 
Thanks,
David




RE: [squid-users] Digest authentication helper question

2010-06-05 Thread David Parks
Ah, I didn't realize the protocol combined the realm and password in the hashed 
value sent from client to server; I thought those were separate.
Makes sense now. Thanks very much for the fantastically detailed explanation.


-Original Message-
From: Henrik Nordström [mailto:hen...@henriknordstrom.net] 
Sent: Saturday, June 05, 2010 3:01 PM
To: David Parks
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] Digest authentication helper question

Sat 2010-06-05 at 09:07 -0600, David Parks wrote:
 Hi, the digest authentication helper protocol requires that the helper 
 return the encrypted digest authentication hash given the username and 
 realm.

Yes..
 
 The problem is, if I have 2 different realms which authenticate 
 against the same user credentials, if I store the credentials in a 
 one-way encrypted format (obviously preferable) I have to store them 
 with the realm included in the encryption, because I have to pass this back 
 to squid via the helper.
 In this case I would have to store a password for each realm, and 
 could never change the realm. Or I'm going to have to store the 
 passwords unencrypted so I can encrypt them with the realm in the helper.

Yes..
 
 Why not just use the same OK/ERR scheme that basic auth uses? This way 
 the helper can do the validation its own way without tying our hands 
 when it comes to situations like this?

You would still have the limitations above as those are from the authentication 
protocol as such, and not really related to Squid. Digest auth exchanges 
cryptographic hashes based on the password, and the plain text password is 
never known to the proxy.

The digest auth data given to Squid by the browser is:

digest-request = MD5(MD5(user : realm : password) : nonce :
nc : cnonce : qop : MD5(method : uri))

which is sent as a sequence of 32 HEX digits representing 16 octets/bytes of 
random data.

The proxy verifies digest authentication by performing the same calculation 
outlined above based on the digest request parameters and verifies that it ends 
up with the same unique random data that the client sent.

The per user unique shared secret component in that calculation is

   MD5(user : realm : password)

the rest is dynamic per-request parameters set by the digest exchange between 
browser & server (squid).

This shared secret is the user's password as far as digest auth is concerned, 
salted with the user & realm to guarantee uniqueness and block reuse of the 
same secret for other services should it ever leak from your auth backend 
somehow. And this is what Squid asks digest helpers to return, to enable Squid 
to perform the fine details of the digest auth scheme calculations.
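
A toy helper makes that contract visible (this sketch hard-codes one user,
uses coreutils md5sum, and glosses over the exact quoting of the input line,
which varies between squid versions):

#!/bin/bash
# Toy digest helper: answer each "username realm" request with
# HA1 = MD5(user:realm:password) as 32 hex digits, or ERR if unknown.
GOOD_USER=test
GOOD_PASS=test     # stand-in credential store
while read -r user realm; do
    if [ "$user" = "$GOOD_USER" ]; then
        printf '%s' "$user:$realm:$GOOD_PASS" | md5sum | awk '{print $1}'
    else
        echo ERR
    fi
done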

The other option (not supported by Squid) would be to use the helper in ways 
similar to what is done for the NTLM scheme and that's to offload the whole 
auth processing to the helper with the proxy just acting as a relay, and maybe 
MD5-sess offload from the helper back into the proxy to avoid needing to call 
the auth helper on each and every request. (note:
MD5-sess is not supported by any browser from what I know, and broken in many 
which try..)

Regards
Henrik



[squid-users] Digest authentication scheme doesn't support concurrency?

2010-06-04 Thread David Parks
From what I can tell, Digest Authentication doesn't support concurrency
(sending multiple requests to a single helper), but Basic Auth and ACL
helpers do.
 
Seems odd so I just want to do a verification that I'm reading it right.
 
Squid 3.0 STABLE 25
 
Thanks,
David




[squid-users] Authentication helpers not shut down

2010-04-06 Thread David Parks
I noticed that running squid -k reconfigure starts a new authentication 
helper, but does not shut down the old one.
Is this normal behavior? Do I just need to monitor for the closing of the input 
stream and shut down on that cue?

Just wanna make sure I'm on track. 

Dave

p.s. if there are any good guides on the various types of helpers and the 
protocol used with them, I haven't come across them yet and would love a link.


RE: [squid-users] Squid 3.1.1 is available

2010-03-29 Thread David Parks
Just to make sure I read this correctly - the feature for logging to a UDP port 
is not available until 3.2 (which doesn't have a release date in the near 
future), correct?

As of now the only option is logging to a file correct?

Thanks,
David





RE: [squid-users] Help with accelerated site

2010-03-27 Thread David Parks
Hi Adam, a few recommendations:

1) There are a number of consultancy and support organizations that provide 
dedicated support for squid. If you can't find the answer here or yourself (via 
code or in docs), they might be an alternative you want to look into.
2) The developers and people supporting squid on this list are all donating 
their time; they don't owe you, me, or anyone on here anything. Lambasting them 
isn't cool, and not appreciated by anyone on this list.
3) We all get frustrated with software, it's the nature of the business (I 
average a couple cycles of frustration a day myself). But lashing out in a 
public forum, against the very people that might be able to help you is like 
trying to catch flies with vinegar.
4) If you aren't getting the responses you need, try refining your questions 
into smaller bites. There are a lot of emails in this forum and it's not always 
easy to digest a long email (again, the community support provided is free, if 
you need people to really dedicate time to your issue you should consider 
paying them for their time, e.g. refer back to suggestion #1).

I wish you the best of luck with your task, unfortunately I don't know the 
answer to your question myself or I would offer my own suggestions.

David


-Original Message-
From: a...@gmail [mailto:adbas...@googlemail.com] 
Sent: Saturday, March 27, 2010 7:07 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Help with accelerated site

Hello All.
I have to say since I started using Squid I get thrown from one problem to 
another. I followed every suggestion and every tutorial and I could not get 
through to my backend server. This is ridiculous now; I honestly start to 
believe that this whole project is a joke, or the software isn't at all mature 
enough to deal with what it is supposed to deal with. It's still in its 
teething stages, and I believe that we are the guinea pigs of this project, 
where they made us believe that it works; I do not believe for one second that 
it actually works.

I have read so many questions regarding this particular issue and nobody 
could come up with a straight answer. Are we the only people with this issue? 
Are we the only people with no luck?

The questions that were asked time and time again were never answered, so 
please don't tell me that this thing works, I'd like to see it; and don't tell 
me this whole site runs on a proxy Squid, I'd like to see that as well.

I was getting this before:

ERROR
The requested URL could not be retrieved

While trying to retrieve the URL: /

The following error was encountered:

* Invalid URL

And I followed a suggestion I read on the mailing list, that maybe I needed 
to add a vhost after the http_port 3128. Now I am getting this instead:

The requested URL could not be retrieved

The following error was encountered while trying to retrieve the URL: 
http://www.mysite.org/

Access Denied. Access control configuration prevents your request from being 
allowed at this time. Please contact your service provider if you feel this 
is incorrect. Your cache administrator is webmaster.

It's not actually working at all; all it does is take you from one problem to 
another, and so forth. It's a non-stop bag of problems and nasty surprises, 
not to mention the things you need to tweak on my system to make Mr Squid 
happier. I am sorry guys but this thing doesn't work, and I'll believe it 
when I see it; even if I see it working, it's still ridiculous to spend as 
much time to get one piece of software to work. I have followed the tutorials 
to the letter and many suggestions, not to mention the amount of time I 
wasted on this thing; never before in my life have I spent as much time on 
any programme. This is the first time, and I am not willing to spend the rest 
of my life trying to figure out something that doesn't work. Sorry guys but I 
am very very disappointed with this; I am just going to completely uninstall 
the whole thing and go back to the way it was before, or perhaps look for an 
alternative, for something that works. Thanks to all of you who tried to 
help. Best of luck to anyone who's still trying to solve Squid's never ending 
issues. Thank you.
Regards
Adam

- Original Message -
From: Ron Wheeler rwhee...@artifact-software.com
To: a...@gmail adbas...@googlemail.com
Cc: Amos Jeffries squ...@treenet.co.nz; squid-users@squid-cache.org
Sent: Thursday, March 25, 2010 1:58 AM
Subject: Re: [squid-users] Help with accelerated site


 a...@gmail wrote:
 Hello there,
 Thanks for the reply Ron and Amos


 Maybe my original e-mail wasn't clear, a bit confusing; I am sorry if I 
 confused you

 I have squid running on Machine A with let's say local ip 192.168.1.4 
 the backend server is running on machine B and ip address 192.168.1.3

 Now, instead of getting the website that is located on Machine B
 192.168.1.3 which is listening on port 81 not 80.
 I am getting the 

RE: [squid-users] Windows Authentication Helper client

2010-03-26 Thread David Parks
Just a thought - it's something I haven't implemented, but it might be worth
you looking into (and hey, if it's useful to you let me know):

I did read along the way that you can use SSH to do a port forward to the
proxy server (there are some write-ups on this indexed in google). This
allows you to secure the connection to the proxy.
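
The basic forward is a one-liner (host name illustrative):

ssh -N -L 3128:localhost:3128 user@proxy.example.com
# then point the browser at localhost:3128 as its proxy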

Although it wasn't specified in those articles, it seems reasonable to
consider the possibility of maintaining user authentication through SSH. You
could even require a client certificate, thus avoiding passwords altogether
while maintaining relative security.

Again, I haven't thought it out completely, just tossing out an idea for you
to look into.

David



-Original Message-
From: Matt Richards [mailto:m...@mattstone.net] 
Sent: Friday, March 26, 2010 4:17 AM
To: squid-users@squid-cache.org
Subject: [squid-users] Windows Authentication Helper client

Hello,

Does anybody know of any technique or application that will allow Windows
machines (XP and 7) to authenticate against a proxy when applications don't
support proxy authentication?

What I am looking for is an alternative to Novell's Client Trust; it's an
application that sits in the system tray and when a user attempts to use the
proxy the proxy will connect back to the IP address of the requesting
machine on a specific port and talk to the client trust application to
establish what user is logged on to the machine.

At the moment we have a number of authentication mechanisms setup, including
Kerberos, NTLM, basic and a web based login form if the machine is not a
member of our domain or logged into a guest account.
This all works well most of the time, but there are a few cases where the
software just fails to work when it tries to connect, and pointing the
machine (IE or the software) at a proxy that doesn't require authentication
works without issue.

It also works if the machine is logged in as our guest user and the user
authenticates to the web form, as this doesn't require the software to
authenticate; the proxy knows to map that IP address to the authenticated
user.

I have looked through the internet and thought about this for a while now
and I still haven't really been able to come up with anything that doesn't
involve writing our own application for the workstation and an
authentication helper for squid. My programming skills are basic.

There was one thought I had which was to write scripts to add an entry in a
database (memcache) after a request for a page from a successful login and
then check this database in one of the steps in attempting to identify the
user. I would probably use storeurl_rewrite_program to update the database.
The only issues with this are working out what I would set the timeout to
(users bounce around machines here quite a lot), whether this would slow down
the proxy too much (~120 requests per second for each proxy), and the case
where the application is an exam application (downloads content, no network
usage for 40 mins while they answer questions, then uploads the results) so
it times out before the upload; also, for this to work they will have to
request content and successfully authenticate before they will have a cache
entry.

Sorry for the long email, if anybody has any ideas I would really like to
hear about them.

Cheers,

Matt.





RE: [squid-users] Disable user accounts

2010-03-23 Thread David Parks
I created my own authentication module, and tried setting nonce_max_duration
to "1 minutes" (I also tried "1 minute" and "2 minutes" to make sure there
wasn't something funky with the word "minutes"). My authentication module logs
every time it is called. 

But when I sit there and hit refresh on the browser every ~15 seconds, I
don't get any re-authentication calls being made to the auth module (only
the initial authentication). I've kept this test up for over 5 min with no
re-authentication attempts to the auth module.

Did I misunderstand something possibly? Or is nonce_max_duration not
actually causing re-authentication to the auth_module (perhaps it just
sticks within the cached authentication in squid?)

So far the only two ways to lock out users that I understand are
nonce_max_duration (if I can make it work as I currently understand it
should), and banned-user-list ACLs with "squid -k reconfigure" calls. If
anyone thinks I'm missing anything else let me know.
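
For reference, the banned-user-list variant is just a proxy_auth ACL fed from
a file (path illustrative):

acl banned_users proxy_auth "/usr/local/squid/etc/banned_users"
http_access deny banned_users
# edit the file, then run: squid -k reconfigure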

Thanks,
Dave



Quote from a previous email:

   nonce_max_duration determines how long the nonces may be used for. 
 It's closer to what you are wanting, but I'm not sure if there are any
nasty side effects of setting it too low.







RE: [squid-users] Disable user accounts

2010-03-22 Thread David Parks
So, if I understand correctly, squid has no way for me to force a user
account to be expired or cleared prematurely. Setting nonce_max_duration low
wouldn't block a user with a constant stream of traffic, such as someone
watching a video.

If the above statements are correct, do you have any thoughts on how
challenging a change like this would be at the code level? For example, a
command similar to squid -k reconfigure (say, a hypothetical squid -r
user_to_expire) that would simply expire the given user's credentials,
forcing squid to re-authenticate them on demand?

If user credentials are simply a table in memory, this seems conceptually
simple to accomplish. I'm a Java developer, though, and haven't touched C/C++
in many years, so I'm not sure it's worth considering unless you think it's
as simple as it looks.

Thanks!
Dave

p.s. my purpose in following this line of questioning is to monitor log files
for per-user traffic; once a user exceeds their data-transfer quota I need to
block further access, without slowing access for users still within their
quota.
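
A minimal sketch (Python) of that monitoring idea, assuming squid's default
native access.log format (bytes in field 5, username in field 8, as in the
log excerpt near the end of this page); the paths and the quota figure are
invented, and log rotation handling is omitted:

    import time

    ACCESS_LOG = "/var/log/squid/access.log"
    BANNED_FILE = "/etc/squid/banned_users"
    QUOTA_BYTES = 1024 ** 3  # 1 GiB per user

    usage = {}
    banned = set()

    with open(ACCESS_LOG) as log:
        log.seek(0, 2)  # start at end of file, tail -f style
        while True:
            line = log.readline()
            if not line:
                time.sleep(1)
                continue
            f = line.split()
            if len(f) < 8 or f[7] == "-" or not f[4].isdigit():
                continue  # skip unauthenticated or malformed lines
            user = f[7]
            usage[user] = usage.get(user, 0) + int(f[4])
            if usage[user] > QUOTA_BYTES and user not in banned:
                banned.add(user)
                with open(BANNED_FILE, "a") as out:
                    out.write(user + "\n")
                # then trigger "squid -k reconfigure" to re-read the ACL file

On the squid side this would pair with something like
acl banned proxy_auth "/etc/squid/banned_users" and "http_access deny banned"
placed before the allow rules.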




-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday, March 22, 2010 12:35 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Disable user accounts

David Parks wrote:
 I will be monitoring squid usage logs and need to disable user 
 accounts from an external app (block them from making use of the proxy 
 after they are authenticated).
 
 I'm not quite following the FAQ on this
 (http://wiki.squid-cache.org/Features/Authentication?action=show&redirect=SquidFaq/ProxyAuthentication#How_do_I_ask_for_authentication_of_an_already_authenticated_user.3F)
 because I don't have any criteria on
 which the ACL might force a re-negotiation (or I just don't understand 
 the proposed solution).

Re-challenge is automatic whenever a new request needs to be authed and the
currently known credentials are unknown or too old to be used.

 
 I'm also not clear if (nonce_garbage_interval) and
 (nonce_max_duration) are actually forcing a password check against 
 the authentication module, or if they are just dealing with the 
 nuances of the digest authentication protocol. I have them set to

Garbage collection only removes things known to be dead already. The garbage
interval determines how often the memory caches are cleaned out, above and
beyond the regular as-used cleanings.

  nonce_max_duration determines how long the nonces may be used for. 
It's closer to what you are wanting, but I'm not sure if there are any nasty
side effects of setting it too low.

 their defaults, but after making a change to the password file that 
 digest_pw_auth helper uses, I do not get challenged for the updated 
 password. Could it just be that digest_pw_auth didn't re-read the 
 password file after I made the change?

Yes.

 
 Thanks! David
 
 
 p.s. thanks for all of the responses to this point, I haven't replied 
 as such with a thanks, but the help on this user group is fantastic 
 and is really appreciated, particularly Amos, you're a god-send!

Welcome.

Amos
--
Please be using
   Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
   Current Beta Squid 3.1.0.18




[squid-users] Limiting connections per user - not per IP

2010-03-21 Thread David Parks
I expect a lot of users from the same IP (NAT). Is there a way to limit
concurrent connections per authenticated user rather than just per IP (acl
maxconn appears to do it only by IP)?
Thx,
David
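
(For reference, the IP-based form the question refers to looks roughly like
this in squid.conf, with an arbitrary limit; whether a per-user equivalent
exists was the open question:)

    acl heavyclients maxconn 20
    http_access deny heavyclients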



[squid-users] Disable user accounts

2010-03-21 Thread David Parks
I will be monitoring squid usage logs and need to disable user accounts from an 
external app (block them from making use of the proxy after they are 
authenticated). 

I'm not quite following the FAQ on this 
(http://wiki.squid-cache.org/Features/Authentication?action=show&redirect=SquidFaq/ProxyAuthentication#How_do_I_ask_for_authentication_of_an_already_authenticated_user.3F)
 because I don't have any criteria on which the ACL might force a 
re-negotiation (or I just don't understand the proposed solution).

I'm also not clear if (nonce_garbage_interval) and (nonce_max_duration) are 
actually forcing a password check against the authentication module, or if they 
are just dealing with the nuances of the digest authentication protocol. I have 
them set to their defaults, but after making a change to the password file that 
digest_pw_auth helper uses, I do not get challenged for the updated password. 
Could it just be that digest_pw_auth didn't re-read the password file after I 
made the change?

Thanks!
David


p.s. thanks for all of the responses to this point, I haven't replied as such 
with a thanks, but the help on this user group is fantastic and is really 
appreciated, particularly Amos, you're a god-send!


RE: [squid-users] Requests through proxy take 4x+ longer than direct to the internet

2010-03-19 Thread David Parks
Ah brilliant, thank you for passing this link along, it's very helpful!

Question then: does the proxy server have functionality similar to the
browser's, limiting concurrent requests to a given domain (as described in
the article)?

What I want to know really is: can I have my users bump up the number of
connections to the proxy server, or, by doing so, do I risk the proxy
flooding a site and getting the proxy's IP blocked?

What solutions have been employed in other scenarios, or are proxy servers
just inherently slower than direct connections due to this concurrent
connection issue?

Thanks,
David



-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Friday, March 19, 2010 1:06 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Requests through proxy take 4x+ longer than
direct to the internet

David Parks wrote:
 Hi, I set up a dev instance of squid on my windows system.
 
 I've configured 2 browsers (Chrome & Firefox), Chrome direct to the
 internet, Firefox through the locally running instance of squid.
 
 I expected similar response times from the two browsers, but I
 consistently see Firefox (configured to proxy through squid) take 4x+
longer.
 
 Below are the logs showing response times from a hit on yahoo.com; the
 Chrome browser opened the page in ~2 seconds.
 
 I have used the windows binaries of squid and configured digest 
 password authentication, everything else (other than default port) is 
 left as default in the config file.
 
 After doing a packet capture I noted the following behavior:
 
- When going through the proxy: 9 GET requests are made, and 9 HTTP 
 responses are received in a reasonable time period (2sec)
- After the 9th HTTP response is sent, there is a 4 second delay 
 until the next GET request is made
- Then 6 GET requests are made, and 6 HTTP responses are received 
 in a reasonable amount of time.
- After the 6th GET request in this second group there is a 5 
 second delay until the next GET request is made.
- This pattern repeats itself when the proxy is in use.
- This pattern does not occur when I am not connected through the
proxy.
 
 Any thoughts on this behavior?
 

This blog article explains the issues involved:

http://www.stevesouders.com/blog/2008/03/20/roundup-on-parallel-connections/

Amos
--
Please be using
   Current Stable Squid 2.7.STABLE8 or 3.0.STABLE25
   Current Beta Squid 3.1.0.18




[squid-users] Requests through proxy take 4x+ longer than direct to the internet

2010-03-18 Thread David Parks
Hi, I set up a dev instance of squid on my Windows system.

I've configured 2 browsers (Chrome & Firefox), Chrome direct to the
internet, Firefox through the locally running instance of squid.

I expected similar response times from the two browsers, but I consistently
see Firefox (configured to proxy through squid) take 4x+ longer.

Below are the logs showing response times from a hit on yahoo.com; the
Chrome browser opened the page in ~2 seconds.

I have used the Windows binaries of squid and configured digest password
authentication; everything else (other than the default port) is left at its
default in the config file.

After doing a packet capture I noted the following behavior:

   - When going through the proxy: 9 GET requests are made, and 9 HTTP
responses are received in a reasonable time period (2sec)
   - After the 9th HTTP response is sent, there is a 4 second delay until
the next GET request is made
   - Then 6 GET requests are made, and 6 HTTP responses are received in a
reasonable amount of time.
   - After the 6th GET request in this second group there is a 5 second
delay until the next GET request is made.
   - This pattern repeats itself when the proxy is in use.
   - This pattern does not occur when I am not connected through the proxy.

Any thoughts on this behavior?

Thanks much,
David


Yahoo example log:

1268958646.966    417 127.0.0.1 TCP_MISS/301 602 GET http://yahoo.com/ test
DIRECT/67.195.160.76 text/html
1268958652.263   5289 127.0.0.1 TCP_MISS/302 748 GET http://www.yahoo.com/
test DIRECT/209.191.122.70 text/html
1268958658.997   6726 127.0.0.1 TCP_MISS/200 38900 GET http://mx.yahoo.com/?
test DIRECT/209.191.122.70 text/html
1268958664.895   5132 127.0.0.1 TCP_MISS/200 1616 GET
http://d.yimg.com/a/i/ww/met/pa_icons/gmail_22_052809.gif test
DIRECT/189.254.81.8 image/gif
1268958664.908   5142 127.0.0.1 TCP_MISS/200 1118 GET
http://d.yimg.com/a/i/ww/met/pa_icons/glamout_22_012010.gif test
DIRECT/189.254.81.8 image/gif
1268958666.087   6140 127.0.0.1 TCP_MISS/200 32906 GET
http://l.yimg.com/br.yimg.com/i/img2/200911/111609_fp_movil.swf test
DIRECT/189.254.81.35 application/x-shockwave-flash