RE: [squid-users] NTLM and transparent/interception confusion

2009-01-02 Thread Johnson, S
That's too bad...  I've set up numerous Bluecoat proxies and they do
have this capability.  But of course, you're paying about $50k usd /
box.

-Original Message-
From: Guido Serassio [mailto:guido.seras...@acmeconsulting.it] 
Sent: Thursday, January 01, 2009 4:00 AM
To: Johnson, S; squid-users@squid-cache.org
Subject: Re: [squid-users] NTLM and transparent/interception confusion

Hi,

At 20.06 31/12/2008, Johnson, S wrote:
I've been doing a lot of reading on this...  I've got the proxy working
in either of these two modes:
1) As a browser configuration proxy
2) with http_port 3128 transparent, in redirected mode

I've got NTLM authentication working just fine with #1 above.  However,
with #2 I never get a password prompt.  I don't really care about
transparency; I just want to authenticate users that are outbound
without having to configure their browser.

I asked this question a couple of months back and there are people
stating that they are doing the authentication with transparent mode.
Some of the references I've found in my searches also seem to
corroborate the possibility of this working (but it's not working for
me).  However, in the documentation it seems that this should not be
possible.  Am I barking up the wrong tree or is this truly possible?

You cannot.

You are mixing two very different and incompatible things:

- Transparent/intercepting proxy
- NTLM transparent (silent) authentication, also known as Windows 
integrated authentication
http://wiki.squid-cache.org/SquidFaq/InterceptionProxy#head-e56904dd4dfe0e21e5c2903473c473d401533ac7

Regards and happy New Year

Guido



-

Guido Serassio
Acme Consulting S.r.l. - Microsoft Certified Partner
Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
Tel. : +39.011.9530135  Fax. : +39.011.9781115
Email: guido.seras...@acmeconsulting.it
WWW: http://www.acmeconsulting.it/





[squid-users] url_regex help

2009-01-02 Thread Dean Weimer
I have an internal web server that some users are accessing through a proxy.
It's an older server that badly needs to be replaced, but until the developers
get around to migrating the applications I have to solve a problem.  Basically
it is serving some Excel and PDF files, but the server responds with a 304
Not Modified even though the files have been updated, causing Squid to serve
the cached file instead of the updated one.

I was able to use an ACL with the dstdomain option and a no_cache deny
line to stop it from caching the server entirely.  However, as this machine is
quite slow, I would like to still cache the HTML and images, as those work
correctly.  When using the url_regex lines below to keep just the Excel and PDF
files from being cached, I am still getting some TCP_MEM_HIT entries in the
access logs for these files.  I should probably mention that I disabled the disk
cache for now on this system while figuring this problem out; all actual web
requests are forwarded through another proxy that is still caching on disk, and
only the internal web applications go direct.

Here's what I have; does anyone have an idea where I went wrong?
I am running Squid 3.0 Stable 9 on FreeBSD 6.2
acl NOCACHEPDF url_regex -i ^http://hostname.\*pdf$
acl NOCACHEXLS url_regex -i ^http://hostname.\*xls$
no_cache deny NOCACHEPDF NOCACHEXLS

I have used cat combined with awk and grep to check the pattern matching on the access logs with:
cat /usr/local/squid/var/logs/access.log | awk '{print $7}' | grep -e ^http://hostname.\*pdf$
cat /usr/local/squid/var/logs/access.log | awk '{print $7}' | grep -e ^http://hostname.\*xls$

This correctly matches all the entries I want to stop caching, and none of the
ones I don't.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co


Re: [squid-users] NTLM and transparent/interception confusion

2009-01-02 Thread Kinkie
Could you try to get a network trace of a successfully authenticated
http transaction?
I would love to see how they do it...
Thanks!
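
Something along these lines should do it (a rough sketch; the interface,
client address and capture file name are just examples, adjust them to your
network):

# capture one client's traffic to/through the Bluecoat in full, then open
# the file in Wireshark to look at the NTLM exchange
tcpdump -i eth0 -s 0 -w ntlm-trace.pcap host 10.0.0.50 and port 80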

On 1/2/09, Johnson, S sjohn...@edina.k12.mn.us wrote:
 That's too bad...  I've set up numerous Bluecoat proxies and they do
 have this capability.  But of course, you're paying about $50k usd /
 box.

 -Original Message-
 From: Guido Serassio [mailto:guido.seras...@acmeconsulting.it]
 Sent: Thursday, January 01, 2009 4:00 AM
 To: Johnson, S; squid-users@squid-cache.org
 Subject: Re: [squid-users] NTLM and transparent/interception confusion

 Hi,

 At 20.06 31/12/2008, Johnson, S wrote:
I've been doing a lot of reading on this...  I've got the proxy working
in either of these two modes:
1) As a browser configuration proxy
2) with http_port 3128 transparent, in redirected mode

I've got NTLM authentication working just fine with #1 above.  However,
with #2 I never get a password prompt.  I don't really care about
transparency; I just want to authenticate users that are outbound
without having to configure their browser.

I asked this question a couple of months back and there are people
stating that they are doing the authentication with transparent mode.
Some of the references I've found in my searches also seem to
corroborate the possibility of this working (but it's not working for
me).  However, in the documentation it seems that this should not be
possible.  Am I barking up the wrong tree or is this truly possible?

 You cannot.

 You are mixing two very different and incompatible things:

 - Transparent/intercepting proxy
 - NTLM transparent (silent) authentication, also known as Windows
 integrated authentication
 http://wiki.squid-cache.org/SquidFaq/InterceptionProxy#head-e56904dd4dfe0e21e5c2903473c473d401533ac7

 Regards and happy New Year

 Guido



 -
 
 Guido Serassio
 Acme Consulting S.r.l. - Microsoft Certified Partner
 Via Lucia Savarino, 1   10098 - Rivoli (TO) - ITALY
 Tel. : +39.011.9530135  Fax. : +39.011.9781115
 Email: guido.seras...@acmeconsulting.it
 WWW: http://www.acmeconsulting.it/






-- 
/kinkie


[squid-users] squid restarts itself

2009-01-02 Thread wh
Hello again, and Happy New Year to all.

Today I decided to review the cache.log file to see how things were
running, after receiving some complaints from users that there hasn't
been Internet access a couple of times. I noticed that Squid is restarting
itself every once in a while. I don't know what's going on with Squid or
my configuration, but I'm getting a lot of errors and it's not working
properly. Please help me figure out what's wrong. Thank you in advance
for your help.

Here's my squid.conf file:
# Port Squid listens on
http_port 192.168.2.1:3128 transparent

# Access-lists (ACLs) will permit or deny hosts to access the proxy
#acl lan-access src 192.168.1.0/255.255.255.0
acl lan-access src 192.168.2.0/255.255.255.0
acl localhost src 127.0.0.1
acl all src 0.0.0.0/0.0.0.0


# Access rule
http_access allow localhost
http_access allow lan-access
http_access deny all

maximum_object_size 100 MB

cache_mem 100 MB

access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log /var/log/squid/store.log

cache_dir ufs /var/log/squid/cache 10 255 255

htcp_port 0
icp_port 0

extension_methods SEARCH NICK

-

Now, here's part of the cache.log file:

2008/12/19 12:32:53| Done reading /var/log/squid/cache swaplog (426776
entries)
2008/12/19 12:32:53| Finished rebuilding storage from disk.
2008/12/19 12:32:53|419973 Entries scanned
2008/12/19 12:32:53| 0 Invalid entries.
2008/12/19 12:32:53| 0 With invalid flags.
2008/12/19 12:32:53|418997 Objects loaded.
2008/12/19 12:32:53| 0 Objects expired.
2008/12/19 12:32:53|   640 Objects cancelled.
2008/12/19 12:32:53|  5918 Duplicate URLs purged.
2008/12/19 12:32:53|   192 Swapfile clashes avoided.
2008/12/19 12:32:53|   Took 46.1 seconds (9096.9 objects/sec).
2008/12/19 12:32:53| Beginning Validation Procedure
2008/12/19 12:32:54|   262144 Entries Validated so far.
2008/12/19 12:32:54| storeLateRelease: released 45 objects
2008/12/19 12:32:56|   Completed Validation Procedure
2008/12/19 12:32:56|   Validated 826549 Entries
2008/12/19 12:32:56|   store_swap_size = 24545080
2008/12/19 12:49:23| clientParseRequestMethod: Unsupported method in
request '^C'
2008/12/19 12:49:23| clientProcessRequest: Invalid Request
2008/12/19 12:53:19| WARNING: unparseable HTTP header field {POST
/mortalfm/ HTTP/1.0}
2008/12/19 13:01:32| WARNING: 1 swapin MD5 mismatches
2008/12/19 13:08:19| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:08:35| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:08:55| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:09:23| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:09:36| clientNatLookup: NF getsockopt(SO_ORIGINAL_DST)
failed: (2) No such file or directory
2008/12/19 13:09:44| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:10:00| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:10:37| clientNatLookup: NF getsockopt(SO_ORIGINAL_DST)
failed: (2) No such file or directory
2008/12/19 13:10:37| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:11:02| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:11:18| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:11:34| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:11:50| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:12:06| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:12:22| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:12:38| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:12:54| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:13:22| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:13:38| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:13:55| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:14:13| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:14:36| clientNatLookup: NF getsockopt(SO_ORIGINAL_DST)
failed: (2) No such file or directory
2008/12/19 13:14:36| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:14:52| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:15:23| client_side.cc(2699) WARNING! Your cache is running
out of filedescriptors
2008/12/19 13:33:35| clientParseRequestMethod: Unsupported method in
request '^C'
2008/12/19 13:33:35| 

Re: [squid-users] url_regex help

2009-01-02 Thread Guillaume Smet
On Fri, Jan 2, 2009 at 5:13 PM, Dean Weimer dwei...@orscheln.com wrote:
 Here's what I have; does anyone have an idea where I went wrong?
 I am running Squid 3.0 Stable 9 on FreeBSD 6.2
 acl NOCACHEPDF url_regex -i ^http://hostname.\*pdf$
 acl NOCACHEXLS url_regex -i ^http://hostname.\*xls$
 no_cache deny NOCACHEPDF NOCACHEXLS

 I have used cat combined with awk and grep to check the pattern matching on the access logs with:
 cat /usr/local/squid/var/logs/access.log | awk '{print $7}' | grep -e ^http://hostname.\*pdf$
 cat /usr/local/squid/var/logs/access.log | awk '{print $7}' | grep -e ^http://hostname.\*xls$

The \ character before the * is only necessary to prevent your shell
from expanding the wildcard, because you didn't put your regexp in
quotes.
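
For example, with the pattern in single quotes the backslash isn't needed
on the command line either (a sketch reusing the log path from your message):

# the quotes keep the shell away from the pattern, so the regex stays plain
awk '{print $7}' /usr/local/squid/var/logs/access.log | grep -i '^http://hostname.*pdf$'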

In your Squid conf file, you just need:
acl NOCACHEPDF url_regex -i ^http://hostname.*pdf$
acl NOCACHEXLS url_regex -i ^http://hostname.*xls$

But if I were you, I'd use:
acl NOCACHEXLS url_regex -i ^http://hostname/.*\.xls$
acl NOCACHEPDF url_regex -i ^http://hostname/.*\.pdf$
which is more precise and more correct IMHO.

Or shorter:
acl NOCACHE url_regex -i ^http://hostname/.*\.(pdf|xls)$

-- 
Guillaume


RE: [squid-users] url_regex help

2009-01-02 Thread Dean Weimer
That worked, thanks a lot.  I used your advice on a single rule instead of two
as well.

Thanks,
 Dean Weimer
 Network Administrator
 Orscheln Management Co

-Original Message-
From: Guillaume Smet [mailto:guillaume.s...@gmail.com] 
Sent: Friday, January 02, 2009 3:19 PM
To: Dean Weimer
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] url_regex help

On Fri, Jan 2, 2009 at 5:13 PM, Dean Weimer dwei...@orscheln.com wrote:
 Here's what I have; does anyone have an idea where I went wrong?
 I am running Squid 3.0 Stable 9 on FreeBSD 6.2
 acl NOCACHEPDF url_regex -i ^http://hostname.\*pdf$
 acl NOCACHEXLS url_regex -i ^http://hostname.\*xls$
 no_cache deny NOCACHEPDF NOCACHEXLS

 I have used cat combined with awk and grep to check the pattern matching on the access logs with:
 cat /usr/local/squid/var/logs/access.log | awk '{print $7}' | grep -e ^http://hostname.\*pdf$
 cat /usr/local/squid/var/logs/access.log | awk '{print $7}' | grep -e ^http://hostname.\*xls$

The \ character before the * is only necessary to prevent your shell
from expanding the wildcard, because you didn't put your regexp in
quotes.

In your Squid conf file, you just need:
acl NOCACHEPDF url_regex -i ^http://hostname.*pdf$
acl NOCACHEXLS url_regex -i ^http://hostname.*xls$

But if I were you, I'd use:
acl NOCACHEXLS url_regex -i ^http://hostname/.*\.xls$
acl NOCACHEPDF url_regex -i ^http://hostname/.*\.pdf$
which is more precise and more correct IMHO.

Or shorter:
acl NOCACHE url_regex -i ^http://hostname/.*\.(pdf|xls)$

-- 
Guillaume


RE: [squid-users] Extra Squid process?

2009-01-02 Thread Alan Lehman
On Wed, 29 Mar 2006 04:58:10 -0800, Henrik Nordstrom said:

With Squid you will see the following ports in use:

a) The TCP ports specified by http_port (and/or https_port) in LISTEN
state, and any client connections open to these.

b) UDP icp_port, snmp_port and htcp_port

c) One additional random UDP port for DNS

d) Random TCP connections over the loopback (127.0.0.1) interface to
each helper, all in CONNECTED state.


You should NOT see random TCP ports in LISTEN state. If you do, then you
have probably set http_port 0 in your squid.conf.
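
A quick way to double-check this on the box itself (a sketch; assumes a
Linux host with net-tools, run as root so -p can show the owning process):

# listening sockets owned by squid: expect only your http_port/https_port
# in LISTEN, plus one random UDP port for DNS
netstat -tulnp | grep squid
# helper connections show up as connected TCP pairs over 127.0.0.1
netstat -tnp | grep squid | grep 127.0.0.1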

Regards
Henrik