Re: [squid-users] Fwd: %path% in acl list squid 2.6

2010-08-19 Thread Amos Jeffries
On Thu, 19 Aug 2010 11:08:00 +0530, sushi squid sushi.sq...@gmail.com
wrote:
 Thanks Amos and John,
 I am glad that you are all replying so fast (at least faster than me
 coming back and checking the solution :) )
 
 I have a few doubts about both Amos's and John's replies, and a new question
 
 @Amos's solution: Mine is a transparent proxy, sorry I didn't mention
 that last time... and I read that proxy_auth won't work with transparent
 proxy settings. I also read that there has to be an external
 authentication program, but I didn't understand which credentials it
 should check.
 I just want a different blocklist/whitelist to be used for each user
 (without the user being asked for a password).

Given that:
 * you earlier said this was on Windows XP
 * the use of %userprofile% variable indicates that it is running directly
on the box the user is logged into with their profile in the main registry
view.
 * NAT interception (transparent) is not available in the supported
Squid releases
 * access to NAT tables on Windows requires replacing the whole networking
stack anyway
...
 What do you mean by "transparent" then?

 
 @John's solution: I didn't understand what you meant by "I have to
 manage the whitelist on my own"?
 
 The new question is ... about fail-safe with squid...
 I want to use 2 servers, one for fail-safe
 so is this configuration right ??
 
 cache_peer IPAddressOfMainServer parent 3128 0 default
 cache_peer IPAddressOfFail-SafeServer sibling 3128 0 proxy-only
 
 and then do I need to add the addresses to dns_nameservers?
 
 dns_nameservers IPAdressOfMainServer
 dns_nameservers IPAdressOfFail-SafeServer
 
 cache_peer is mainly for load balancing; will this setting work?

One question at a time please.

You can work on building more complex systems after you sort out the
fundamental question of who is and is not allowed access and how to
identify them.

 
 On Wed, Aug 18, 2010 at 5:19 AM, Amos Jeffries squ...@treenet.co.nz
 wrote:
 
 On Tue, 17 Aug 2010 22:37:31 +0530, sushi squid sushi.sq...@gmail.com
 wrote:
  Thanks JD for the reply,
  My Problem is this ...
  Imagine a system with three accounts:
  1)Administrator
  2)John
  3)Sushi
  I want the path in the config file to be such that
  when John logs in he has a different block list, and when Sushi logs in
  a different block list is loaded.
 
  This has to be done with a single installation of squid.
  Any ideas?

 I suggest forgetting about loading config on login. That requires that
 Squid load and start up during their login, which may not be realistic,
 particularly when running as a system service, or on a different box
 altogether.

 Find some measure to identify the users inside Squid and structure your
 access controls to identify the user before testing the user-specific
 ACL.
 User AD account name would be a good choice here since it's logins you
 want to base things on. The mswin_* helpers bundled with the Windows
 builds of squid contact the local AD/SSPI directly.

 Each http_access line (and the other access rule types) is tested
 left-to-right along the line. So a config like this:

  acl userJohn proxy_auth john
  acl userBob proxy_auth bob
  acl userJohnBlocklist dstdomain C:/userJohnBlocklist.txt
  acl userBobBlocklist dstdomain C:/userBobBlocklist.txt

  http_access allow userJohn !userJohnBlocklist
  http_access allow userBob !userBobBlocklist
  http_access deny all

 will only block requests which match userJohn using the
 userJohnBlocklist list, and vice versa for userBob and his list.

 Amos

 
  On 8/17/10, John Doe jd...@yahoo.com wrote:
  From: sushi squid sushi.sq...@gmail.com
 
  I am a newbie in squid ... my squid config file is giving some
 strange
  error
  My OS is Windows XP and squid version is 2.6Stable
  In  the acl permission list the path is as follows
  acl goodsite url_regex -i  %userprofile%/whitelist.txt
 
  Maybe I am wrong but I do not think squid will resolve your
 %userprofile%
  variable...
 
  JD
 
 
 
 



Re: [squid-users] Fwd: %path% in acl list squid 2.6

2010-08-19 Thread John Doe
From: sushi squid sushi.sq...@gmail.com
On Wed, Aug 18, 2010 at 2:52 PM, John Doe jd...@yahoo.com wrote:
Either Amos's way, or you could use an external_acl script that will be able to
get %userprofile%, but that means you will have to handle the whitelist
yourself...
@John's solution: I didn't understand what you meant by "I have to manage the 
whitelist on my own"?

Did you check how external_acl works?
You have to write the external acl script... so you have to write the whitelist 
code too.
Squid calls your script and gives you a few variables (url, etc...).
Your script does whatever checks it wants and tells squid whether it is
allowed or not.
For example, your script could:
 - check %userprofile%
 - Select the right whitelist
 - Check if the requested url is in the whitelist
 - Allow or deny.

JD


  


Re: [squid-users] High load server Disk problem

2010-08-19 Thread Amos Jeffries

Robert Pipca wrote:

Hi,

2010/8/18 Jose Ildefonso Camargo Tolosa ildefonso.cama...@gmail.com:

Yeah, I missed that last night (I was sleepy, I guess), thank God you
people are around! Still, he would need faster disk access, unless
he is talking about 110Mbps (~12MB/s) instead of 110MB/s (~1Gbps).

So, Robert, is that 110Mbps or 1Gbps?


We have 110Mbps of HTTP network traffic (the actual bandwidth is around
250Mbps, but I'm talking about HTTP only).

But aufs doesn't seem to behave that well. I have it on XFS mounted
with noatime. I ran squid -z on the aufs cache_dir to see if aufs
behaves better with fewer objects. It does, but I still get quite a lot
of these:

2010/08/18 21:22:20| squidaio_queue_request: WARNING - Disk I/O overloading
2010/08/18 21:22:35| squidaio_queue_request: WARNING - Disk I/O overloading
2010/08/18 21:22:35| squidaio_queue_request: Queue Length:
current=533, high=777, low=321, duration=20
2010/08/18 21:22:50| squidaio_queue_request: WARNING - Disk I/O overloading
2010/08/18 21:22:50| squidaio_queue_request: Queue Length:
current=669, high=777, low=321, duration=35
2010/08/18 21:23:05| squidaio_queue_request: WARNING - Disk I/O overloading
2010/08/18 21:23:05| squidaio_queue_request: Queue Length:
current=422, high=777, low=321, duration=50
2010/08/18 21:23:22| squidaio_queue_request: WARNING - Disk I/O overloading
2010/08/18 21:41:46| squidaio_queue_request: WARNING - Queue congestion

So, duration keeps growing... so the problem will occur again.

Now, it seems that COSS behaves very nicely.

I'd like to know if I can adjust the max-size option of coss with
something like --with-coss-membuf-size? Or is it really hard-coded?


It can be altered but not to anything big...



I use the aufs cache_dir to do youtube and windowsupdate caches. So if
I could increase max-size of the coss cache_dirs to around 100MB, I
could leave the aufs cache_dir to windowsupdate files only (which are
around 300MB+). Is it possible?


No.  The buf/slices are equivalent to swap pages for COSS. Each is 
swapped in/out of disk as a single slice of the total cache. Objects are 
arranged on them with temporal locality so that ideally requests from 
one website or webpage all end up together on a single slice.
 The theory being that clients only have to wait for the relevant COSS 
slice for their requested webpage to be swapped into RAM, and all their 
small followup requests for .js, .css, images etc. get served directly 
from there.


Your COSS dirs are already sized at nearly 64GB each (65520 MB), with 
objects up to 1MB stored there. That holds most Windows updates, which 
are usually only a few hundred KB each.
I'm not sure what your slice size is, but 15 of them are stored in RAM 
at any given time. You may want to increase that membuf= parameter a 
bit, or reduce the individual COSS dir size (requires a COSS dir erase 
and rebuild).
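For reference, the knobs mentioned here all live on the COSS cache_dir line in squid 2.x; the path and values below are placeholders, not recommendations (check squid.conf.documented for your build):

```
# 65520 MB COSS dir, 1MB object cap; membufs= controls how many
# stripes are held in RAM at once (the "15" mentioned above)
cache_dir coss /cache/coss0 65520 max-size=1048576 membufs=15
```

As noted above, shrinking the dir means erasing and rebuilding it with squid -z, whereas raising membufs= only changes how much RAM is used at runtime.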



The rule-of-thumb for effective swap management seems to be storing 5min 
of data throughput in memory to avoid overly long disk IO wait times. 
Assuming an average hit-rate of around 20%, that comes to needing 1min of 
full HTTP bandwidth in memory (combined: cache_mem RAM cache + COSS 
membuf) at any given time.


Disclaimer: that's just my second-rate interpretation of a recent thesis 
presentation on memory vs flash vs disk service. So testing is recommended.
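As a back-of-envelope check of that rule of thumb against the 110 Mbps HTTP figure from earlier in this thread (treating the 1-minute figure as given, not as an endorsement):

```python
def ram_for_inflight_mb(http_mbps, seconds=60):
    """MB of combined cache_mem + COSS membuf suggested by the
    '1 minute of full HTTP bandwidth in memory' rule of thumb."""
    return http_mbps / 8.0 * seconds  # Mbps -> MB/s, times the window

# 110 Mbps of HTTP traffic works out to roughly 825 MB to hold in RAM
print(ram_for_inflight_mb(110))
```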



It may also be time for you to perform an HTTP object analysis.

 This involves grabbing a period of the logs and counting how many 
objects go through your proxy, grouped into regular size brackets 
(0-512B, -1KB, -2KB, -4KB, -8KB, -16KB, -32KB, ...).

[there are likely tools out there that do this for you.]
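A sketch of that counting pass over a native-format access.log, where the 5th whitespace-separated field is the reply size in bytes; the doubling bracket boundaries are an arbitrary choice matching the list above:

```python
from collections import Counter

def bracket(nbytes):
    """Smallest doubling bracket (512, 1024, 2048, ...) holding nbytes."""
    limit = 512
    while nbytes > limit:
        limit *= 2
    return limit

def histogram(lines):
    """Count objects per size bracket from native access.log lines."""
    counts = Counter()
    for line in lines:
        fields = line.split()
        if len(fields) > 4 and fields[4].isdigit():
            counts[bracket(int(fields[4]))] += 1
    return counts

# Usage sketch:
#   for limit, n in sorted(histogram(open("access.log")).items()):
#       print("<= %8d bytes: %d" % (limit, n))
```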

 There are three peaks that appear in these counts: one usually near 
zero for the IMS requests; one in the low 4KB-128KB range for the general 
image/page/script content; and one around the low 1MB-50MB range for 
video media objects. Between these last two peaks there is a dip. IMO the 
min-size/max-size boundary between COSS and AUFS should be set somewhere 
around the low point of that dip.


 The bigger group of objects are popular, but too large for COSS to 
swap in/out efficiently; AUFS handles these very nicely. The objects in 
the smaller bump are the reverse: too small to wait for individual AUFS 
swap in/out, and likely to be clustered in the highly inter-related 
webpage bunches that COSS handles very well.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.6
  Beta testers wanted for 3.2.0.1


[squid-users] Unusual behaviour when linking ACLs to delay pools

2010-08-19 Thread Richard Greaney
Hi all

I have a problem so strange it's almost laughable.

I'm trying to set up a site with delay pools, but I only want to
forward members of a particular Active Directory group to the delay
pool. I have an authenticator that I have used on countless sites,
which checks to see whether a given user belongs to an AD group,
nested or otherwise. When I put a user in this group and use my acl to
prevent that group from say, accessing a website, it blocks them as
expected. When I apply that same ACL against the delay pool, however,
it doesn't send members into the pool. However, if I alter the ACL to
check for membership of ANOTHER group, then they ARE sent into the
pool. Confused?

Here's my config:

-
external_acl_type ldap_group ttl=70 %LOGIN
/usr/local/squid/libexec/squid/squid_ldap_group.pl #custom
authenticator to check for membership of nested AD groups
auth_param basic program /usr/local/squid/libexec/squid/adauth.pl
#custom authenticator to verify a user/pass combination are correct

delay_initial_bucket_level 100
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 8000/2048

acl all src 0.0.0.0/0.0.0.0
acl validusers proxy_auth REQUIRED
acl badfiles urlpath_regex -i /etc/squid/badfiles.txt
acl throttled external ldap_group Internet-Throttled
acl inetallow external ldap_group Internet-Allow
acl inetdisallow external ldap_group Internet-Disallow

delay_access 1 allow throttled
delay_access 1 deny all

http_access deny throttled badfiles
--

So if I put a user in the group Internet-Throttled, they won't be
sent into the pool, but will be prohibited from downloading files in
the badfiles.txt list. Group membership testing is working for the
http_access deny, but not for delay_access.
But if I alter the definition of the 'throttled' acl so it refers to
members of the AD group Internet-Allow, then all members of that
group ARE sent to the delay pool.

I'm finding it hard to attribute blame anywhere. It seems to be that
it can't be the authenticator, the group, or the delay pool syntax as
they all work fine under certain circumstances.

Any advice is greatly welcomed.

Thanks
Richard


Re: [squid-users] Slow basic authentication

2010-08-19 Thread Amos Jeffries

Bucci, David G wrote:

Hi - I've got Squid configured on both the client and server (reference recent 
discussions on establishing an SSL tunnel for all traffic from a client to a 
server -- I'm using that configuration, though I've yet to turn on the SSL).

I'm seeing inconsistent and generally slow behavior when accessing our origin 
server, which requires basic authentication.  Sometimes the browser prompts for 
uid/pw, sometimes it doesn't, and often it takes a long time.



Since you don't have the SSL yet, it should be easy to grab a packet 
trace of the headers flowing between the two Squids and see what's going 
on that takes so long.



Using the Windows distro of 2.7 from Acme, build 8.

Are there any tuning options necessary when caching against servers that send 
back a 401 initially?  Though I didn't think it was correct, I've tried 
login=PASS on the cache_peer line in the client.



Should not matter; 401 challenge headers are supposed to be passed 
straight through Squid.



Note that I have cache deny all set on both the client and the server, and 
proxy-only in the client's cache_peer parent line -- we're proxying access to 
web service calls, all of which should return unique results, so no caching is 
needed/wanted.



Squid still needs to pass them through the store while in transit. Ensuring 
the presence of a Content-Length header can prevent Squid falling back on 
disk storage for temporary unknown-length objects. And a cache_mem at 
least big enough to hold the required in-transit objects lets them fly 
past quickly.


I don't think that is related to the problem though.


When I set Firefox to NOT use the proxy, there is no slowdown; I get 
immediately prompted for uid/pw.



Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.6
  Beta testers wanted for 3.2.0.1


Re: [squid-users] Unusual behaviour when linking ACLs to delay pools

2010-08-19 Thread Amos Jeffries

Richard Greaney wrote:

Hi all

I have a problem so strange it's almost laughable.

I'm trying to set up a site with delay pools, but I only want to
forward members of a particular Active Directory group to the delay
pool. I have an authenticator that I have used on countless sites,
which checks to see whether a given user belongs to an AD group,
nested or otherwise. When I put a user in this group and use my acl to
prevent that group from say, accessing a website, it blocks them as
expected. When I apply that same ACL against the delay pool, however,
it doesn't send members into the pool. However, if I alter the ACL to
check for membership of ANOTHER group, then they ARE sent into the
pool. Confused?


Highly likely that the membership assignment or lookup of the group you 
want is not working in the background authentication systems.




Here's my config:

-
external_acl_type ldap_group ttl=70 %LOGIN
/usr/local/squid/libexec/squid/squid_ldap_group.pl #custom
authenticator to check for membership of nested AD groups
auth_param basic program /usr/local/squid/libexec/squid/adauth.pl
#custom authenticator to verify a user/pass combination are correct

delay_initial_bucket_level 100
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 8000/2048

acl all src 0.0.0.0/0.0.0.0
acl validusers proxy_auth REQUIRED
acl badfiles urlpath_regex -i /etc/squid/badfiles.txt
acl throttled external ldap_group Internet-Throttled
acl inetallow external ldap_group Internet-Allow
acl inetdisallow external ldap_group Internet-Disallow

delay_access 1 allow throttled
delay_access 1 deny all

http_access deny throttled badfiles
--

So if I put a user in the group Internet-Throttled, they won't be
sent into the pool, but will be prohibited from downloading files in
the badfiles.txt list. Group membership testing is working for the
http_access deny, but not for delay_access
But if I alter the definition of the 'throttled' acl so it refers to
members of the AD group Internet-Allow, then all members of that
group ARE sent to the delay pool

I'm finding it hard to attribute blame anywhere. It seems to be that
it can't be the authenticator, the group, or the delay pool syntax as
they all work fine under certain circumstances.

Any advice is greatly welcomed.

Thanks
Richard


Alternatively...

delay_access is what we call a fast group access control.

This category is tested so often on high-speed pathways that it can only 
use the data immediately available in memory and will not do remote 
lookups for auth or external helper results.


They will *sometimes* be able to use cached in-memory results from 
previous lookups. So the slow category ACL types are not 
prohibited in fast category access controls. But they are not 
guaranteed to work 100% of the time either.



I suspect your http_access rules are different when testing for the two 
groups, in such a way that the throttled ACL never gets tested in 
http_access (so its result is never cached for delay_access to reuse).



My favorite hack for pre-caching these types of lookup results for later 
use is to test the ACL by itself early in the config with !all tacked on 
the end of the line (which prevents the line as a whole matching and 
doing the allow/deny).


i.e.
  http_access deny throttled !all
  http_access deny inetallowed !all


Amos
--
Please be using
  Current Stable Squid 2.7.STABLE9 or 3.1.6
  Beta testers wanted for 3.2.0.1


Re: [squid-users] High load server Disk problem

2010-08-19 Thread Robert Pipca
Hi,

2010/8/19 Amos Jeffries squ...@treenet.co.nz:
 I'd like to know if I can adjust the max-size option of coss, with
 something like --with-coss-membuf-size ? Or is really hard-coded?

 It can be altered but not to anything big...

What's something not big? Around 10MB?

Does --with-coss-membuf-size=10485760 do the trick?

 No.  The buf/slices are equivalent to swap pages for COSS. Each is swapped
 in/out of disk as a single slice of the total cache. Objects are arranged on
 them with temporal locality so that ideally requests from one website or
 webpage all end up together on a single slice.

That's the stripe, right?

 Your COSS dirs are already sized at nearly 64GB each (65520 MB). With
 objects up to 1MB stored there. That holds most Windows updates, which are
 usually only a few hundred KB each.

Actually I fetch windowsupdate not with ranges but as the full
file, since windowsupdate requests from windows machines only send out
partial content requests, but with the same URL.

 I'm not sure what your slice size is, but 15 of them are stored in RAM at
 any given time. You may want to increase that membuf= parameter a bit, or
 reduce the individual COSS dir size (requires a COSS dir erase and rebuild).

Hmm, I could only get it up to 15 with the current build. But I'll
test to see how big I can increase it.

 The rule-of-thumb for effective swap management seems to be storing 5min of
 data throughput in memory to avoid overly long disk IO wait times. Assuming
 an average hit-rate of around 20% that comes to needing 1min of full HTTP
 bandwidth in memory either (combined: cache_mem RAM cache + COSS membuf) at
 any given time.

 Disclaimer: thats just my second-rate interpretation of a recent thesis
 presentation on memory vs flash vs disk service. So testing is recommended.

That's a good tip, thanks :)

- Robert


RE: Re: [squid-users] Slow basic authentication

2010-08-19 Thread Bucci, David G
Thank you, Amos -- it had to do with the dual NICs on the server, and weird 
routing between the two subnets represented.

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Thursday, August 19, 2010 5:52 AM
To: squid-users@squid-cache.org
Subject: EXTERNAL: Re: [squid-users] Slow basic authentication

Bucci, David G wrote:
 Hi - I've got Squid configured on both the client and server (reference 
 recent discussions on establishing an SSL tunnel for all traffic from a 
 client to a server -- I'm using that configuration, though I've yet to turn 
 on the SSL).
 
 I'm seeing inconsistent, and generally slow behavior when accessing our 
 origin server, which requires basic authentication.  Sometimes the browser 
 prompts for uid/pw, sometimes it doesn't, and often it takes a lng time.
 

Since you don't have the SSL yet, it should be easy to grab a packet trace of 
the headers flowing between the two Squids and see what's going on that takes 
so long.

 Using the Windows distro of 2.7 from Acme, build 8.
 
 Are there any tuning options necessary when caching against servers that send 
 back a 401 initially?  Though I didn't think it was correct, I've tried 
 login=PASS on the cache_peer line in the client.
 

Should not matter; 401 challenge headers are supposed to be passed straight 
through Squid.

 Note that I have cache deny all set, on both the client and the server, and 
 proxy-only in the client's cache_peer parent line -- we're proxying access to 
 web service calls, all of which should return unique results, so no caching 
 needed/wanted.
 

Squid still needs to pass them through store in transit. Ensuring the presence 
of a Content-Length header can prevent Squid falling back on disk storage for 
temporary unknown-length objects. And a cache_mem at least big enough to store 
the required in-transit ones lets them fly past quickly.

I don't think that is related to the problem though.

 When I set Firefox to NOT use the proxy, there is no slowdown; I get 
 immediately prompted for uid/pw.
 

Amos
--
Please be using
   Current Stable Squid 2.7.STABLE9 or 3.1.6
   Beta testers wanted for 3.2.0.1


[squid-users] Any way to disable authentication on a specific URL ?

2010-08-19 Thread Fosters

Hi,
I'd just like to know if I can disable the LDAP authentication for a user
accessing a given URL, so that he will go through squid without being asked
anything, but when asking for another URL he will be required to
authenticate?

Thanks for your help.
-- 
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Any-way-to-disable-authentication-on-a-specific-URL-tp2331235p2331235.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] NTLM authentication login popups

2010-08-19 Thread Tuan Nguyen
http://wiki.squid-cache.org/Features/Authentication#How_do_I_prevent_Login_Popups.3F

I have followed the above instruction but still getting login popups.
Basically I'm trying to force an Access Denied page to be displayed to
unauthenticated users instead of the popup. Any help would be much
appreciated. Thanks.

squid.conf:
...
acl ntlmauth proxy_auth REQUIRED
http_access deny ntlmauth all


[squid-users] high load issues

2010-08-19 Thread Johnson, S

I put a new squid/dansguardian in place duplicating what I had for a couple of 
other networks.   The proxy is configured for everyone going through one of two 
groups with the ability in the 2nd group to elevate their privileges to bypass 
the filter by clicking on a link in the denied page.  The authentication is 
done to our AD server using winbind.

All of that worked great in testing with fewer than 10 people using it...

However, when deployed to 50-100 people, I was getting sporadic page drops when 
browsing.  Sometimes there would be a long pause then a page would be 
displayed: Unable to connect in firefox.  Other times it would immediately 
drop into that Unable to connect page.  By clicking refresh the page would 
then open up.  There seemed to be no rhyme or reason why sometimes it would 
drop. Even very simple sites like Google would sometimes do this.  When 
this happens, there is absolutely ZERO in the log files showing that the 
user even tried to browse a site.

The utilization on the server is very low (under 5% for proc) and there's 
plenty of RAM (~4gb).

I examined Squid for performance / proc / memory adjustments but nothing really 
jumped out at me as a potential issue.  Do you think that this may be an issue 
with Squid or perhaps winbind not able to do the authentication?

Thanks.


Re: [squid-users] Any way to disable authentication on a specific URL ?

2010-08-19 Thread Kinkie
On Thu, Aug 19, 2010 at 4:03 PM, Fosters comeonle...@gmail.com wrote:

 Hi,
 I'd just like to know if I can disable the LDAP authentication for a user
 accessing a given url.
 So that he will go through squid without being asked anything but when
 asking for another url he will be required to authenticate ?

 Thanks for your help.

Sure!
Please see http://wiki.squid-cache.org/ConfigExamples/Authenticate/Bypass
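The pattern on that wiki page boils down to rule ordering: match the no-auth destination with a fast ACL before any proxy_auth ACL is evaluated, since the authentication challenge is only triggered when an auth ACL is actually tested. A minimal sketch (the domain is a placeholder):

```
# Requests for this site skip authentication entirely
acl noauthsite dstdomain .example.com
http_access allow noauthsite

# Everything else must authenticate
acl ldapusers proxy_auth REQUIRED
http_access allow ldapusers
http_access deny all
```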


-- 
    /kinkie


Re: [squid-users] NTLM authentication login popups

2010-08-19 Thread Kinkie
On Thu, Aug 19, 2010 at 4:45 PM, Tuan Nguyen ug32...@googlemail.com wrote:
 http://wiki.squid-cache.org/Features/Authentication#How_do_I_prevent_Login_Popups.3F

 I have followed the above instruction but still getting login popups.
 Basically I'm trying to force Access Denied page displayed to
 unauthenticated users instead of the popup. Any help would be much
 appreciated. Thanks.

 squid.conf:
 ...
 acl ntlmauth proxy_auth REQUIRED
 http_access deny ntlmauth all

If you change this to
http_access deny all

users failing to successfully authenticate at the first attempt will
get no login popups.
They will still get login popups if:
- the client is not joined to a domain
- the client is configured not to attempt automatic authentication to the proxy
- the client is not MSIE or Firefox (not sure about other browsers)

-- 
    /kinkie


Re: [squid-users] NTLM authentication login popups

2010-08-19 Thread Tuan Nguyen
If I make that change everything will be denied and nothing will be
passed to the NTLM authenticator. Thanks.

On Thu, Aug 19, 2010 at 4:18 PM, Kinkie gkin...@gmail.com wrote:
 On Thu, Aug 19, 2010 at 4:45 PM, Tuan Nguyen ug32...@googlemail.com wrote:
 http://wiki.squid-cache.org/Features/Authentication#How_do_I_prevent_Login_Popups.3F

 I have followed the above instruction but still getting login popups.
 Basically I'm trying to force Access Denied page displayed to
 unauthenticated users instead of the popup. Any help would be much
 appreciated. Thanks.

 squid.conf:
 ...
 acl ntlmauth proxy_auth REQUIRED
 http_access deny ntlmauth all

 If you change this to
 http_access deny all

 users failing to successfully authenticate at the first attempt will
 get no login popups.
 They will still get login popups if:
 - the client is not joined to a domain
 - the client is configured not to attempt automatica authentication to the 
 proxy
 - the clients is not MSIE or Firefox (not sure about other browsers)

 --
     /kinkie



[squid-users] Moving to transparent proxy, SSL questions

2010-08-19 Thread Shawn Wright

Regards, 
We've been running squid in various forms for over 10 years using basic auth 
against our windows domain, and have a lengthy list of ACLs we wish to 
maintain. The major issue we continue to encounter is dumb devices/apps which 
will not proxy correctly (iTunes, iPads/iPods, Android phones, etc.) or will 
not do proxy auth correctly (Skype for Mac). 

Our campus of ~600 users is going wireless next month, and seamless support for 
wireless devices is part of the goal, while still maintaining the content 
control and logging we have now. I have a test proxy with squid 2.6 in 
transparent mode using our Cisco 6000 MSFC router to redirect using WCCP2, and 
this works fine for http traffic. HTTPS traffic is the problem. 

For HTTPS, it seems we have two choices: use SSLbump, and tell our users to 
accept the cert warnings, and/or install our cert; or NAT the SSL traffic. As 
we are a campus environment with 500+ full-time residents including staff, 
SSLbump may be uncomfortable for some users, and may also not achieve the 
seamless experience we're seeking. NAT/Masq of traffic has been the exception 
over the years, so we don't attempt to log this traffic. I am interested in 
hearing from users who have made this transition and have found an acceptable 
solution to SSL traffic that allows for logging and ideally, filtering based on 
source and destination. 

Thanks 


Shawn Wright 
I.T. Manager, Shawnigan Lake School 
http://www.shawnigan.ca 



Re: [squid-users] NTLM authentication login popups

2010-08-19 Thread Kinkie
Sorry, you are right.
You need to

acl ntlmauth proxy_auth REQUIRED
http_access allow ntlmauth
http_access deny all

This is a very simplistic configuration though; please check
squid.conf.documented for a more security-sensitive setup.



On Thu, Aug 19, 2010 at 5:46 PM, Tuan Nguyen ug32...@googlemail.com wrote:
 If I make that change everything will be denied and nothing will be
 passed to the NTLM authenticator. Thanks.

 On Thu, Aug 19, 2010 at 4:18 PM, Kinkie gkin...@gmail.com wrote:
 On Thu, Aug 19, 2010 at 4:45 PM, Tuan Nguyen ug32...@googlemail.com wrote:
 http://wiki.squid-cache.org/Features/Authentication#How_do_I_prevent_Login_Popups.3F

 I have followed the above instruction but still getting login popups.
 Basically I'm trying to force Access Denied page displayed to
 unauthenticated users instead of the popup. Any help would be much
 appreciated. Thanks.

 squid.conf:
 ...
 acl ntlmauth proxy_auth REQUIRED
 http_access deny ntlmauth all

 If you change this to
 http_access deny all

 users failing to successfully authenticate at the first attempt will
 get no login popups.
 They will still get login popups if:
 - the client is not joined to a domain
 - the client is configured not to attempt automatica authentication to the 
 proxy
 - the clients is not MSIE or Firefox (not sure about other browsers)

 --
     /kinkie





-- 
    /kinkie


RE: Re: [squid-users] Microsft WCF net.tcp connections Squidable?

2010-08-19 Thread Bucci, David G
net.tcp appears to be what the rest of the world calls TCP sockets. 
So no, a socket descriptor cannot be transmitted over HTTP.

Ok, thanks.  In research, I'm reading about people compiling Squid with SOCKS5 
support ... would that enable socket-proxying within Squid?  Or is there some 
creative way to generalize/leverage the FTP proxying support (since the control 
channel on an FTP session is a TCP socket, no?)?

If not ... may have to move to ssh tunnels or stunnel or something, for at 
least these connections -- and there are real disadvantages relative to Squid 
with doing that.  Nuts.

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz]
Sent: Wednesday, August 18, 2010 8:30 PM
To: squid-users@squid-cache.org
Subject: EXTERNAL: Re: [squid-users] Microsft WCF net.tcp connections Squidable?

On Wed, 18 Aug 2010 16:50:33 -0400, Bucci, David G
david.g.bu...@lmco.com wrote:
 I've been googling, but can't find any clear indication ... does 
 anyone know if the net.tcp construct, in the Windows Communication 
 Foundations or whatever-the-heck-it's-called, can be proxied through squid?

net.tcp appears to be what the rest of the world calls TCP sockets. So no, a 
socket descriptor cannot be transmitted over HTTP.
The SOAP/JSON/AJAX/today's-fad requests sent down it apparently have meta 
headers formatted as full extensible XML documents instead of simple MIME 
"Foo: data" headers.

Amos



Re: [squid-users] NTLM authentication login popups

2010-08-19 Thread Tuan Nguyen
Thanks Kinkie but I'm still getting the popup window (tried with both
IE and FF). The client machine is joined to a domain. Basically I'm
trying to force an Access Denied page on this client instead of the
popup. The wiki does suggest an "all" hack but it's not working for
me:

http://wiki.squid-cache.org/Features/Authentication#How_do_I_prevent_Login_Popups.3F


On Thu, Aug 19, 2010 at 5:09 PM, Kinkie gkin...@gmail.com wrote:
 Sorry, you are right.
 You need to

 acl ntlmauth proxy_auth REQUIRED
 http_access allow ntlmauth
 http_access deny all

 This is a very simplicistic configuration though, please check
 squid.conf.documented for a more security-sensitive setup.



 On Thu, Aug 19, 2010 at 5:46 PM, Tuan Nguyen ug32...@googlemail.com wrote:
 If I make that change everything will be denied and nothing will be
 passed to the NTLM authenticator. Thanks.

 On Thu, Aug 19, 2010 at 4:18 PM, Kinkie gkin...@gmail.com wrote:
 On Thu, Aug 19, 2010 at 4:45 PM, Tuan Nguyen ug32...@googlemail.com wrote:
 http://wiki.squid-cache.org/Features/Authentication#How_do_I_prevent_Login_Popups.3F

 I have followed the above instructions but am still getting login popups.
 Basically I'm trying to force an Access Denied page to be displayed to
 unauthenticated users instead of the popup. Any help would be much
 appreciated. Thanks.

 squid.conf:
 ...
 acl ntlmauth proxy_auth REQUIRED
 http_access deny ntlmauth all

 If you change this to
 http_access deny all

 users failing to successfully authenticate at the first attempt will
 get no login popups.
 They will still get login popups if:
 - the client is not joined to a domain
 - the client is configured not to attempt automatic authentication to the proxy
 - the client is not MSIE or Firefox (not sure about other browsers)

 --
     /kinkie





 --
     /kinkie



Re: [squid-users] Unusual behaviour when linking ACLs to delay pools

2010-08-19 Thread Richard Greaney
On Thu, Aug 19, 2010 at 10:10 PM, Amos Jeffries squ...@treenet.co.nz wrote:

 This category of ACLs is tested so often on high-speed pathways that they can only use
 the data immediately available in memory and will not do remote lookups for
 auth or external helper results.

 They will *sometimes* be able to use cached in-memory results from previous
 lookups. So the slow category ACL types are not prohibited in fast
 category access controls. But they are not guaranteed to work 100% of the
 time either.


 I suspect your http_access rules are different when testing for the two
 groups, in such a way that the throttled ACL never gets tested in
 http_access (so no cached result is available for delay_access).


 My favorite hack for pre-caching these types of lookup results for later use
 is to test the ACL by itself early in the config with "!all" tacked on the end
 of the line (which prevents the line as a whole from matching and doing the
 allow/deny).

Thanks! And you'd be dead right. That's exactly what was happening.
The test against another group was succeeding as it had already been
used for Internet access by proxy_auth.

I now have another problem, however, in that it appears you can't AND
multiple ACLs to determine whether or not they can access a delay
pool. Say for instance, I wanted to do:

delay_access 1 allow throttled badfiles
delay_access 1 deny all

This would throttle only when members of the 'throttled' acl attempt
to download files matching the 'badfiles' acl. I can apply the pool to one
ACL or the other, but not both. I also tried getting cheeky and
stacking multiple conditions into the ACL definition, e.g.:

acl throttled urlpath_regex -i "/etc/squid/badfiles.txt"
acl throttled external ldap_group Internet-Throttled

But squid doesn't like mixing multiple conditions to make a single acl.

Is there a workaround for this?

Thanks
Richard


Re: [squid-users] high load issues

2010-08-19 Thread Jose Ildefonso Camargo Tolosa
Hi!

That's a DansGuardian issue: I had something similar, especially
with SSL sites.

I just got tired of DansGuardian (I made it work, but from time to
time the problem would come back), and started to use plain squid ACLs
for small lists, plus squidGuard.

I hope this helps,

Ildefonso Camargo

On Thu, Aug 19, 2010 at 10:18 AM, Johnson, S sjohn...@edina.k12.mn.us wrote:

 I put a new squid/dansguardian setup in place, duplicating what I had for a couple 
 of other networks.   The proxy is configured so everyone goes through one 
 of two groups, with the ability in the 2nd group to elevate their privileges 
 to bypass the filter by clicking a link on the denied page.  The 
 authentication is done against our AD server using winbind.

 All of that worked great in testing with fewer than 10 people using it...

 However, when deployed to 50-100 people, I was getting sporadic page drops 
 when browsing.  Sometimes there would be a long pause and then a page would be 
 displayed: "Unable to connect" in Firefox.  Other times it would immediately 
 drop into that "Unable to connect" page.  By clicking refresh the page would 
 then open up.  There seemed to be no rhyme or reason why it would sometimes 
 drop. Even very low-traffic sites like Google would sometimes do this.  When 
 this happens, there is absolutely ZERO in the log files to show that the user 
 even tried to browse a site.

 The utilization on the server is very low (under 5% for proc) and there's 
 plenty of RAM (~4gb).

 I examined Squid for performance / proc / memory adjustments but nothing 
 really jumped out at me as a potential issue.  Do you think this may be 
 an issue with Squid, or perhaps winbind not being able to do the
 authentication?

 Thanks.



[squid-users] proxy 'busy' too often: (errmsg: firefox is configured to use a proxy server that is not responding)

2010-08-19 Thread Linda Walsh




This may be a basic question, but more often than I would like,
if I try to browse too fast I will see a message from Firefox about
it being configured to use a proxy which is not responding.

All I have to do is reload that page and it loads -- i.e. it's a
temporary problem, but it's annoying and time-wasting.  It happens
when I've opened more than one link in the background and squid can't
keep up with the freshly opened links of more than one page.

Obviously, this isn't a squid problem, it's a user-configuration problem,
since squid handles much larger loads than one user opening web pages
very fast, sequentially!  Is there some number of threads somewhere that
I should be looking to turn up?


FWIW, I configure & generate my own squid (from some recent branch -- which one
varies by whether or not I'm seeing some problem and have pulled source to see
if it is fixed (usually is) before reporting it).  But this seems a bit
too uncertain to leave entirely to chance.  I'll include my configure options
below, in case something there glaringly stands out as lame.

Thanks for any ideas...  It happens *infrequently*.  But it shouldn't
be happening at all, so that's why I'm wondering if I have
something misconfigured.

Thanks!
Linda

(configure script follows)

export CFLAGS="-fgcse-after-reload -fpredictive-commoning \
  -frename-registers -ftracer -fbranch-target-load-optimize \
  -fbranch-target-load-optimize2 -march=native"

export CXXFLAGS="$CFLAGS"

./configure --enable-disk-io --enable-async-io=48 --enable-storeio \
  --enable-removal-policies --disable-htcp --enable-ssl \
  --disable-ident-lookups --enable-external-acl-helpers --with-dl \
  --with-large-files --prefix=/usr --sysconfdir=/etc/squid \
  --bindir=/usr/sbin --sbindir=/usr/sbin --libexecdir=/usr/sbin \
  --datadir=/usr/share/squid --libdir=/usr/lib64 --localstatedir=/var \
  --enable-ecap --with-default-user=squid --enable-icap-client \
  --enable-referer-log --disable-wccp --disable-wccpv2 --disable-snmp \
  --enable-cachemgr-hostname --disable-eui --enable-delay-pools \
  --enable-useragent-log --enable-zph-qos --enable-linux-netfilter \
  --disable-translation --with-aufs-threads=32 --disable-strict-error-checking
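One knob worth checking for intermittent "proxy server is not responding" symptoms (a guess based on the description, not a confirmed diagnosis) is the file-descriptor ceiling, which can silently cap concurrent connections:

```
# squid.conf: raise the per-process descriptor limit; the OS
# limit ("ulimit -n") for the squid user must be at least this
# high too, or the directive has no effect.
max_filedescriptors 4096
```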




Re: [squid-users] Unusual behaviour when linking ACLs to delay pools

2010-08-19 Thread Richard Greaney
On Fri, Aug 20, 2010 at 11:04 AM, Richard Greaney rkgrea...@gmail.com wrote:
 I now have another problem, however, in that it appears you can't AND
 multiple ACLs to determine whether or not they can access a delay
 pool. Say for instance, I wanted to do:

 delay_access 1 allow throttled badfiles
 delay_access 1 deny all
Ignore the last message. I was being an idiot. There's no need for any
workarounds. The following acl works fine:

delay_access 1 allow badfiles throttled
delay_access 1 deny all
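For completeness, a fuller sketch of how the pieces fit together (pool class and rates are placeholders; the "ldap_group" external_acl_type is assumed to be defined elsewhere, as in the original config):

```
# One class-1 pool limiting matching traffic to ~64 KB/s aggregate.
delay_pools 1
delay_class 1 1
delay_parameters 1 65536/65536

acl badfiles urlpath_regex -i "/etc/squid/badfiles.txt"
acl throttled external ldap_group Internet-Throttled

# Fast ACL (badfiles) first, slow ACL (throttled) second, so the
# fast test can short-circuit before the group lookup is needed.
delay_access 1 allow badfiles throttled
delay_access 1 deny all
```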