Re: [squid-users] Squid with three isp

2011-02-24 Thread Senthilkumar
Thank you very much; with your suggestions we achieved it by adding the
necessary routes.


Thanks,
Senthil

Amos Jeffries wrote:

On Wed, 23 Feb 2011 18:06:18 +0530, Senthilkumar wrote:

Thank you.

When we use tcp_outgoing_address to select the ISP, all traffic
seems to go out through the default-gateway Ethernet interface of the
Squid machine, and only the reply comes back through the interface
assigned to the tcp_outgoing_address.

E.g. isp1 - eth1 (default gateway), isp2 - eth2, clients - eth0
In squid.conf:
acl isp1 src 172.16.1.48
acl isp2 src 172.16.1.56
tcp_outgoing_address <ip of isp1> isp1
tcp_outgoing_address <ip of isp2> isp2

When we browse from client 172.16.1.56 and tcpdump eth1, we can see
connections being established through eth1 which are supposed to go
through eth2. But the reply comes back through eth2, and
whatismyip.com shows the IP of isp2. This suggests the default gateway
(isp1) is used for sending all requests, so it is overloaded, while
isp2 is used only for receiving the replies.


What Squid is doing is setting the from/source IP on the packets.
There is only one channel between Squid and the operating system. It
is entirely up to the system routing where those packets go. It sounds
to me like your magic routing setup is not working, or is not looking
at the source IP of packets coming out of the Squid machine (the OUTPUT
firewall chain in iptables, if that is any help).


Amos
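As a rough sketch of the source-based routing this needs (all addresses,
gateways and table names below are placeholders, not values from this
thread):

# /etc/iproute2/rt_tables needs entries such as "101 isp1" and "102 isp2"
# isp1 uplink on eth1, isp2 uplink on eth2
ip route add default via 198.51.100.1 dev eth1 table isp1
ip route add default via 203.0.113.1  dev eth2 table isp2

# route by the source IP that Squid stamps on its packets,
# i.e. the same addresses used with tcp_outgoing_address
ip rule add from 198.51.100.10 lookup isp1
ip rule add from 203.0.113.10  lookup isp2
ip route flush cache

With rules like these in place, the request and the reply for a given
tcp_outgoing_address should both use the same uplink.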






Re: [squid-users] Squid with three isp

2011-02-23 Thread Senthilkumar

Amos Jeffries wrote:

On 23/02/11 19:50, Senthilkumar wrote:

Amos Jeffries wrote:

On Wed, 23 Feb 2011 10:41:20 +0530, Senthilkumar wrote:

Hello All,

We have a gateway machine which has three upstream ISPs, and to make
clients use a particular ISP we use advanced routing based on the
source address. When we run Squid on the same machine to log
websites, all traffic passes through a single ISP, i.e. the ISP which
is set as the default gateway. We need all users to pass through Squid
and use the different ISPs as per the source routing.
Please share your views on how to achieve this.


http://wiki.squid-cache.org/ConfigExamples/Strange/RotatingIPs

Amos


Thank you very much Amos,

We have clients in the 10.X.X, 172.16.1.X and 172.16.2.X series; we need
each client series to use a single ISP for upload and download.
Will the following configuration achieve it?

acl isp1 src 10.X.X
acl isp2 src 172.16.1.X
acl isp3 src 172.16.2.X

acl download method GET HEAD
acl upload method POST PUT

tcp_outgoing_address <isp1 ip> isp1 download upload
tcp_outgoing_address <isp2 ip> isp2 download upload
tcp_outgoing_address <isp3 ip> isp3 download upload


No, this won't work. Requests cannot have two methods (one up, one down)
simultaneously.


The example was written by an ISP with a satellite and a ground link,
where the direction of the majority of the data traffic matters a lot.
You do not need the method ACLs, only the src test.


Amos

Thank you very much.
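For what it's worth, the simplified form Amos suggests is just the src
ACLs plus tcp_outgoing_address. The prefixes and outgoing addresses below
are assumptions (placeholders for the real client ranges and ISP-facing
IPs):

# one src ACL per client range (placeholder prefixes)
acl isp1_clients src 10.0.0.0/8
acl isp2_clients src 172.16.1.0/24
acl isp3_clients src 172.16.2.0/24

# bind each range to the matching ISP-facing address (placeholder IPs)
tcp_outgoing_address 198.51.100.10 isp1_clients
tcp_outgoing_address 203.0.113.10  isp2_clients
tcp_outgoing_address 192.0.2.10    isp3_clients

Both uploads and downloads for a client then leave from that address, and
the system routing (see the ip rule sketch earlier) decides the uplink.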


Re: [squid-users] Squid with three isp

2011-02-23 Thread Senthilkumar

Thank you.

When we use tcp_outgoing_address to select the ISP, all traffic seems
to go out through the default-gateway Ethernet interface of the Squid
machine, and only the reply comes back through the interface assigned to
the tcp_outgoing_address.


E.g. isp1 - eth1 (default gateway), isp2 - eth2, clients - eth0
In squid.conf:
acl isp1 src 172.16.1.48
acl isp2 src 172.16.1.56
tcp_outgoing_address <ip of isp1> isp1
tcp_outgoing_address <ip of isp2> isp2

When we browse from client 172.16.1.56 and tcpdump eth1, we can see
connections being established through eth1 which are supposed to go through
eth2. But the reply comes back through eth2, and whatismyip.com shows the
IP of isp2. This suggests the default gateway (isp1) is used for sending
all requests, so it is overloaded, while isp2 is used only for receiving
the replies.


Thanks,
Senthil

Hasanen AL-Bana wrote:

Hi,
If you have different IP ranges on the same machine, then you need to
add 3 gateways using Linux ip route, then you can make use of
tcp_outgoing_address.
I have it documented here :
http://www.snono-systems.com/files/squid-multiwan.pdf

On Wed, Feb 23, 2011 at 9:50 AM, Senthilkumar
senthilkumaar2...@gmail.com wrote:
  

Amos Jeffries wrote:


On Wed, 23 Feb 2011 10:41:20 +0530, Senthilkumar wrote:
  

Hello All,

We have a gateway machine which has three upstream ISPs, and to make
clients use a particular ISP we use advanced routing based on the
source address. When we run Squid on the same machine to log
websites, all traffic passes through a single ISP, i.e. the ISP which
is set as the default gateway. We need all users to pass through Squid
and use the different ISPs as per the source routing.
Please share your views on how to achieve this.


http://wiki.squid-cache.org/ConfigExamples/Strange/RotatingIPs

Amos

  

Thank you very much Amos,

We have clients in the 10.X.X, 172.16.1.X and 172.16.2.X series; we need each
client series to use a single ISP for upload and download.
Will the following configuration achieve it?

acl isp1 src 10.X.X
acl isp2 src 172.16.1.X
acl isp3 src 172.16.2.X

acl download method GET HEAD
acl upload method POST PUT

tcp_outgoing_address <isp1 ip> isp1 download upload
tcp_outgoing_address <isp2 ip> isp2 download upload
tcp_outgoing_address <isp3 ip> isp3 download upload

By setting tcp_outgoing_address, will upload and download take place
through the source-routed ISP or through the default-gateway ISP?

Thanks
Senthil




  




[squid-users] Squid with three isp

2011-02-22 Thread Senthilkumar

Hello All,

We have a gateway machine which has three upstream ISPs, and to make
clients use a particular ISP we use advanced routing based on the
source address. When we run Squid on the same machine to log websites,
all traffic passes through a single ISP, i.e. the ISP which is set as
the default gateway. We need all users to pass through Squid and use
the different ISPs as per the source routing.

Please share your views on how to achieve this.

Thanks
Senthil.



Re: [squid-users] Squid with three isp

2011-02-22 Thread Senthilkumar

Amos Jeffries wrote:

On Wed, 23 Feb 2011 10:41:20 +0530, Senthilkumar wrote:

Hello All,

We have a gateway machine which has three upstream ISPs, and to make
clients use a particular ISP we use advanced routing based on the
source address. When we run Squid on the same machine to log
websites, all traffic passes through a single ISP, i.e. the ISP which
is set as the default gateway. We need all users to pass through Squid
and use the different ISPs as per the source routing.
Please share your views on how to achieve this.


http://wiki.squid-cache.org/ConfigExamples/Strange/RotatingIPs

Amos


Thank you very much Amos,

We have clients in the 10.X.X, 172.16.1.X and 172.16.2.X series; we need
each client series to use a single ISP for upload and download.

Will the following configuration achieve it?

acl isp1 src 10.X.X
acl isp2 src 172.16.1.X
acl isp3 src 172.16.2.X

acl download method GET HEAD
acl upload method POST PUT

tcp_outgoing_address <isp1 ip> isp1 download upload
tcp_outgoing_address <isp2 ip> isp2 download upload
tcp_outgoing_address <isp3 ip> isp3 download upload

By setting tcp_outgoing_address, will upload and download take place
through the source-routed ISP or through the default-gateway ISP?


Thanks
Senthil



Re: [squid-users] squid reduce Bandwidth?

2011-02-06 Thread Senthilkumar

Amos Jeffries wrote:

On 05/02/11 22:44, Senthilkumar wrote:

Hi,

Current network topology:
Internet <--> squid <--> bandwidthMonitor <--> clients

The squid is configured as a transparent proxy.
squid version: 2.7stable6

We are seeing an average upstream bandwidth of 5 mbps and a download
bandwidth of 25 mbps over a day on the bandwidth monitor when the
network traffic bypasses squid on that box (i.e., without transparent
redirection), whereas when the traffic goes via squid we are only seeing
4 mbps upstream and 20 mbps downstream. We have tested this over several
days and clearly see this difference.

Will having squid in the above topology reduce the bandwidth? The only
reason we can see is that most of the client requests will be HTTP/1.1 and
squid will only speak HTTP/1.0 on the server side.


Possibly, but requires special configuration for 2.7 to use HTTP/1.1.

I'm inclined to suspect non-HTTP traffic being pushed through port 80. 
Squid will break those connections before any bandwidth gets consumed 
by them.


Amos


Thank you Amos,

We have tried enabling server_http11 on in squid.conf, but no
luck. In the access log we can see a lot of TCP_MISS/000.
In iptables we have used the following rule: /sbin/iptables -t nat
-A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3128
Can you suggest how to check whether non-HTTP traffic on port 80 is being
pushed through squid, and how to let only genuine HTTP traffic pass through?


Thanks.
Senthil
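A rough way to look for non-HTTP traffic hiding on port 80, along the
lines Amos suspects. The log path, field positions and the example
destination below are assumptions (default native log format and
from-source install prefix):

# destinations that keep producing TCP_MISS/000 (connections squid gave up on)
grep 'TCP_MISS/000' /usr/local/squid/var/logs/access.log | \
  awk '{print $7}' | sort | uniq -c | sort -rn | head

# once a destination is confirmed as non-HTTP, bypass the redirect for it
# (192.0.2.50 is a placeholder address)
/sbin/iptables -t nat -I PREROUTING -p tcp -d 192.0.2.50 --dport 80 -j ACCEPT

The ACCEPT rule, inserted before the REDIRECT, lets that destination's
port-80 traffic go straight out instead of through squid.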



[squid-users] squid reduce Bandwidth?

2011-02-05 Thread Senthilkumar

Hi,

Current network topology:
Internet <--> squid <--> bandwidthMonitor <--> clients

The squid is configured as a transparent proxy.
squid version: 2.7stable6

We are seeing an average upstream bandwidth of 5 mbps and a download
bandwidth of 25 mbps over a day on the bandwidth monitor when the
network traffic bypasses squid on that box (i.e., without transparent
redirection), whereas when the traffic goes via squid we are only seeing
4 mbps upstream and 20 mbps downstream. We have tested this over several
days and clearly see this difference.


Will having squid in the above topology reduce the bandwidth? The only
reason we can see is that most of the client requests will be HTTP/1.1 and
squid will only speak HTTP/1.0 on the server side.

Any inputs on this will be greatly appreciated.

Thanks,
Senthil





Re: [squid-users] Authentication to Sharepoint not happening

2011-02-01 Thread Senthilkumar

Hi Amos,

We are also having trouble accessing a SharePoint site which uses NTLM
authentication.
We have configured squid with the NTLM scheme. Will the pipeline_prefetch
fix below work for a squid which is configured with NTLM proxy
authentication?



Thanks
Senthil

Amos Jeffries wrote:

On Tue, 1 Feb 2011 20:25:42 +0530, Saurabh Agarwal
saurabh.agar...@citrix.com wrote:
  

It works now! I followed the code and then turned off pipeline_prefetch.

In the code there was this check which was setting the no_connection_auth
flag to 1:

if (Config.onoff.pipeline_prefetch)
    request->flags.no_connection_auth = 1;

I don't understand it completely but I can move forward. Thank You Amos!



Aha, thank you! This has led me to find a bug.

Squid-3 checks and overrides pipelining when auth_param is set, to prevent
request race conditions. But I see that if pipeline_prefetch is manually
configured after the auth_param it will be turned back on. Fixing now.

Amos
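Based on Amos's note above, a minimal squid.conf sketch for an NTLM setup
(the helper line is taken from the earlier posts; the explicit off is the
point, not the exact values):

auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 30

# NTLM ties credentials to the TCP connection, so do not pipeline requests;
# state this explicitly rather than relying on the automatic override
pipeline_prefetch off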


  




Re: [squid-users] Connection error

2011-01-31 Thread Senthilkumar

Hi Amos,

Thanks for your response.

Can the pop-up that occurs occasionally be fixed by using the Kerberos
scheme instead of NTLM?


Thanks
Senthil

Amos Jeffries wrote:

On 31/01/11 18:44, Senthilkumar wrote:

Thank you .

We are using squid 3.1.8 with 100 children for the NTLM scheme. We have
about 500 users and around 75 req/sec.

Occasionally in the cache log we see 100 pending NTLM requests, and at that
time squid reconfigures automatically.
Is it the default behaviour of squid to reconfigure when NTLM requests are queued?



No, a reconfigure only happens when the administrator or some operating
system control runs squid -k reconfigure.


You may be seeing a crash and restart?



In the cache log we can see following errors also.

2011/01/31 10:59:02| AuthConfig::CreateAuthUser: Unsupported or
unconfigured/inactive proxy-auth scheme, 'Basic
bnByY1xzaHViaGFuZ2lkOmdhbGF4eUA1Nw=='
2011/01/31 10:59:18| AuthConfig::CreateAuthUser: Unsupported or
unconfigured/inactive proxy-auth scheme, 'Basic
bnByY1xzaHViaGFuZ2lkOmdhbGF4eUA1Nw=='


Normal message for a proxy without Basic auth configured when the
client sends Basic credentials to it.


Squid is supposed to pause requests during the configure time. So why 
this shows up is a problem that needs to be found.


Amos


Amos Jeffries wrote:

On Tue, 25 Jan 2011 19:25:33 +0530, Senthilkumar wrote:

Hi Amos,

I have followed the suggestions provided by you; if I use deny
without all I get a pop-up when I access denied sites, and it is
suppressed when I use all.
We use the NTLM scheme to authenticate domain users, and all users can
authenticate without any prompt. While browsing, out of 350 users only
5-6 get the prompt, and only rarely (around 2-3 times a day).
There is no specific website or time at which the prompt appears. Please
suggest some troubleshooting ideas and possible causes.
The cache.log does not show any errors.


I'm not sure exactly which deny line you are describing as producing a
popup. The config below looks right. Where you deny based on group
lookups the lines should end with all; as you saw, not having it there
produces the popup.


NTLM can suffer from a few issues on connections and some bugs in
Squid. Though both of these problems have been worked on and reduced in
newer releases.

If one of the allow group lookups is somehow failing this may produce
a popup.

I am not sure how one would check for these in a production environment.
The things to watch out for are the HTTP auth headers for the request
before, during and after the prompt appears. Whether this is happening on
a connection while it stays up, or if the connection drops out on the
challenge. Whether it happened on a new connection using some non-NTLM
auth (i.e. a Windows 7 machine trying an unexpected encryption, or some
background application with the wrong keys).

Amos










Re: [squid-users] Connection error

2011-01-30 Thread Senthilkumar

Thank you .

We are using squid 3.1.8 with 100 children for the NTLM scheme. We have
about 500 users and around 75 req/sec.


Occasionally in the cache log we see 100 pending NTLM requests, and at that
time squid reconfigures automatically.

Is it the default behaviour of squid to reconfigure when NTLM requests are queued?

In the cache log we can see following errors also.

2011/01/31 10:59:02| AuthConfig::CreateAuthUser: Unsupported or 
unconfigured/inactive proxy-auth scheme, 'Basic 
bnByY1xzaHViaGFuZ2lkOmdhbGF4eUA1Nw=='
2011/01/31 10:59:18| AuthConfig::CreateAuthUser: Unsupported or 
unconfigured/inactive proxy-auth scheme, 'Basic 
bnByY1xzaHViaGFuZ2lkOmdhbGF4eUA1Nw=='


Thanks
Senthil

Amos Jeffries wrote:

On Tue, 25 Jan 2011 19:25:33 +0530, Senthilkumar wrote:
  

Hi Amos,

I have followed the suggestions provided by you; if I use deny without
all I get a pop-up when I access denied sites, and it is suppressed
when I use all.
We use the NTLM scheme to authenticate domain users, and all users can
authenticate without any prompt. While browsing, out of 350 users only
5-6 get the prompt, and only rarely (around 2-3 times a day).
There is no specific website or time at which the prompt appears. Please
suggest some troubleshooting ideas and possible causes.

The cache.log does not show any errors.



I'm not sure exactly which deny line you are describing as producing a
popup. The config below looks right. Where you deny based on group lookups
the lines should end with all; as you saw, not having it there produces
the popup.


NTLM can suffer from a few issues on connections and some bugs in Squid.
Though both of these problems have been worked on and reduced in newer
releases.

If one of the allow group lookups is somehow failing this may produce a
popup.

I am not sure how one would check for these in a production environment. The
things to watch out for are the HTTP auth headers for the request before,
during and after the prompt appears. Whether this is happening on a
connection while it stays up, or if the connection drops out on the
challenge. Whether it happened on a new connection using some non-NTLM auth
(i.e. a Windows 7 machine trying an unexpected encryption, or some background
application with the wrong keys).
application with the wrong keys).

Amos
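One crude way to watch those auth headers as they go by, assuming the
proxy listens on port 3128 on eth0 (both are assumptions here):

# dump proxy traffic and pick out the NTLM handshake lines
tcpdump -s 0 -A -i eth0 'tcp port 3128' | grep -i -E 'Proxy-Auth| 407 '

Seeing which side stops sending Proxy-Authorization, or where the 407
challenges pile up, helps narrow down whether the connection or the helper
is at fault.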

  




Re: [squid-users] Connection error

2011-01-25 Thread Senthilkumar

Hi Amos,

I have followed the suggestions provided by you; if I use deny without
all I get a pop-up when I access denied sites, and it is suppressed
when I use all.
We use the NTLM scheme to authenticate domain users, and all users can
authenticate without any prompt. While browsing, out of 350 users only
5-6 get the prompt, and only rarely (around 2-3 times a day).
There is no specific website or time at which the prompt appears. Please
suggest some troubleshooting ideas and possible causes.

The cache.log does not show any errors.

Here is my updated conf

#Acl for checking group
acl hods1 external groupauth hods
acl itdept1 external groupauth itdept
acl hr1  external groupauth hr
acl srgp1 external groupauth srgroupleader
acl gl1 external groupauth groupleader
acl nm1  external groupauth normaluser
acl mancom1  external groupauth mancomm

#Acl to allow and block websites

acl hods2 url_regex -i /usr/local/squid31/policy/allowhod

acl hods3 dstdomain -i /usr/local/squid31/policy/allowhod
acl hods4 dstdomain -i /usr/local/squid31/policy/denyhodgroup


acl itdept2 url_regex -i /usr/local/squid31/policy/allowitdept
acl itdept3 dstdomain -i /usr/local/squid31/policy/allowitdept


acl gl2 url_regex -i /usr/local/squid31/policy/allowgroupleader
acl gl3 url_regex -i /usr/local/squid31/policy/denygroupleader

acl srgp2 url_regex -i /usr/local/squid31/policy/allowsrgroupleader
acl srgp3 dstdomain -i /usr/local/squid31/policy/allowsrgroupleader

acl nm2 url_regex -i /usr/local/squid31/policy/allownormaluser
acl nm3 url_regex -i /usr/local/squid31/policy/denynormaluser
acl nm4 dstdomain -i /usr/local/squid31/policy/denynormaluser
###
acl mancom2 url_regex -i /usr/local/squid31/policy/allowmancommon
#acl global url_regex -i /usr/local/squid31/policy/allowglobal
acl global dstdomain -i /usr/local/squid31/policy/allowglobal
###
acl noblock src /usr/local/squid31/policy/allowdirect

#Http_access

http_access allow manager localhost
http_access deny manager

#Authentication
acl AuthorizedUsers proxy_auth REQUIRED

#Some options uncommented to reduce delay

# noblock contains websites that do not need authentication
http_access allow noblock
# global contains IPs that do not need authentication

http_access allow global
http_access deny !AuthorizedUsers
http_access allow hods3 hods1
http_access allow itdept3 itdept1
http_access allow gl2 gl1
http_access deny  gl3 gl1 all
http_access allow srgp3 srgp1
http_access allow nm2 nm1
http_access deny  nm4 nm1 all
http_access allow all

Thanks
Senthil


Senthilkumar wrote:

Amos, thank you very much
Amos Jeffries wrote:

On 18/01/11 16:36, Senthilkumar wrote:

Hi ,

I have increased the NTLM scheme children, and even though I was getting
the error messages in cache.log,
"All ntlmauthenticator processes are busy" and "wbinfo.pl is busy",
they have stopped now.

I have attached my squid.conf; please check it and share your views if
anything is done wrong.


#Authentication
auth_param ntlm program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 100
auth_param basic credentialsttl 8 hours


Notice how the credentialsttl directive there applies to the Basic auth
protocol.


NTLM credentials are held in the TCP connection details themselves,
and stay in place until the TCP link they apply to is closed. There
is no relevant TTL.




authenticate_ttl 4 hours

auth_param basic program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-basic
auth_param basic children 10
auth_param basic realm PrimalHealth care services
auth_param basic credentialsttl 8 hours

#group Authentication
external_acl_type groupauth children=50 %LOGIN
/usr/local/squid31/libexec/wbinfo_group.pl

#Acl for checking group
acl senior1 external groupauth senior
acl dept1 external groupauth dept
acl human1 external groupauth human
acl srgp1 external groupauth group
acl gl1 external groupauth leader
acl nm1 external groupauth normal
acl mancom1 external groupauth man

#Acl to allow and block websites

acl senior2 url_regex -i /usr/local/squid31/policy/allow.txt

acl senior3 dstdomain -i /usr/local/squid31/policy/allow1.txt
acl senior4 dstdomain -i /usr/local/squid31/policy/allow3.txt


acl dept2 url_regex -i /usr/local/squid31/policy/allow4.txt
acl dept3 dstdomain -i /usr/local/squid31/policy/allow5.txt


acl gl2 url_regex -i /usr/local/squid31/policy/allowleader
acl gl3 url_regex -i /usr/local/squid31/policy/denyleader

acl srgp2 url_regex -i /usr/local/squid31/policy/allow6
acl srgp3 dstdomain -i /usr/local/squid31/policy/allow7

acl nm2 url_regex -i /usr/local/squid31/policy/allow8
acl nm3 url_regex -i /usr/local/squid31/policy/deny9
acl nm4 dstdomain -i /usr/local/squid31/policy/deny9
###
acl mancom2 url_regex -i /usr/local/squid31/policy/allowgl2
acl global url_regex -i /usr/local/squid31/policy/allowgl1
###
acl noblock src /usr/local/squid31/policy/allowdirect

#Http_access

http_access allow manager localhost
http_access deny manager

http_access allow noblock
http_access allow global

Re: [squid-users] Connection error

2011-01-18 Thread Senthilkumar

Amos, thank you very much
Amos Jeffries wrote:

On 18/01/11 16:36, Senthilkumar wrote:

Hi ,

I have increased the NTLM scheme children, and even though I was getting
the error messages in cache.log,
"All ntlmauthenticator processes are busy" and "wbinfo.pl is busy",
they have stopped now.

I have attached my squid.conf; please check it and share your views if
anything is done wrong.


#Authentication
auth_param ntlm program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 100
auth_param basic credentialsttl 8 hours


Notice how the credentialsttl directive there applies to the Basic auth
protocol.


NTLM credentials are held in the TCP connection details themselves,
and stay in place until the TCP link they apply to is closed. There
is no relevant TTL.




authenticate_ttl 4 hours

auth_param basic program /usr/bin/ntlm_auth
--helper-protocol=squid-2.5-basic
auth_param basic children 10
auth_param basic realm PrimalHealth care services
auth_param basic credentialsttl 8 hours

#group Authentication
external_acl_type groupauth children=50 %LOGIN
/usr/local/squid31/libexec/wbinfo_group.pl

#Acl for checking group
acl senior1 external groupauth senior
acl dept1 external groupauth dept
acl human1 external groupauth human
acl srgp1 external groupauth group
acl gl1 external groupauth leader
acl nm1 external groupauth normal
acl mancom1 external groupauth man

#Acl to allow and block websites

acl senior2 url_regex -i /usr/local/squid31/policy/allow.txt

acl senior3 dstdomain -i /usr/local/squid31/policy/allow1.txt
acl senior4 dstdomain -i /usr/local/squid31/policy/allow3.txt


acl dept2 url_regex -i /usr/local/squid31/policy/allow4.txt
acl dept3 dstdomain -i /usr/local/squid31/policy/allow5.txt


acl gl2 url_regex -i /usr/local/squid31/policy/allowleader
acl gl3 url_regex -i /usr/local/squid31/policy/denyleader

acl srgp2 url_regex -i /usr/local/squid31/policy/allow6
acl srgp3 dstdomain -i /usr/local/squid31/policy/allow7

acl nm2 url_regex -i /usr/local/squid31/policy/allow8
acl nm3 url_regex -i /usr/local/squid31/policy/deny9
acl nm4 dstdomain -i /usr/local/squid31/policy/deny9
###
acl mancom2 url_regex -i /usr/local/squid31/policy/allowgl2
acl global url_regex -i /usr/local/squid31/policy/allowgl1
###
acl noblock src /usr/local/squid31/policy/allowdirect

#Http_access

http_access allow manager localhost
http_access deny manager

http_access allow noblock
http_access allow global


The rest of your rules look like classic mistakes people are always
making. Sorry if this gets too close to a lecture. I'm taking the
opportunity to demonstrate some simple but effective optimizations for
all the list readers.


Before we start:
 the rules as posted require a total of 5 helper lookups *minimum* for
each and every request, with up to a total of 14 helper lookups for
some users' single request.




First thing to notice. Almost all of the following ACL security tests 
rely on auth having been performed for at least one of their criteria.


We could take advantage of this and force auth to be used by all 
visitors.


  acl authed proxy_auth REQUIRED
  http_access deny !authed

At this point we don't care who they are or whether they are allowed 
to use the proxy. Only that they have credentials and the credentials 
are valid.
  The allow all final rule prevents this, since people with invalid
credentials may get there and be permitted.



Now to the first actual permission rule:


http_access allow senior1 senior3


 Start by looking at the types of these ACL.
 * senior3 is a dstdomain, one of the fastest ACL available.
 * senior1 is an external ACL using auth details. This requires not 
one but maybe two stop-and-wait actions while both auth and 
external-acl helpers produce results.


Ordering these ACLs so the fastest one is run first will prevent the
long waits and extra work from being done when the fast test does not match.


The fast result:
 http_access allow senior3 senior1



The next permission rule:


http_access deny senior1 senior4 all


This rule suffers from the same problem as the earlier one.
 * senior4 is again a fast dstdomain
 * senior1 being a slow helper lookup

One extra thing to notice is the all at the end.

There is only one reason for using all in this way: to suppress
Squid requesting auth credentials when none have been given
or the ones given are invalid.
 Specifically, it ONLY works when the last ACL on the
line right before the all is an ACL which challenges for auth,
namely one of type proxy_auth or one of type external where the
external-acl helper uses %LOGIN.


As the rule is currently written we have all suppressing a dstdomain
from challenging for auth. Since dstdomain does not challenge, the
all is completely useless.


After re-ordering the ACLs for speed and reduced auth workload this
state changes. The all *might* be useful there if the user had not
provided any auth credentials at all.
  Note this is one reason why we suggest
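Pulling Amos's points together, a hedged sketch of how the posted rules
might be reordered (ACL names are from the earlier config; this is an
illustration of the ordering idea, not a complete policy):

# require valid credentials up front
acl authed proxy_auth REQUIRED
http_access deny !authed

# fast dstdomain/url_regex tests first, slow external group lookups last;
# "all" only suppresses the auth challenge when it follows a challenging ACL
http_access allow senior3 senior1
http_access deny  senior4 senior1 all
http_access allow dept3 dept1
http_access allow gl2 gl1
http_access deny  gl3 gl1 all
http_access allow srgp3 srgp1
http_access deny  nm4 nm1 all
http_access allow nm2 nm1

# a final deny (instead of "allow all") keeps the early auth requirement meaningful
http_access deny all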

Re: [squid-users] Connection error

2011-01-17 Thread Senthilkumar

Hi ,

I have increased the NTLM scheme children, and even though I was getting
the error messages in cache.log,
"All ntlmauthenticator processes are busy" and "wbinfo.pl is busy",
they have stopped now.


I have attached my squid.conf; please check it and share your views if
anything is done wrong.



#Authentication
auth_param ntlm program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-ntlmssp

auth_param ntlm children 100
auth_param basic credentialsttl 8 hours
authenticate_ttl 4 hours

auth_param basic program /usr/bin/ntlm_auth 
--helper-protocol=squid-2.5-basic

auth_param basic children 10
auth_param basic realm PrimalHealth care services
auth_param basic credentialsttl 8 hours

#group Authentication
external_acl_type groupauth children=50  %LOGIN 
/usr/local/squid31/libexec/wbinfo_group.pl


#Acl for checking group
acl senior1 external groupauth senior
acl dept1 external groupauth dept
acl human1  external groupauth human
acl srgp1 external groupauth group
acl gl1 external groupauth leader
acl nm1  external groupauth normal
acl mancom1  external groupauth man

#Acl to allow and block websites

acl senior2 url_regex -i /usr/local/squid31/policy/allow.txt

acl senior3 dstdomain -i /usr/local/squid31/policy/allow1.txt
acl senior4 dstdomain -i /usr/local/squid31/policy/allow3.txt


acl dept2 url_regex -i /usr/local/squid31/policy/allow4.txt
acl dept3 dstdomain -i /usr/local/squid31/policy/allow5.txt


acl gl2 url_regex -i /usr/local/squid31/policy/allowleader
acl gl3 url_regex -i /usr/local/squid31/policy/denyleader

acl srgp2 url_regex -i /usr/local/squid31/policy/allow6
acl srgp3 dstdomain -i /usr/local/squid31/policy/allow7

acl nm2 url_regex -i /usr/local/squid31/policy/allow8
acl nm3 url_regex -i /usr/local/squid31/policy/deny9
acl nm4 dstdomain -i /usr/local/squid31/policy/deny9
###
acl mancom2 url_regex -i /usr/local/squid31/policy/allowgl2
acl global url_regex -i /usr/local/squid31/policy/allowgl1
###
acl noblock src /usr/local/squid31/policy/allowdirect

#Http_access

http_access allow manager localhost
http_access deny manager

http_access allow noblock
http_access allow global
http_access allow senior1 senior3
http_access deny  senior1 senior4 all
http_access allow dept1 dept3
http_access allow gl1 gl2
http_access deny  gl1 gl3 all
http_access allow srgp1 srgp3
http_access deny  nm1 nm4 all
http_access allow nm1 nm2
http_access allow all

# squidGuard
url_rewrite_program /usr/local/squidGuard/bin/squidGuard  -c 
/usr/local/squidGuard/squidGuard.conf

url_rewrite_children 50
url_rewrite_access allow !noblock !senior2 !dept2 !gl2 !gl3 !srgp2 !nm2 
!nm3 !mancom2 !global

url_rewrite_access deny all


Thanks
Senthil

Amos Jeffries wrote:

On 15/01/11 07:35, Senthilkumar wrote:

Hi All,

I am using Squid Cache: Version 3.1.8, configured NTLM scheme using
samba, CLAM Av + ICAP and Squid guard.
All of the clients are Windows machines joined to the domain. The browsers
authenticate using the NTLM scheme without a password pop-up and
everything works fine.

We have two issues:
1. We are using many ACLs to allow and deny websites on the basis of the
ADS groups using wbinfo.pl. From time to time users report that
the authentication pop-up occurs.
In cache.log we can find the following:

2011/01/14 12:27:50| WARNING: All ntlmauthenticator processes are busy.
2011/01/14 12:27:50| WARNING: 25 pending requests queued
2011/01/14 12:56:48| WARNING: All ntlmauthenticator processes are busy.
2011/01/14 12:56:48| WARNING: 25 pending requests queued
2011/01/14 12:57:36| WARNING: All ntlmauthenticator processes are busy.
2011/01/14 12:57:36| WARNING: 25 pending requests queued
2011/01/14 14:00:03| WARNING: All ntlmauthenticator processes are busy.
2011/01/14 14:00:03| WARNING: 25 pending requests queued
2011/01/14 14:00:06| WARNING: Closing open FD 229
2011/01/14 14:01:09| WARNING: All ntlmauthenticator processes are busy.

We just increased it to 30 for NTLM and 30 for wbinfo (external), but it
still occurs. Does the NTLM scheme have any new behaviour?



Also, wbinfo has a maximum capacity limit of only ~256 lookups, shared
across all helpers AFAIK. When this limit is exceeded the lookups get
queued. When the queue fills, clients are rejected.



2. When we browse a website and leave the browser idle for 30-60 minutes,
a cannot display page error occurs.


strange.


In squid.conf we have used the following values:
half_closed_clients off
client_persistent_connections off
server_persistent_connections off
Does squid have this as default behaviour? Please suggest suitable options
in squid.conf to overcome it.


Eek!

Firstly, the NTLM scheme authenticates a TCP connection, *not* a user.

Secondly, the NTLM scheme requires *three* full HTTP requests to be
performed to authenticate and fetch an object.


So... without persistent connections your Squid and its client 
browsers are consuming up to 3x the amount of traffic (and bandwidth) 
they normally would be.



Amos
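Since NTLM is connection-oriented, a minimal squid.conf sketch of the
persistence side of Amos's advice (reversing the off values quoted above;
the keep_alive line is my assumption for this helper setup):

# NTLM authenticates the TCP connection, so the connection must persist
client_persistent_connections on
server_persistent_connections on

# keep the client connection open across the NTLM handshake steps
auth_param ntlm keep_alive on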




Re: [squid-users] Connection error

2011-01-16 Thread Senthilkumar

Thank you Amos, this helps us lot.
Amos Jeffries wrote:

On 15/01/11 07:35, Senthilkumar wrote:

Hi All,

I am using Squid Cache: Version 3.1.8, configured NTLM scheme using
samba, CLAM Av + ICAP and Squid guard.
All of the clients are Windows machines joined to the domain. The browsers
authenticate using the NTLM scheme without a password pop-up and
everything works fine.

We have two issues:
1. We are using many ACLs to allow and deny websites on the basis of the
ADS groups using wbinfo.pl. From time to time users report that
the authentication pop-up occurs.
In cache.log we can find the following:

2011/01/14 12:27:50| WARNING: All ntlmauthenticator processes are busy.
2011/01/14 12:27:50| WARNING: 25 pending requests queued
2011/01/14 12:56:48| WARNING: All ntlmauthenticator processes are busy.
2011/01/14 12:56:48| WARNING: 25 pending requests queued
2011/01/14 12:57:36| WARNING: All ntlmauthenticator processes are busy.
2011/01/14 12:57:36| WARNING: 25 pending requests queued
2011/01/14 14:00:03| WARNING: All ntlmauthenticator processes are busy.
2011/01/14 14:00:03| WARNING: 25 pending requests queued
2011/01/14 14:00:06| WARNING: Closing open FD 229
2011/01/14 14:01:09| WARNING: All ntlmauthenticator processes are busy.

We just increased it to 30 for NTLM and 30 for wbinfo (external), but it
still occurs. Does the NTLM scheme have any new behaviour?



Also, wbinfo has a maximum capacity limit of only ~256 lookups, shared
across all helpers AFAIK. When this limit is exceeded the lookups get
queued. When the queue fills, clients are rejected.



2. When we browse a website and leave the browser idle for 30-60 minutes,
a cannot display page error occurs.


strange.


In squid.conf we have used the following values:
half_closed_clients off
client_persistent_connections off
server_persistent_connections off
Does squid have this as default behaviour? Please suggest suitable options
in squid.conf to overcome it.


Eek!

Firstly, the NTLM scheme authenticates a TCP connection, *not* a user.

Secondly, the NTLM scheme requires *three* full HTTP requests to be
performed to authenticate and fetch an object.


So... without persistent connections your Squid and its client 
browsers are consuming up to 3x the amount of traffic (and bandwidth) 
they normally would be.



Amos




[squid-users] Connection error

2011-01-14 Thread Senthilkumar

Hi All,

I am using Squid Cache: Version 3.1.8, configured NTLM scheme using 
samba, CLAM Av + ICAP and Squid guard.
All of the clients are Windows machines joined to the domain. The browsers
authenticate using the NTLM scheme without a password pop-up and
everything works fine.


We have two issues:
1. We are using many ACLs to allow and deny websites on the basis of the
ADS groups using wbinfo.pl. From time to time users report that
the authentication pop-up occurs.

In cache.log we can find the following:

2011/01/14 12:27:50| WARNING: All ntlmauthenticator processes are busy.
2011/01/14 12:27:50| WARNING: 25 pending requests queued
2011/01/14 12:56:48| WARNING: All ntlmauthenticator processes are busy.
2011/01/14 12:56:48| WARNING: 25 pending requests queued
2011/01/14 12:57:36| WARNING: All ntlmauthenticator processes are busy.
2011/01/14 12:57:36| WARNING: 25 pending requests queued
2011/01/14 14:00:03| WARNING: All ntlmauthenticator processes are busy.
2011/01/14 14:00:03| WARNING: 25 pending requests queued
2011/01/14 14:00:06| WARNING: Closing open FD  229
2011/01/14 14:01:09| WARNING: All ntlmauthenticator processes are busy.

We just increased it to 30 for NTLM and 30 for wbinfo (external), but it
still occurs. Does the NTLM scheme have any new behaviour?


2. When we browse a website and leave the browser idle for 30-60 minutes,
a cannot display page error occurs.

In squid.conf we have used the following values:
half_closed_clients off
client_persistent_connections off
server_persistent_connections off
Does squid have this as default behaviour? Please suggest suitable options
in squid.conf to overcome it.


Thanks
Senthil








Re: [squid-users] TCP_MISS/401 when accessing portal on IIS

2010-12-27 Thread Senthilkumar

Thank you. I will check it.
Amos Jeffries wrote:

On 27/12/10 20:25, Senthilkumar wrote:

Hi All,

I am using squid 3.1.8 with ICAP and the NTLM scheme. Everything works fine.
I have a problem when I connect to a portal which runs on an IIS
server and has external authentication (I am not sure about the
authentication used by the IIS server; we hope it is NTLM).


Being IIS it probably is, check the WWW-Auth* header in the replies to 
be sure. There may be alternative options you can make the browser use 
by stripping the broken one out.



When I connect to that site it asks for a username and password, and it
continues to ask even after the username and password are provided; finally
the error page cannot be displayed is shown.
In the access log I can find TCP_DENIED/401.
I have tried enabling client_persistent_connections on, but no luck.
How can we make the portal work with squid?


NTLM does not work very well with HTTP to start with and does not work 
at all over the Internet without a lot of trouble.


If it is NTLM:
  You must enable persistent connections to both servers and clients, 
enable authentication pass-thru hacks in Squid. Then cross your 
fingers and hope that every other proxy admin does the same. Complain 
to the website admin as well.


The other less-likely possibility is that it is Kerberos auth (almost 
as bad as NTLM over the 'Net), or some other breakage.


Amos




[squid-users] TCP_MISS/401 when accessing portal on IIS

2010-12-26 Thread Senthilkumar

Hi All,

I am using squid 3.1.8 with ICAP and the NTLM scheme. Everything works fine.
I have a problem when I connect to a portal which runs on an IIS
server and has external authentication (I am not sure about the
authentication used by the IIS server; we hope it is NTLM).
When I connect to that site it asks for a username and password, and it
continues to ask even after the username and password are provided; finally
the error page cannot be displayed is shown.

In the access log I can find TCP_DENIED/401.
I have tried enabling client_persistent_connections on, but no luck.
How can we make the portal work with squid?

Thanks
Senthil






[squid-users] Icap-X_authenticated-Group

2010-11-21 Thread Senthilkumar

Hi All,

Squid 3.1.8 with ICAP support is installed and it communicates with
the ICAP server. We are also using ntlmssp authentication with
groups. Everything works fine.


From the ICAP client we are able to pass the authenticated user by enabling
icap_client_username_header X-Authenticated-User in squid.conf.

But when we use icap_client_username_header X-Authenticated-Group in
squid.conf,
the value sent in X-Authenticated-Group is still the authenticated user
name instead of the group name.


We need to pass both the user name and the group name from the ICAP client.

How can this be achieved in squid 3.1.8?


Thanks
Senthil.


[squid-users] smb_auth

2010-11-11 Thread Senthilkumar

Hi Team,

I am using squid 2.7 stable 7 and it is authenticating against Active
Directory using smb_auth.

The ADS has two organisational units, sales and workers.
I need to allow unfiltered access to sales and filtered access to workers.
Is this possible with smb_auth, and how can it be done?

Thanks
Senthil





[squid-users] Cache_peer directive

2010-11-01 Thread Senthilkumar

Hi All,

I am using squid 2.7 stable 7 with WCCP. In squid.conf I have used 9
cache_peer directives as parents and made all requests pass through them
with never_direct allow all. I am using round-robin in cache_peer to
load-balance requests. Now the problem is that browsing seems to be very
slow; the cache_peer machines are on the local network only. Does using
many cache_peers make connections slow? I only want the cache_peers to be
parents, not siblings. Is there any way to optimize the cache_peer
directives (like enabling persistence) and make connections faster?


Thanks
Senthil
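For reference, the setup described would look roughly like this in
squid.conf (peer addresses and ports are placeholders; one line per
parent, nine in total):

# local parent caches, load-balanced round-robin, no ICP
cache_peer 192.168.1.11 parent 3128 0 no-query round-robin
cache_peer 192.168.1.12 parent 3128 0 no-query round-robin
# ...one cache_peer line for each of the remaining parents...

# send every request via a parent, never directly to the origin
never_direct allow all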