Re: [squid-users] Squid stops handling requests after 30-35 requests

2013-11-20 Thread Amos Jeffries
On 20/11/2013 7:59 p.m., Bhagwat Yadav wrote:
 Hi,
 
 While doing some testing I am facing the issue that squid, after some
 time, is not able to process requests, and the client reports:
 
 Connecting to www.attblueroom.com|216.77.188.73|:80... failed:
 Connection refused.
 
 Please provide some help. How can I debug the issue? What is likely
 happening in such cases?

Start with cache.log. Does it mention anything happening right before or
at the time the problem appears?

If there is no indication there try increasing the debug level:
 debug_options ALL,4


 
 Also this issue is not continuously reproducible.

Is there any sign in access.log of something in the traffic when it happens?


Amos


Re: [squid-users] intercepting SSL connections with client certificate

2013-11-20 Thread Amos Jeffries
On 20/11/2013 8:02 p.m., Shinoj Gangadharan wrote:
 1. sslbump is not passing on the client cert - I think this will be
 fixed with SSLPeekandSplice feature
 (http://wiki.squid-cache.org/Features/SslPeekAndSplice)

 I do not think this can be fixed. IIRC, Squid cannot forward the client
 certificate to the server on a bumped connection: during the SSL
 handshake, the client certificate is sent along with a digest of the
 SSL messages seen by the client so far. That digest is encrypted with
 the client private key. Squid cannot create that digest because it does
 not have access to the client private key, and the client digest will
 not match the server's view of the communication. This is one of the
 defense layers against man-in-the-middle attacks.

 Just like Squid cannot forward the server certificate to the client,
 Squid cannot forward the client certificate to the server. If a
 connection is bumped, both certificates can only be faked, not
 forwarded as is.

 Squid does not support faking client certificates.

 
 It would be great if we had an option to specify a client cert and key
 for a specific IP/domain, as in cache_peer - I know this is going to be
 complicated.
 

 2. Plain old cache_peer is not working with SSL due to this bug (this
 is my guess): There is a bug in Squid where it can not forward CONNECT
 requests properly to ssl enabled peers. By Henrik from:
 http://squid-web-proxy-cache.1019090.n4.nabble.com/Transparent-SSL-Interception-td4582940.html

 I am not sure exactly which problem you are referring to, but TCP
 tunnels to SSL peers are unofficially supported in
 https://code.launchpad.net/~measurement-factory/squid/connect2ssl

 
 Is it possible to use a parent proxy with SSL Bump? The following
 config does not forward requests to the parent proxy; it always
 connects directly:
 
 acl wc dstdomain mydomain.com
 
 cache_peer testp.parentproxy.com parent 443 0 originserver no-query
 proxy-only ssl sslflags=DONT_VERIFY_PEER name=wimi
 cache_peer_access wimi allow all
 
 never_direct allow wc
 
 always_direct allow all
 

always_direct overrides never_direct and both of those override cache_peer_*

Try this:
 always_direct allow !wc
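
Combined with the ACLs from the original post, the corrected stanza would read (a sketch; only the last line changes from the posted config):

```
acl wc dstdomain mydomain.com

cache_peer testp.parentproxy.com parent 443 0 originserver no-query proxy-only ssl sslflags=DONT_VERIFY_PEER name=wimi
cache_peer_access wimi allow all

never_direct allow wc
always_direct allow !wc
```

With always_direct now only matching non-wc traffic, the never_direct rule for wc can take effect and push those requests to the peer.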

Amos


RE: [squid-users] intercepting SSL connections with client certificate

2013-11-20 Thread Shinoj Gangadharan
 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Wednesday, November 20, 2013 1:59 PM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] intercepting SSL connections with client
certificate

 On 20/11/2013 8:02 p.m., Shinoj Gangadharan wrote:
  1. sslbump is not passing on the client cert - I think this will be
  fixed with SSLPeekandSplice feature
  (http://wiki.squid-cache.org/Features/SslPeekAndSplice)
 
  I do not think this can be fixed. IIRC, Squid cannot forward the
  client certificate to the server on a bumped connection: during the
  SSL handshake, the client certificate is sent along with a digest of
  the SSL messages seen by the client so far. That digest is encrypted
  with the client private key. Squid cannot create that digest because
  it does not have access to the client private key, and the client
  digest will not match the server's view of the communication. This is
  one of the defense layers against man-in-the-middle attacks.
 
  Just like Squid cannot forward the server certificate to the client,
  Squid cannot forward the client certificate to the server. If a
  connection is bumped, both certificates can only be faked, not
  forwarded as is.
 
  Squid does not support faking client certificates.
 
 
  It would be great if we had an option to specify a client cert and
  key for a specific IP/domain, as in cache_peer - I know this is going
  to be complicated.
 
 
  2. Plain old cache_peer is not working with SSL due to this bug (this
  is my guess): There is a bug in Squid where it can not forward
  CONNECT requests properly to ssl enabled peers. By Henrik from:
  http://squid-web-proxy-cache.1019090.n4.nabble.com/Transparent-SSL-Interception-td4582940.html
 
  I am not sure exactly which problem you are referring to, but TCP
  tunnels to SSL peers are unofficially supported in
  https://code.launchpad.net/~measurement-factory/squid/connect2ssl
 
 
  Is it possible to use a parent proxy with SSL Bump? The following
  config does not forward requests to the parent proxy; it always
  connects directly:
 
  acl wc dstdomain mydomain.com
 
  cache_peer testp.parentproxy.com parent 443 0 originserver no-query proxy-only ssl sslflags=DONT_VERIFY_PEER name=wimi
  cache_peer_access wimi allow all
 
  never_direct allow wc
 
  always_direct allow all
 

 always_direct overrides never_direct and both of those override
 cache_peer_*

 Try this:
  always_direct allow !wc

 Amos

With

always_direct allow !wc

I get this error:

Unable to forward this request at this time.

This request could not be forwarded to the origin server or to any parent
caches.

Regards,
Shinoj.


Re: [squid-users] squid 3.4.0.2 + smp + rock storage error

2013-11-20 Thread Alexandre Chappaz
Hi,

I have the same kind of error, but what bugs me is that I cannot
reproduce it systematically. I am really wondering if this is a
permission problem on the shm mount point and/or /var/run/squid
permissions:

Sometimes the service starts normally (the worker kids stay up) and
sometimes some or all of the worker kids die with this error:

FATAL: Ipc::Mem::Segment::open failed to
shm_open(/squid-cache_mem.shm): (2) No such file or directory.



Attached is the cache.log, and below is the squid.conf.

Best regards


# for debugging (do not set higher than 2)
#debug_options ALL,2

# Users
cache_effective_user nobody
cache_effective_group nobody


# Format access.log
strip_query_terms off
#logformat Squid  %ts.%03tu %6tr %a %Ss/%Hs %st %rm %ru %un %Sh/%A %mt
logformat PAS-Bdx %ts.%03tu %6tr %a %Ss/%Hs %st %rm %ru %un %Sh/%A %mt %rv %tl %{Referer}h %{User-Agent}h

# paths
coredump_dir /var/cache/squid
pid_filename /var/run/squid/squid.pid
access_log stdio:/var/log/squid/access.log PAS-Bdx
cache_log /var/log/squid/cache.log
cache_store_log none
mime_table /etc/squid/mime.conf
error_directory /etc/squid/errors
error_default_language fr
err_page_stylesheet /etc/squid/errorpage.css

# hosts file
hosts_file /etc/hosts

# SNMP
acl snmpcommunity snmp_community read_only_user
snmp_access allow snmpcommunity
snmp_port 3401

###
# PROXY OPERATION #
###

#SMP
workers 4

# Listening ports
http_port 3128

# localhost may access the cache manager
http_access allow localhost manager
http_access deny manager

# localhost may purge the cache
acl PURGE method PURGE
http_access allow PURGE localhost
http_access deny PURGE

# intranet requests are denied
acl ip_intranet dst 10.0.0.0/8
http_access deny ip_intranet


acl PLSU_SIE_USERAGENT browser PLSU_SIE
acl PLSU_SIE_DEST dstdomain /etc/squid/acl/dest/PLSU_SIE.dst

http_access allow PLSU_SIE_USERAGENT PLSU_SIE_DEST
http_access deny PLSU_SIE_USERAGENT

# definition of the parent squid VIP
#cache_peer 192.168.1.129 parent 3128 0 default no-query no-digest
cache_peer 192.168.1.201 parent 3128 0 sourcehash no-query no-digest
cache_peer 192.168.1.202 parent 3128 0 sourcehash no-query no-digest
cache_peer 192.168.1.203 parent 3128 0 sourcehash no-query no-digest
cache_peer 192.168.1.204 parent 3128 0 sourcehash no-query no-digest


# Time Out / Time To Live
negative_ttl 1 seconds
read_timeout 15 minutes
request_timeout 5 minutes
client_lifetime 4 hours
positive_dns_ttl 2 hours
negative_dns_ttl 5 minutes
shutdown_lifetime 5 seconds
dns_nameservers 127.0.0.1

# Miscellaneous
ftp_passive on
ftp_epsv off
logfile_rotate 2
request_header_access Via deny all
request_header_access X-Forwarded-For allow all
refresh_all_ims on

###
# CACHE OPERATION #
###

# Cache refresh settings
memory_cache_shared on
cache_mem 2 GB
max_filedesc 65535
maximum_object_size 512 MB
maximum_object_size_in_memory 2048 KB
ipcache_size 8192
fqdncache_size 8192

# cache definition
# 8 GB of shared rock cache, for objects up to 32 KB
cache_dir rock /var/cache/squid/mem/ 8192 max-size=32768

if ${process_number} =1
# Filtering with squidGuard
url_rewrite_program /usr/local/squidGuard/bin/squidGuard
url_rewrite_children 1000 startup=15 idle=15 concurrency=0
cache_dir aufs /var/cache/squid/mem/W${process_number} 2048 16 256 min-size=32768 max-size=131072
cache_dir aufs /var/cache/squid/W${process_number} 12000 16 256 min-size=131072
endif
if ${process_number} =2
# Filtering with squidGuard
url_rewrite_program /usr/local/squidGuard/bin/squidGuard
url_rewrite_children 1000 startup=15 idle=15 concurrency=0
cache_dir aufs /var/cache/squid/mem/W${process_number} 2048 16 256 min-size=32768 max-size=131072
cache_dir aufs /var/cache/squid/W${process_number} 12000 16 256 min-size=131072
endif
if ${process_number} =3
# Filtering with squidGuard
url_rewrite_program /usr/local/squidGuard/bin/squidGuard
url_rewrite_children 1000 startup=15 idle=15 concurrency=0
cache_dir aufs /var/cache/squid/mem/W${process_number} 2048 16 256 min-size=32768 max-size=131072
cache_dir aufs /var/cache/squid/W${process_number} 12000 16 256 min-size=131072
endif
if ${process_number} =4
# Filtering with squidGuard
url_rewrite_program /usr/local/squidGuard/bin/squidGuard
url_rewrite_children 1000 startup=15 idle=15 concurrency=0
cache_dir aufs /var/cache/squid/mem/W${process_number} 2048 16 256 min-size=32768 max-size=131072
cache_dir aufs /var/cache/squid/W${process_number} 12000 16 256 min-size=131072
endif

# dynamic pages are not cached
acl QUERY urlpath_regex cgi-bin \? \.fcgi \.cgi \.pl \.php3 \.asp \.php \.do
no_cache deny QUERY

# Override cache management rules for certain high-traffic domains
acl forcedcache urlpath_regex .lefigaro\.fr .leparisien\.fr .20minutes\.fr .lemde\.fr .lemonde\.fr .lepoint\.fr .lexpress\.fr .meteofrance\.com .ouest-france\.fr .nouvelobs\.com .wikimedia\.org

RE: [squid-users] Fwd: Squid compiled size

2013-11-20 Thread Jenny Lee
 On 11/20/2013 12:04 AM, Mohd Akhbar wrote:
 
 I compiled squid on Centos 6.2 64bit with
 
 ./configure --prefix=/usr --includedir=/usr/include
 --datadir=/usr/share --bindir=/usr/sbin --libexecdir=/usr/lib/squid
 --localstatedir=/var --sysconfdir=/etc/squid
 
 My compiled size for the squid binary in /usr/sbin/squid is 28 MB, but
 if I install squid from the rpm contributed by Eliezer it's only 2 MB
 (can't remember the exact size), definitely different from mine. Is
 there any problem with my compile method? Is that 28 MB ok?


Better to run stripped on the production machine and keep the unstripped
binary in case of segfaults.

cp /usr/sbin/squid /usr/sbin/squid.debug
strip /usr/sbin/squid

should bring it down to about 2 MB. If it crashes, give squid.debug to gdb.

Jenny 

[squid-users] Re: squid-2.7

2013-11-20 Thread Amos Jeffries
On 20/11/2013 6:55 p.m., z fazli wrote:
 hi
 
 I want to compare the performance of squid3 vs squid-2.7 in tproxy
 mode, and choose the best, but I cannot install squid-2.7 on 64-bit
 Ubuntu (on 32-bit there is no problem). Is it possible to install 2.7
 on Ubuntu? How?
 

The two are not comparable. They each require completely different
kernels and networking stacks, so the sheer amount of difference
invalidates any test results you might come up with.

Amos



Re: [squid-users] Squid stops handling requests after 30-35 requests

2013-11-20 Thread Bhagwat Yadav
Hi,

I enabled logging but didn't find any conclusive or decisive log
entries that I could forward to you.

In my testing, I am accessing the same URL 500 times in a loop from the
client using wget.
Squid hung sometimes after 120 requests, sometimes after 150 requests, as:

rm: cannot remove `index.html': No such file or directory
--2013-11-20 03:52:37--  http://www.naukri.com/
Resolving www.naukri.com... 23.72.136.235, 23.72.136.216
Connecting to www.naukri.com|23.72.136.235|:80... connected.
HTTP request sent, awaiting response... 503 Service Unavailable
2013-11-20 03:53:39 ERROR 503: Service Unavailable.


Whenever it hangs, it resumes after about 1 minute; e.g. in the above
example, after 03:52:37 the response came at 03:53:39.

Please provide more help.

Many Thanks,
Bhagwat

On Wed, Nov 20, 2013 at 1:44 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 20/11/2013 7:59 p.m., Bhagwat Yadav wrote:
 Hi,

 While doing some testing I am facing the issue that squid, after some
 time, is not able to process requests, and the client reports:

 Connecting to www.attblueroom.com|216.77.188.73|:80... failed:
 Connection refused.

 Please provide some help. How can I debug the issue? What is likely
 happening in such cases?

 Start with cache.log. Does it mention anything happening right before or
 at the time the problem appears?

 If there is no indication there try increasing the debug level:
  debug_options ALL,4



 Also this issue is not continuously reproducible.

 Is there any sign in access.log of something in the traffic when it happens?


 Amos


[squid-users] Cyberoam logging

2013-11-20 Thread alamb200
Hi,
I have just managed to get Squid working as a proxy server on my Windows
server and want to start looking at the information in the log files.
To do this I have downloaded Cyberoam onto the same server running Squid
and need to get them working together, but I cannot work out how to do
this.

In the install guide it says:

1. Update syslog-ng.conf with the text given below:
 
/etc/syslog-ng/syslog-ng.conf

but I cannot find this file anywhere.
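
For reference, on systems that do have syslog-ng, the kind of addition such install guides mean looks roughly like this (an illustrative sketch, not Cyberoam's actual directives; the collector address is hypothetical):

```
# follow squid's access log and forward each entry to the log collector
source s_squid { file("/var/log/squid/access.log" follow-freq(1)); };
destination d_collector { udp("192.0.2.10" port(514)); };
log { source(s_squid); destination(d_collector); };
```

On Windows there is normally no /etc/syslog-ng/syslog-ng.conf at all, which is why the file cannot be found.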

Can anyone help with this?

Thanks,

alamb200



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Cyberoam-logging-tp4663388.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] Re: Basic config file

2013-11-20 Thread alamb200
I went back to the basic config file and started again, and this is now
working.
Thanks for your help.



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Basic-config-file-tp4663325p4663389.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] squid 3.4.0.2 + smp + rock storage error

2013-11-20 Thread Alexandre Chappaz
here it is

2013/11/20 Eliezer Croitoru elie...@ngtech.co.il:
 Hey Alexandre,

 I do not see any cache.log attachment here.
 Please resend it.

 Thanks,
 Eliezer


 On 20/11/13 11:19, Alexandre Chappaz wrote:

 Hi,

 I have the same kind of error, but what bugs me is that I cannot
 reproduce it systematically. I am really wondering if this is a
 permission problem on the shm mount point and/or /var/run/squid
 permissions:

 Sometimes the service starts normally (the worker kids stay up) and
 sometimes some or all of the worker kids die with this error:

 FATAL: Ipc::Mem::Segment::open failed to
 shm_open(/squid-cache_mem.shm): (2) No such file or directory.



 Attached is the cache.log, and below is the squid.conf.

 Best regards




cache.log.bz2
Description: BZip2 compressed data


Re: [squid-users] Squid stops handling requests after 30-35 requests

2013-11-20 Thread Eliezer Croitoru

Hey,

Can you try another test?
It is very nice to use wget, but there are a couple of options that need
to be considered.

Just to help: if it was not there until now, add --delete-after to the
wget command line.

It's not related to squid, but it helps a lot.
Now, if you are up to it, I will be happy to see the machine specs and OS.
Also, what is the output of squid -v?

Can you ping the machine at the time it gets stuck? What about tcp-ping,
or nc -v squid_ip port?
We also need to verify in the access logs that it's not naukri.com that
thinks your client is trying to turn it into a DDoS target.

What about trying to access other resources?
What is written in this 503 response page?

Eliezer

On 20/11/13 12:35, Bhagwat Yadav wrote:

Hi,

I enabled logging but didn't find any conclusive or decisive log
entries that I could forward to you.

In my testing, I am accessing the same URL 500 times in a loop from the
client using wget.
Squid hung sometimes after 120 requests, sometimes after 150 requests, as:

rm: cannot remove `index.html': No such file or directory
--2013-11-20 03:52:37--  http://www.naukri.com/
Resolving www.naukri.com... 23.72.136.235, 23.72.136.216
Connecting to www.naukri.com|23.72.136.235|:80... connected.
HTTP request sent, awaiting response... 503 Service Unavailable
2013-11-20 03:53:39 ERROR 503: Service Unavailable.


Whenever it hangs, it resumes after about 1 minute; e.g. in the above
example, after 03:52:37 the response came at 03:53:39.

Please provide more help.

Many Thanks,
Bhagwat




[squid-users] Re: Issue with Squid_ldap_group (Windows) ?

2013-11-20 Thread Raf
Ok.

I found that in squid 3.x the ldap helper has changed from the previous
release; instead of squid_ldap_group there's the helper basic_ldap_auth
(located in /usr/lib64/squid on Fedora 18 x64).

( http://www.squid-cache.org/Versions/v3/3.2/RELEASENOTES.html#ss4.2 )

After some problems with the firewall configuration and some tests with
basic_ldap_auth, inserting the line below in squid.conf, together with
the ldap-auth ACL, grants internet access only to active directory users.

auth_param basic program /usr/lib64/squid/basic_ldap_auth -R -b
dc=domain,dc=local -D CN=ADUser,OU=OU-ADUser,dc=domain,dc=local -w
pwd-ADUser -f sAMAccountName=%s -h IP-Ldap-Server:389
.
.
.
acl ldap-auth proxy_auth REQUIRED
http_access allow ldap-auth


Now I must find out how to restrict access to users belonging to a single
group in active directory… in previous versions the external ACL helper
was squid_ldap_group. In squid 3.3.2 it seems to be ext_ldap_group_acl…
but I can't find it!

…and the story goes on …
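
For reference, once the helper is located, a group check is typically wired up along these lines (a sketch only; the group OU, group name and ACL names are hypothetical, and the bind options mirror the basic_ldap_auth line above):

```
external_acl_type ldap_group %LOGIN /usr/lib64/squid/ext_ldap_group_acl \
    -R -b "dc=domain,dc=local" \
    -D "CN=ADUser,OU=OU-ADUser,dc=domain,dc=local" -w "pwd-ADUser" \
    -f "(&(sAMAccountName=%v)(memberOf=CN=%g,OU=Groups,dc=domain,dc=local))" \
    -h IP-Ldap-Server
acl internet-group external ldap_group InternetUsers
http_access allow internet-group
```

The %v/%g placeholders in the filter are the login name and the group name passed after the external ACL on the acl line.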




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Issue-with-Squid-ldap-group-Windows-tp4663221p4663395.html
Sent from the Squid - Users mailing list archive at Nabble.com.


[squid-users] unusual TCP_DENIED situation

2013-11-20 Thread Mark Redding
Hello all,

I run the network for a UK boarding school (1000 pupils and around 400
staff) and use a combination of squid and dansguardian to provide time
controlled and filtered web access for all users. From time to time a
number of users have reported receiving squid access denied messages -
though if they try accessing the same site a minute or two later they
get through to it without any issue. Is anyone able to shed light on the
conditions under which a TCP_DENIED/403 message may be returned by squid
(apart from the obvious ACL-based ones)?

My configuration is thus :

FreeBSD 8.3-RELEASE-p3 amd64 running squid-3.1.19

I actually run 5 squid instances on the server, all of which use a common
set of configuration files, yet it is only one particular class of user
(the teaching staff), who are directed via an instance with a slightly
different configuration than the others, that experiences the issue.

The important part of the configuration for this instance is as follows :-

visible_hostname www-proxy-a-s
http_port 10.129.128.31:8081

pid_filename /var/log/proxy/squids.pid

icp_port 0

#cache_dir null /cache

cache_peer 127.0.0.1 parent 7000 0 no-query no-digest sourcehash name=c0
cache_peer 127.0.0.1 parent 7001 0 no-query no-digest sourcehash name=c1
cache_peer 127.0.0.1 parent 7002 0 no-query no-digest sourcehash name=c2
cache_peer 127.0.0.1 parent 7003 0 no-query no-digest sourcehash name=c3

include /usr/local/etc/squid/confs/squidmachines.conf
include /usr/local/etc/squid/confs/staff/squidcontrols.conf
cache_peer_access c0 deny full_access_users
cache_peer_access c1 deny full_access_users
cache_peer_access c2 deny full_access_users
cache_peer_access c3 deny full_access_users
cache_peer_access c0 deny direct_sites
cache_peer_access c1 deny direct_sites
cache_peer_access c2 deny direct_sites
cache_peer_access c3 deny direct_sites
include /usr/local/etc/squid/confs/staff/squidaccess.conf
http_access deny pupil_own_machines
http_access deny guest_own_machines
include /usr/local/etc/squid/confs/staff/squiddelay.conf
include /usr/local/etc/squid/confs/squidcommon.conf

access_log /var/log/proxy/squidsaccess.log fsquid
cache_log /var/log/proxy/squidscache.log

always_direct allow full_access_users
always_direct allow direct_sites

tcp_outgoing_address our-public-ip-address full_access_users
tcp_outgoing_address our-public-ip-address direct_sites



The 'cache peers' are actually dansguardian instances (each having a max
of 192 processes available), and the machines being used are not in the
'full_access_users' list, nor are the sites they are attempting to
access in the 'direct_sites' list.

Has anyone else experienced such behaviour in a similar environment ?

regards,
Mark Redding



[squid-users] Re: Issue with Squid_ldap_group (Windows) ?

2013-11-20 Thread Raf
Can someone help me?

Is the external helper ext_ldap_group_acl automatically present when I
install squid on Fedora 18 (x64)?

After a fresh installation of Fedora 18 (with gnome) I installed squid
as root (yum install squid): I find the helper basic_ldap_auth but I
don't see ext_ldap_group_acl!

Is this a bug in Squid/Fedora?
If not, what must I do to use this external helper? Must I
recompile/update squid? How?

This issue is making me crazy!



--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Issue-with-Squid-ldap-group-Windows-tp4663221p4663397.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] squid 3.4.0.2 + smp + rock storage error

2013-11-20 Thread Alex Rousskov
On 11/20/2013 02:19 AM, Alexandre Chappaz wrote:

 I have the same kind of error but what bugs me is that I cannot
 reproduce this systematically. I am really wondering if this is a
 permission PB on shm mount point and / or  /var/run/squid permissions
 :
 
 some times the service starts normally ( worker kids stay up ) and
 some times some or all of the the worker kids die with this error :
 
 FATAL: Ipc::Mem::Segment::open failed to
 shm_open(/squid-cache_mem.shm): (2) No such file or directory.


This is usually caused by two SMP Squid instances running, which is
usually caused by an incorrect "squid -z" invocation in the system
startup/service scripts. YMMV, but the logs you posted later seem to
suggest that this is exactly what is happening in your case.

Do you run "squid -z" from the system startup/service script? If yes,
does the script assume that cache initialization ends when the squid -z
command returns? If yes, the script should be modified to avoid that
assumption because, in recent Squid releases, the squid -z instance
continues to run (in the background) and clashes with the regular squid
instance started by the same script a moment later.
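
A script can avoid this race by explicitly waiting for the background instance to exit before starting the real daemon. A sketch of the wait pattern, using a stand-in background job in place of the real squid -z kid process:

```shell
# "sleep 2 &" stands in for the squid -z kid that keeps running after
# the squid -z command itself has returned.
sleep 2 &
bgpid=$!

# Poll until the background process is gone; only then is it safe to
# start the regular squid instance.
while kill -0 "$bgpid" 2>/dev/null; do
    sleep 1
done

echo "cache_dir initialisation finished; safe to start squid"
```

In a real init script the polling would target the squid -z process (or its PID file) rather than a stand-in job.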

There was a recent squid-dev discussion about fixing squid -z. I am not
sure there was a strong consensus regarding the best solution, but I
hope that squid -z will start doing nothing (Squid will just exit with a
warning message about the deprecated option) in the foreseeable future,
while Squid instances will become capable of creating missing
directories at runtime, when needed (and allowed) to do so.

More details and a call for volunteers at
 http://www.squid-cache.org/mail-archive/squid-dev/201311/0017.html


HTH,

Alex.



[squid-users] Re: Cannot get basic_ldap_auth to work with AD

2013-11-20 Thread Brig
Hi Peter,

Thx for the replies! Your name sounds familiar; were you on the Squid
project about 18 years ago? My first Squid project was back then, when I
used it to develop a load balancer, and I wonder if we corresponded then?

Anyway here is the results of the four commands you asked me to issue:

1)

/u01/local/squid-3.3.10/helpers/basic_auth/LDAP/basic_ldap_auth -P -R -u cn -b cn=Users,dc=mydomain,dc=com -h 'ldap.mydomain.com'
brig {my passwd}
ERR Invalid credentials

2)

/u01/local/squid-3.3.10/helpers/basic_auth/LDAP/basic_ldap_auth -d -b 'dc=mydomain,dc=com' -f 'sAMAccountName=%s' -D 'cn=squidauth,ou=Users,dc=mydomain,dc=com' -w 'squidauth passwd' -t 3 -H 'ldap://ldap.mydomain.com'
brig {my passwd}
basic_ldap_auth: WARNING, could not bind to binddn 'Invalid credentials'
ERR Success

3)

ldapsearch -LLL -H ldap://ldap.mydomain.com -x -D 'CN=squidauth,OU=Users,OU=IT,DC=mydomain,DC=com' -w 'squidauth passwd' -b 'DC=mydomain,DC=com' '(sAMAccountName=brig)' dn

dn: CN=Brig,OU=Users,OU=IT,DC=mydomain,DC=com

ref: ldap://ForestDnsZones.mydomain.com/DC=ForestDnsZones,DC=mydomain,DC=com

ref: ldap://DomainDnsZones.mydomain.com/DC=DomainDnsZones,DC=mydomain,DC=com

ref: ldap://mydomain.com/CN=Configuration,DC=mydomain,DC=com

4)

ldapsearch -LLL -H ldap://ldap.mydomain.com -x -D 'CN=Brig,OU=Users,OU=IT,DC=mydomain,DC=com' -w 'my passwd' -b 'DC=mydomain,DC=com' '(sAMAccountName=brig)' dn

dn: CN=Brig,OU=Users,OU=IT,DC=mydomain,DC=com

ref: ldap://ForestDnsZones.mydomain.com/DC=ForestDnsZones,DC=mydomain,DC=com

ref: ldap://DomainDnsZones.mydomain.com/DC=DomainDnsZones,DC=mydomain,DC=com

ref: ldap://mydomain.com/CN=Configuration,DC=mydomain,DC=com


While doing this I spent an hour on the AD server too, looking for any
kind of errors, and found NOTHING! This reminded me how much I hate
working with M$ technology, because somehow I feel that if I were using
OpenLDAP I would see some kind of logging events that could help me
figure this out . . .

Thx again for your help!

Brig




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Cannot-get-basic-ldap-auth-to-work-with-AD-tp4663282p4663399.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] Replay Auth

2013-11-20 Thread FredB

 Objet: [squid-users] Replay Auth
 
 Hello,
 
 I'm trying to use squid with two authentication modes, first digest
 and second basic; all works without problem except one point:
 
 auth_param basic credentialsttl 1 hours
 
 The proxy never asks for the username and password again after 1 hour,
 so I found no way to force a replay with digest.
 squid stop and start are also without effect (I guess that the
 browser automatically replays its credentials).
 I would have to wait for the user to close his browser ...
 


Another question: how can I force certain kinds of browsers to use one
particular auth method or another?
For example, Firefox and IE only with digest.

Actually, if I close the first banner (digest) I can choose the second
(basic).

I only need the multiple-authentication choice when the browser cannot
use digest - wget, for example.

Thanks


Re: [squid-users] Replay Auth

2013-11-20 Thread Amos Jeffries

On 2013-11-21 03:23, FredB wrote:

Hello,

I'm trying to use squid with two identifications mode, first digest
and second basic, all works without problem except one point

auth_param basic credentialsttl 1 hours

The proxy never asks for the username and password again after 1 hour,
so I found no way to force a replay with digest.
squid stop and start are also without effect (I guess that the browser
automatically replays its credentials).
I would have to wait for the user to close his browser ...


What do you mean by "claim"?

The browser is expected to deliver credentials on every request and the 
proxy to validate them. The credentialsttl is only about how often Squid 
has to query the backend to validate them. When the TTL expires the 
authenticator backend is checked, exactly the same as on a new login. If 
it says they are still OK then a new credentialsttl period is started.
  When auth works properly the browser is only ever challenged at the 
start of the user's browsing session and not bothered again.


To force a change in credentials midway through a series of transactions 
you need to cause the proxy to emit another auth challenge. This can be 
done by denying one of the requests, using an access control line ending 
with either an auth re-validation to the backend (proxy_auth REQUIRED), 
a check against an explicit username (proxy_auth name), or an external 
ACL that depends on %LOGIN.


http://wiki.squid-cache.org/action/show/Features/Authentication#How_do_I_ask_for_authentication_of_an_already_authenticated_user.3F
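
That recipe boils down to a deny rule whose last ACL involves authentication. A minimal sketch (the session-helper path and ACL names are hypothetical, not from the wiki):

```
# hypothetical helper that answers OK only while the user's session is
# still considered valid; it receives the username via %LOGIN
external_acl_type session_check ttl=60 %LOGIN /usr/local/bin/check_session.sh
acl session external session_check

# because the last ACL on this deny line depends on %LOGIN, the denial
# is delivered as a 407 auth challenge rather than a plain 403
http_access deny !session
```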


Amos



Re: [squid-users] Re: Issue with Squid_ldap_group (Windows) ?

2013-11-20 Thread Amos Jeffries

On 2013-11-21 04:08, Raf wrote:


After some problems with the firewall configuration and some tests with
basic_ldap_auth, inserting the line below in squid.conf, together with
the ldap-auth ACL, grants internet access only to active directory users.

auth_param basic program /usr/lib64/squid/basic_ldap_auth -R -b
dc=domain,dc=local -D CN=ADUser,OU=OU-ADUser,dc=domain,dc=local -w
pwd-ADUser -f sAMAccountName=%s -h IP-Ldap-Server:389
.
.
.
acl ldap-auth proxy_auth REQUIRED
http_access allow ldap-auth


Now I must find out how to restrict access to users belonging to a single
group in active directory… in previous versions the external ACL helper
was squid_ldap_group.

In squid 3.3.2 it seems to be ext_ldap_group_acl… but I can't find it!


It should be right next to the basic_* authenticator (in the same 
directory anyway).


Amos



[squid-users] Re: Cannot get basic_ldap_auth to work with AD

2013-11-20 Thread Andrey
Hi

Which version of squid do you use?
Which OS do you use for squid?
Which version of AD do you use?
Is it LDAP over SSL?

Thanks.




--
View this message in context: 
http://squid-web-proxy-cache.1019090.n4.nabble.com/Cannot-get-basic-ldap-auth-to-work-with-AD-tp4663282p4663404.html
Sent from the Squid - Users mailing list archive at Nabble.com.


Re: [squid-users] unusual TCP_DENIED situation

2013-11-20 Thread Amos Jeffries

On 2013-11-21 04:39, Mark Redding wrote:

Hello all,

I run the network for a UK boarding school (1000 pupils and around 400
staff) and use a combination of squid and dansguardian to provide time
controlled and filtered web access for all users. From time to time a
number of users have reported receiving squid access denied messages -
though if they try accessing the same site a minute or two later they
get through to it without any issue. Is anyone able to shed light on the
conditions under which a TCP_DENIED/403 message may be returned by squid
(apart from the obvious ACL-based ones)?

My configuration is thus :

FreeBSD 8.3-RELEASE-p3 amd64 running squid-3.1.19


I suggest upgrading to the 3.3 release before going any further with
this. The old series have a few very strange bugs in auth behaviour
which are fixed by a large overhaul in recent releases (you could end up
putting a lot of time and effort in before finding that a simple upgrade
fixes your issue).


Amos


[squid-users] Re: Cannot get basic_ldap_auth to work with AD

2013-11-20 Thread Brig
Hi Andrey,

Ubuntu 11.04
Squid 3.3.10 (compiled natively on Ubuntu 11.04)
AD Version: 5.2.3790.3959  (would not surprise me if this AD version is out
of date)
No SSL

I am not really an M$ guy, so I do not know a whole lot about the AD side
of it, except that I am finding it extremely difficult to integrate with
Squid on linux ;-)  I have Admin access to the AD server, yet as I
mentioned earlier it was useless in trying to help diagnose this problem!

Thx for any help you can provide!

Brig





[squid-users] Re: Cannot get basic_ldap_auth to work with AD

2013-11-20 Thread Andrey
Ok, so you have Windows Server 2003 R2.
Do you have all updates installed on the Windows server?
What does netstat -aon show in cmd?
Is port 389 open?

3.3.10 should work... Did you build it yourself?
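
For example, in cmd on the AD server the listing can be filtered down to the
LDAP port with the built-in findstr:

```
netstat -aon | findstr :389
```

A line such as "TCP  0.0.0.0:389  0.0.0.0:0  LISTENING  <pid>" (or the UDP
equivalent) would confirm the directory service is listening.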





[squid-users] newbie: squid does not block https sites on blacklist

2013-11-20 Thread info
I'm running a CentOS 6 server (64-bit) with squid 3.3 as a transparent proxy
server and I'm using a blacklist. I installed squid from the tarball with
'--enable-ssl' and the program starts fine.
The blacklist is working for http sites but not for https sites. The 
relevant lines I have in squid.conf are:


acl squid-gambling dstdomain -i /etc/squid/blacklists/squid-gambling.acl
acl SSL_ports port 443
http_access deny squid-gambling
http_access deny CONNECT !SSL_ports

is there a way to verify whether the ssl portion of squid is actually 
working?
if my config is wrong, can anyone show me the correct method? I've searched 
on google for ages but can't find a solution. 
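
For comparison, in an explicit (non-transparent) setup the HTTPS blocking is
normally done on the CONNECT request, since only the hostname is visible
before the TLS handshake. A minimal sketch reusing the ACLs above (note that
an intercepting proxy never even sees port-443 traffic unless port 443 is
also redirected to an ssl-bump-enabled https_port):

```
acl squid-gambling dstdomain -i /etc/squid/blacklists/squid-gambling.acl
acl SSL_ports port 443
http_access deny CONNECT squid-gambling
http_access deny CONNECT !SSL_ports
http_access deny squid-gambling
```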



[squid-users] Re: Cannot get basic_ldap_auth to work with AD

2013-11-20 Thread Brig

Yes I compiled Squid myself on Ubuntu.

Our SA is pretty good so I would expect he has installed all the updates on
the M$ machine. 

Not sure why you need netstat, because as far as I can tell from the results
of tests #3 and #4 above, AD is working and I can get info out of it using
ldapsearch.

Here is the relevant netstat line though:

 UDP    10.0.0.12:389    *:*    600


I know we have the Openfire IM Server running on Linux successfully
authenticating against this AD server so I know that it 'should' work. In
fact when I started trying to get AD to work I essentially borrowed as much
of the LDAP config from the Openfire environment as I could to get started
yet so far no luck . . .

Thx!

Brig





[squid-users] Re: Cannot get basic_ldap_auth to work with AD

2013-11-20 Thread Andrey
Ok, hmm...

One more thing: did you follow this one?
http://wiki.squid-cache.org/ConfigExamples/Authenticate/Ldap#Windows_2003_Active_Directory_adjustments

I ask because I now use U13.10 with Ubuntu's Squid 3.3.8 from the
repository, and a Windows 2008 R2 AD. It is working well. However, as far as
I know there are some differences between w2k3 AD and w2k8 AD.

Moreover, since the Openfire IM Server is working with w2k3, look at the
LDAP configuration of Openfire. Maybe there are some useful options there.

If that does not help: if I were in your place I would take 13.04 or 13.10
(in VirtualBox) for test purposes and install the default Squid from the
repository to try connecting it to your w2k3. If it works, just search for
the difference. With 12.04 I got the same connection problem as you have
now; I did not solve it.

By the way, I got an idea while typing this message: Squid 3.2+ uses IPv6
for requests; maybe U11.04 does not provide such support?






[squid-users] Re: Cannot get basic_ldap_auth to work with AD

2013-11-20 Thread Brig

Thx for the feedback & ideas!

I realize 11.04 is old, and in fact I was going to EoL this server since it
is old, yet then I figured I would keep it around to use as a Squid proxy
test box. I did not think that being on 11.04 could be the problem
altogether!

I have other U 12.04.2 servers, yet it sounds like you had problems with
those too, huh? I do not have any U 13 servers yet. All of a sudden, what
should have been a quick half-day project is stretching out too long . . .

And yes I tried the AD Adjustments you pointed me to, that was one of the
first things I found/tried last week w/o luck.

Well I guess I will have to think about building a U13 test server and stop
spinning my wheels with U 11.04 or 12.04 for that matter . . .

Never would have thought a lil standalone helper program (basic_ldap_auth)
would be so finicky . . .

Thx!

Brig





Re: [squid-users] Replay Auth

2013-11-20 Thread Amos Jeffries
On 21/11/2013 6:28 a.m., FredB wrote:
 
 Objet: [squid-users] Replay Auth

 Hello,

 I'm trying to use squid with two identifications mode, first digest
 and second basic, all works without problem except one point

 auth_param basic credentialsttl 1 hours

 The proxy never claims the username and password after 1 hour, so I found
 no way to force the replay with digest.
 Squid stop and start also have no effect (I guess that the
 browser automatically replays its credentials).
 I have to wait for the user to close his browser ...

I have an idea and TODO list entry for making that happen. But nobody
has yet sponsored the few days' work that will take, and my spare time has
been dedicated towards other, more interesting developments.

 
 Another question: how can I force some kinds of browsers to use one
 particular auth method or another?
 For example, Firefox or IE only with digest.

You can't. See RFC 2617, section 1.2:

The user agent MUST choose to use one of the challenges with the
strongest auth-scheme it understands and request credentials from the
user based upon that challenge.


The only way to influence the browser selection from Squid is to not
offer some schemes. eg an access control list per-scheme. Which is the
idea mentioned above which has not been implemented.

You can possibly turn off support for some schemes in the browser
itself. But I've only heard of it being done to disable Digest and NTLM.
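
For example, offering both schemes is just a matter of configuring both; the
order of the auth_param blocks only sets the order of the challenge headers,
and per RFC 2617 the client still picks the strongest one it supports. A
sketch for a 3.2+ build (helper paths and password files are assumptions):

```
auth_param digest program /usr/lib/squid/digest_file_auth -c /etc/squid/digestpass
auth_param digest children 5
auth_param digest realm proxy
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
auth_param basic children 5
auth_param basic realm proxy
auth_param basic credentialsttl 1 hour
```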


Amos



[squid-users] Directives ignore-private and override-expire not working Squid 3.2 and 3.3

2013-11-20 Thread Lê Trung Kiên
Hello everyone,

I'm using these configurations, which work fine with Squid 3.1: every item
gets a HIT. However, these configurations don't work properly with Squid 3.2
and 3.3, because I always get a MISS for all items.

http_port 127.0.0.1:82 accel ignore-cc
cache_peer 192.168.2.43 parent 80 0 no-query originserver name=Site1 max-conn=15
cache_peer_domain Site1 mysite.com
refresh_pattern -i ((.)*) 30 30% 60 ignore-no-cache ignore-private ignore-reload ignore-no-store override-lastmod override-expire

Header from 3.3 version:

HTTP/1.1 200 OK
Cache-Control: private
Content-Length: 117991
Content-Type: text/html; charset=utf-8
Expires: Thu, 21 Nov 2013 03:12:14 GMT
Server: Microsoft-IIS/7.5
Date: Thu, 21 Nov 2013 03:12:15 GMT
X-Cache: MISS from localhost.localdomain
Connection: close

So what is wrong? Please help.



[squid-users] Re: Cannot get basic_ldap_auth to work with AD

2013-11-20 Thread Brig

Upgraded to ubuntu 14.04 and tried the bundled basic_ldap_auth binary, same
errors. Then recompiled Squid 3.3.10 and tried that basic_ldap_auth binary,
same errors . . .

I guess I just am not meant to use Squid with AD . . . :-( 





[squid-users] Re: Cannot get basic_ldap_auth to work with AD

2013-11-20 Thread Andrey
Did you try the default squid?

apt-get install squid3

Maybe something else uses the LDAP port?
Try with (if I am not wrong):

debug_options 82,0 84,9

Do you have Wireshark? Can you capture the LDAP requests to the Windows
server from Ubuntu?
Is the firewall on the Windows Server on?
From my practice it is better to switch the firewall off, configure all
services, capture the ports, and afterwards slowly bring it back up.
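
A capture can also be taken on the Ubuntu side without installing Wireshark
there; a minimal sketch (the interface name and server IP are assumptions):

```
tcpdump -i eth0 -n -s 0 -w ldap.pcap host 10.0.0.12 and port 389
```

Afterwards, open ldap.pcap in Wireshark and look at the LDAP bind
request/response exchange to see which step fails.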

It is very strange... I once got Squid 3.1.x working with W2k3 on U12.04,
but Squid 3.1.x with w2k8 on 12.04 did not work for me.

Oh yeah, one more thing: chmod... Do you have the right privileges? Do you
run as root? In my experience, helpers work only as the proxy user or as
root. The proxy user is Squid's default user.





Re: [squid-users] Re: Cannot get basic_ldap_auth to work with AD

2013-11-20 Thread Amos Jeffries
On 21/11/2013 5:00 p.m., Brig wrote:
 
 Upgraded to ubuntu 14.04 and tried the bundled basic_ldap_auth binary, same
 errors. Then recompiled Squid 3.3.10 tried that basic_ldap_auth binary, same
 errors . . .
 
 I guess I just am not meant to use Squid with AD . . . :-( 
 

Did you try the debug parameter on the latest helper version? -d.
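
For example, the helper can be run by hand with debugging on (the base DN,
bind DN, password, and server address below are placeholders):

```
/usr/lib/squid/basic_ldap_auth -d -R -v 3 -b "dc=example,dc=local" \
  -D "cn=squid,cn=Users,dc=example,dc=local" -w bindpassword \
  -f "sAMAccountName=%s" -h 10.0.0.12
```

Type "username password" on stdin; the helper answers OK or ERR and, with
-d, prints a trace of the LDAP search and bind steps to stderr.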

Amos



Re: [squid-users] Squid stops handling requests after 30-35 requests

2013-11-20 Thread Bhagwat Yadav
Hi Eliezer/All,

Thanks for your help.

PFA log snippets.
Log1.txt has sample 1 of cache.log, in which you can find the time gap.
Log2.txt has sample 2 of the client output and cache.log, showing the time gap.

It seems that there is some in-memory operation, StatHistCopy, which
is causing this issue; not sure though.

Squid version is: Squid Cache: Version 3.1.6.

Please let me know if these logs are helpful.


Thanks & Regards,

On Wed, Nov 20, 2013 at 6:11 PM, Eliezer Croitoru elie...@ngtech.co.il wrote:
 Hey,

 Can you try another test?
 It is very nice to use wget, but there are a couple of options that need
 to be considered.
 Just to help you, if it was not there until now, add --delete-after
 to the wget command line.

 It's not related to squid but it helps a lot.
 Now If you are up to it I will be happy to see the machine specs and OS.
 Also what is squid -v output?

 Can you ping the machine at the time it got stuck? what about tcp-ping or
 nc -v squid_ip port ?
 we need to verify also in the access logs that it's not naukri.com that
 thinks your client is trying to covert it into a DDOS target.
 What about trying to access other resources?
 What is written in this 503 response page?
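
The suggested loop with --delete-after then looks roughly like this (the
proxy address here is an assumption):

```
export http_proxy=http://192.168.5.1:3128
for i in $(seq 1 500); do
    wget -q --delete-after http://www.naukri.com/ || echo "request $i failed at $(date)"
done
```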

 Eliezer


 On 20/11/13 12:35, Bhagwat Yadav wrote:

 Hi,

 I enabled the logging but didn't find any conclusive or decisive logs
 that I could forward to you.

 In my testing, I am accessing the same URL 500 times in a loop from the
 client using wget.
 Squid hangs sometimes after 120 requests, sometimes after 150
 requests, as:

 rm: cannot remove `index.html': No such file or directory
 --2013-11-20 03:52:37--  http://www.naukri.com/
 Resolving www.naukri.com... 23.72.136.235, 23.72.136.216
 Connecting to www.naukri.com|23.72.136.235|:80... connected.

 HTTP request sent, awaiting response... 503 Service Unavailable
 2013-11-20 03:53:39 ERROR 503: Service Unavailable.


 Whenever it hangs, it resumes after ~1 minute; e.g. in the above example,
 after the request at 03:52:37 the response came at 03:53:39.

 Please provide more help.

 Many Thanks,
 Bhagwat


This is the gap of ~1 min visible in the log file:

2013/11/21 00:42:44.246| fwdComplete: server FD -1 not re-forwarding status 503
2013/11/21 00:43:45.175| fwdComplete: server FD -1 not re-forwarding status 503
+++
Also, in the lines below, a gap is observed between these two lines:
2013/11/21 00:42:44.260| clientReadSomeData: FD 8: reading request...
2013/11/21 00:43:28.921| AuthUser::cacheCleanup: Cleaning the user cache now

++
2013/11/21 00:42:44.260| PconnPool::pop: lookup for key 
{www.naukri.com:80-192.168.5.22} failed.
2013/11/21 00:42:44.260| FilledChecklist.cc(168) ~ACLFilledChecklist: 
ACLFilledChecklist destroyed 0x7fff61ed2990
2013/11/21 00:42:44.260| ACLChecklist::~ACLChecklist: destroyed 0x7fff61ed2990
2013/11/21 00:42:44.260| fwdConnectStart: got outgoing addr 192.168.5.22, tos 0
2013/11/21 00:42:44.260| comm_openex: Attempt open socket for: 192.168.5.22
2013/11/21 00:42:44.260| comm_openex: Opened socket FD 10 : family=2, type=1, 
protocol=6
2013/11/21 00:42:44.260| fd_open() FD 10 http://www.naukri.com/
2013/11/21 00:42:44.260| fwdConnectStart: got TCP FD 10
2013/11/21 00:42:44.260| The AsyncCall SomeCloseHandler constructed, 
this=0x12ca220 [call97241]
2013/11/21 00:42:44.260| comm.cc(1195) commSetTimeout: FD 10 timeout 60
2013/11/21 00:42:44.260| The AsyncCall SomeTimeoutHandler constructed, 
this=0x13bc920 [call97242]
2013/11/21 00:42:44.260| comm.cc(1206) commSetTimeout: FD 10 timeout 60
2013/11/21 00:42:44.260| The AsyncCall SomeCommConnectHandler constructed, 
this=0xe37dc0 [call97243]
2013/11/21 00:42:44.260| commConnectStart: FD 10, cb 0xe37dc0*1, 
www.naukri.com:80
2013/11/21 00:42:44.260| The AsyncCall SomeCloseHandler constructed, 
this=0x12c7ba0 [call97244]
2013/11/21 00:42:44.260| ipcache_nbgethostbyname: Name 'www.naukri.com'.
2013/11/21 00:42:44.260| ipcache_nbgethostbyname: HIT for 'www.naukri.com'
2013/11/21 00:42:44.260| StoreEntry::unlock: key 
'5502B1298080D6C371128B36A03F5C69' count=2
2013/11/21 00:42:44.260| FilledChecklist.cc(168) ~ACLFilledChecklist: 
ACLFilledChecklist destroyed 0xe3d8e8
2013/11/21 00:42:44.260| ACLChecklist::~ACLChecklist: destroyed 0xe3d8e8
2013/11/21 00:42:44.260| FilledChecklist.cc(168) ~ACLFilledChecklist: 
ACLFilledChecklist destroyed 0xe3d688
2013/11/21 00:42:44.260| ACLChecklist::~ACLChecklist: destroyed 0xe3d688
2013/11/21 00:42:44.260| clientReadSomeData: FD 8: reading request...
2013/11/21 00:43:28.921| AuthUser::cacheCleanup: Cleaning the user cache now
2013/11/21 00:43:28.921| AuthUser::cacheCleanup: Current time: 1385016208
2013/11/21 00:43:28.921| AuthUser::cacheCleanup: Finished cleaning the user 
cache.
2013/11/21 00:43:29.368| statHistCopy: Dest=0x8e94a8, Orig=0x936168
2013/11/21 00:43:29.368| statHistCopy: capacity 300 300
2013/11/21 00:43:29.368|