[squid-users] SSLBUMP Issue with SSL websites

2012-07-10 Thread Muhammad Shehata
Dears,
I hope you are all doing well.
I have been following the replies on the squid-users mailing list about
ssl_bump issues where some websites, such as https://gmail.com and
https://facebook.com, render without images or CSS style sheets. I have the
same issue with squid 3.1.19. I understand that when ssl_bump is enabled it
intercepts the CONNECT method and turns it into GET requests, so I used a
broken-sites ACL to exclude those domains. For the excluded websites the
logged method is indeed CONNECT rather than GET (unlike all the other bumped
sites), but the result is still the same:
1341837646.893  45801 x.x.x.x TCP_MISS/200 62017 CONNECT twitter.com:443 -
DIRECT/199.59.150.7

acl broken_sites dstdomain .twitter.com
acl broken_sites dstdomain .facebook.com
ssl_bump deny broken_sites
ssl_bump allow all
http_port 192.168.0.1:3128  ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=40MB  cert=/etc/pki/tls/certs/sslintercept.crt 
key=/etc/pki/tls/certs/sslintercept.key
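
For reference, the intent of that exclusion (shown here annotated, with the
same ACL names) is that CONNECT requests for the listed domains are tunnelled
untouched instead of being bumped, which is why they still appear as CONNECT
in access.log:

# domains whose TLS traffic should not be intercepted
acl broken_sites dstdomain .twitter.com
acl broken_sites dstdomain .facebook.com
# do not bump these (leave them as plain CONNECT tunnels); bump everything else
ssl_bump deny broken_sites
ssl_bump allow all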

[squid-users] different geographical location problem

2012-07-10 Thread ajendra singh
Hi,

My squid proxy (location A) and HTTP client (location B) are running
in different geographical locations.
Whenever I make a request to Google it shows me the Google page
for location A (the location where the proxy is).
Is there some way to change this so that I get the Google page for
location B (the HTTP client) instead of location A?
I have searched on the net and found that using TPROXY is one of the ways.
Is there any other way (perhaps by changing the config)?

--aj


Re: [squid-users] Cannot run '/usr/lib/squid3/squid_session' process

2012-07-10 Thread Stefanie Clormann
I did run it from the command line, with all parameters, as the proxy
user.

It did not give me any errors, and the permissions are right.

So, any more clues?
Thanks,
Stefanie

On 26.06.2012 20:08, Eliezer Croitoru wrote:
I don't know this external ACL, but I would first try to run it from
the command line with all the parameters:

/usr/lib/squid3/squid_session -t 60 -b /usr/share/squid3/session.db

That should give you a feel for whether it runs or not.
It's Ubuntu, so try sudo and also the su - proxy command,
so you can test both root permissions and the proxy user
that runs the command for Squid.
Also make sure that the whole directory tree under /usr/lib/squid3/ has
permissions that allow the proxy user access to it.

Maybe the specific file has the right permissions but the parent directories don't.

Regards,
Eliezer

On 6/26/2012 1:23 PM, Stefanie Clormann wrote:

Hi,

I am running an Ubuntu Linux Server 12.04 - 64 bit - Kernel
3.2.0-24-generic, and the squid3 package (3.1.19-1ubuntu3).

I wanted to try the following:
# Test 2
external_acl_type splash_page ttl=60 concurrency=200 %SRC
/usr/lib/squid3/squid_session -t 60 -b /usr/share/squid3/session.db
acl existing_users external splash_page
deny_info splash.html existing_users
http_access deny !existing_users

and I get this error:
2012/06/26 12:00:52| helperOpenServers: Starting 5/5 'squid_session'
processes
2012/06/26 12:01:55| WARNING: Cannot run '/usr/lib/squid3/squid_session'
process.
2012/06/26 12:02:58| WARNING: Cannot run '/usr/lib/squid3/squid_session'
process.
2012/06/26 12:04:01| WARNING: Cannot run '/usr/lib/squid3/squid_session'
process.
2012/06/26 12:05:04| WARNING: Cannot run '/usr/lib/squid3/squid_session'
process.

Output of:
ls -la /usr/lib/squid3/squid_session:
-rwxr-xr-x 1 root root 10200 Jun 21 11:53 /usr/lib/squid3/squid_session
ls -la /usr/share/squid3/session.db:
-rw-r--r-- 1 proxy proxy 0 May 16 13:32 /usr/share/squid3/session.db


I also tried a version compiled from source (squid-3.1.20), but it gives me
the same error.

What could be the problem?
Stefanie

[squid-users] Rules problem

2012-07-10 Thread Carlo Filippetto
Hi all,
I need to create rules where some users, logged in with NTLM, must
be restricted to only a few sites.

I tried something like this:


acl RESTRICTED_USER proxy_auth /etc/squid/restricted_user.allow
acl RESTRICTED_WEB dstdomain /etc/squid/restricted_web.limited

http_reply_access allow RESTRICTED_WEB RESTRICTED_USER
http_reply_access deny all RESTRICTED_USER


It works, but other users seem to be affected by continuous
authentication requests.

Any suggestion?

Thanks

---
Carlo


Re: [squid-users] different geographical location problem

2012-07-10 Thread Amos Jeffries

On 10/07/2012 7:24 p.m., ajendra singh wrote:

Hi,

My squid proxy (location A) and HTTP client (location B) are running
in different geographical locations.
Whenever I make a request to Google it shows me the Google page
for location A (the location where the proxy is).
Is there some way to change this so that I get the Google page for
location B (the HTTP client) instead of location A?
I have searched on the net and found that using TPROXY is one of the ways.
Is there any other way (perhaps by changing the config)?

--aj


This is how Google operates.

TPROXY will not help unless the proxy and the clients are within the
same network path. It is designed to be used on gateway proxies in ISP
situations. With separate locations you will have triangular routing
problems, as the packets from Google go straight back to the client
instead of the proxy. Using VPN or tunnel solutions to avoid that
will make the client get the location-B responses from Google anyway.


The best you can do is ensure the forwarded_for and via features are both ON,
and hope that the Google servers are paying attention to the client's
location information they contain.
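
In squid.conf terms that is just the following pair of directives (shown with
their ON values; both are normally on by default unless changed elsewhere in
the config):

# include the client IP in X-Forwarded-For and add this proxy to the Via header
forwarded_for on
via on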


Amos



Re: [squid-users] Rules problem

2012-07-10 Thread Amos Jeffries

On 10/07/2012 8:22 p.m., Carlo Filippetto wrote:

Hi all,
I need to create rules where some users, logged in with NTLM, must
be restricted to only a few sites.

I tried something like this:


acl RESTRICTED_USER proxy_auth /etc/squid/restricted_user.allow
acl RESTRICTED_WEB dstdomain /etc/squid/restricted_web.limited

http_reply_access allow RESTRICTED_WEB RESTRICTED_USER
http_reply_access deny all RESTRICTED_USER


The magic ACL 'all' only means something when it is on the end (right-hand
side) of the line.


By placing 'all' on the end of a line containing authentication ACLs you
prevent the login challenge from being issued by *that* line.


Also note that by doing these restrictions on *reply* access, the
user/client's details have already been sent to the remote website
for processing. Only the remote website's response is blocked from
delivery to the client. NTLM could be doing some very strange things
with its multiple requests.
  There is no reason why these rules cannot be done in http_access,
where it is safer and NTLM cannot have such dangerous side effects. I
suggest moving them and seeing what improves.






It works, but other users seem to be affected by continuous
authentication requests.


By 'user', do you mean other already-logged-in *users*, or non-login
*clients*?



Amos


Re: [squid-users] Rules problem

2012-07-10 Thread Carlo Filippetto
2012/7/10 Amos Jeffries squ...@treenet.co.nz:
 On 10/07/2012 8:22 p.m., Carlo Filippetto wrote:

 Hi all,
 I need to create rules where some users, logged in with NTLM, must
 be restricted to only a few sites.

 I tried something like this:


 acl RESTRICTED_USER proxy_auth /etc/squid/restricted_user.allow
 acl RESTRICTED_WEB dstdomain /etc/squid/restricted_web.limited

 http_reply_access allow RESTRICTED_WEB RESTRICTED_USER
 http_reply_access deny all RESTRICTED_USER


 The magic ACL 'all' only means something when it is on the end (right-hand
 side) of the line.

 By placing 'all' on the end of a line containing authentication ACLs you
 prevent the login challenge from being issued by *that* line.

 Also note that by doing these restrictions on *reply* access, it means the
 user/client's details have already been sent to the remote website for
 processing. Only the remote website's response is blocked from delivery to the
 client. NTLM could be doing some very strange things with its multiple
 requests.
   There is no reason why these rules cannot be done in http_access, where it
 is safer and NTLM cannot have such dangerous side effects. I suggest moving
 them and seeing what improves.



I tried to use http_access, but in that case on every page I tried to
access outside the restricted ones I received an authentication
request, and that isn't a good thing.

Now I have removed the 'all' from the second http_reply_access line and
it seems to work fine.

Thanks for the explanation on the use of http_reply_access, but I
don't know another directive that blocks the sites and doesn't ask for
authentication.







 It works, but other users seem to be affected by continuous
 authentication requests.


 By 'user', do you mean other already-logged-in *users*, or non-login
 *clients*?


 Amos


First of all I authenticate all the users; only a listed subset of these users
cannot surf the web freely and is limited as described above.

Thanks

---
Carlo


[squid-users] strange behavior with https sites and ntlm/basic authentication

2012-07-10 Thread Bruno Santos

Hi all !

I finally (sort of) managed to get Squid working with NTLM authentication. I now
have it working as I want, but there is a configuration change I had to make,
and the reason why keeps bugging me.

Everything was working fine until I reached HTTPS sites.

I had both types of authentication enabled, NTLM and Basic (the latter for those
under Linux or not using an NTLM-enabled browser):

# NTLM authentication - Winbind - AD
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 300
auth_param ntlm keep_alive off

auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 100
auth_param basic realm Por favor autentique-se!
auth_param basic credentialsttl 2 hours

acl ntlmAuth proxy_auth REQUIRED



This configuration worked fine, except that those with NTLM (Windows + IE/Firefox)
were asked for authentication (which shouldn't happen). Those on Linux worked
just fine (with an authentication dialog) and every site appeared as it should.


If I remove the Basic authentication, those with Windows (IE and Firefox) are
NOT asked for authentication and those using Linux are asked for authentication
(everything fine here). Here is the problem:

Those using Linux can't access (most) HTTPS sites. It just gives:

 TCP_DENIED/407 3833 CONNECT twitter.com:443 - NONE/- text/html

And nothing happens...

So I decided to do an experiment.

In squid.conf, I changed:

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

to

http_access allow CONNECT SSL_ports

And suddenly all those HTTPS sites began working...

Well, my question is:

Is this correct? What would be happening with the other configuration? Is it
safe?

I hope someone can shed some light on this matter.

Thank you all




--

Use Open Source Software
Human knowledge belongs to the world
Bruno Santos
bvsan...@ulscb.min-saude.pt
http://www.twitter.com/feiticeir0
Tel: +351 962 753 053
Divisão de Informática
informat...@ulscb.min-saude.pt
Tel: +351 272 000 155
Fax: +351 272 000 257
Unidade Local de Saúde de Castelo Branco, E.P.E.
ge...@ulscb.min-saude.pt
Tel: +351 272 000 272
Fax: +351 272 000 257

Linux registered user #349448

LPIC-1 Certification


Re: [squid-users] strange behavior with https sites and ntlm/basic authentication

2012-07-10 Thread Amos Jeffries

On 10/07/2012 9:59 p.m., Bruno Santos wrote:

Hi all !

I finally (sort of) managed to get Squid working with NTLM authentication. I now
have it working as I want, but there is a configuration change I had to make,
and the reason why keeps bugging me.

Everything was working fine until I reached HTTPS sites.

I had both types of authentication enabled, NTLM and Basic (the latter for those
under Linux or not using an NTLM-enabled browser):

# NTLM authentication - Winbind - AD
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm children 300
auth_param ntlm keep_alive off

auth_param basic program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-basic
auth_param basic children 100
auth_param basic realm Por favor autentique-se!
auth_param basic credentialsttl 2 hours

acl ntlmAuth proxy_auth REQUIRED



This configuration worked fine, except that those with NTLM (Windows + IE/Firefox)
were asked for authentication (which shouldn't happen). Those on Linux worked
just fine (with an authentication dialog) and every site appeared as it should.


If I remove the Basic authentication, those with Windows (IE and Firefox) are
NOT asked for authentication and those using Linux are asked for authentication
(everything fine here). Here is the problem:


By 'those' I assume you mean the persons/users, and not their browser
agents.


By 'asked' I assume you mean the auth popup window, and not the 407
proxy challenge.


Popups are a browser feature; when one appears is decided *only* by the
browser, usually because it was unable to find any working credentials
that could be used [some browsers are broken].


Ideally no user would be asked for authentication when NTLM is used. The
grand benefit offered by NTLM is that it works from the user's network
login credentials, and the browser never has to ask them to type anything.




Those using Linux can't access (most) https sites. It just gives:

  TCP_DENIED/407 3833 CONNECT twitter.com:443 - NONE/- text/html

And nothing happens...


Most likely your auth_param ntlm keep_alive off setting is breaking the
fragile support the CONNECT method has for NTLM.
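
A quick thing to test, if that is the cause, is simply turning keep-alive back
on and leaving the rest of the auth_param ntlm block as it is:

auth_param ntlm keep_alive on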




So I decided to do an experiment.

In squid.conf, I changed:

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

to

http_access allow CONNECT SSL_ports

And suddenly all those HTTPS sites began working...

Of course. You just bypassed authentication.



Well, my question is:

Is this correct? What would be happening with the other configuration? Is it
safe?


No; see above. And no, it is not safe: it allows anyone unlimited access to
tunnel via the CONNECT method to SSL_ports.
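
For reference, the usual shape that keeps CONNECT restricted to the SSL ports
while still requiring authentication (a sketch only, reusing the ntlmAuth ACL
from the configuration above) is:

http_access deny CONNECT !SSL_ports
http_access allow ntlmAuth
http_access deny all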


HTH
Amos


Re: [squid-users] Rules problem

2012-07-10 Thread Amos Jeffries

On 10/07/2012 9:37 p.m., Carlo Filippetto wrote:

2012/7/10 Amos Jeffries squ...@treenet.co.nz:

On 10/07/2012 8:22 p.m., Carlo Filippetto wrote:

Hi all,
I need to create rules where some users, logged in with NTLM, must
be restricted to only a few sites.

I tried something like this:


acl RESTRICTED_USER proxy_auth /etc/squid/restricted_user.allow
acl RESTRICTED_WEB dstdomain /etc/squid/restricted_web.limited

http_reply_access allow RESTRICTED_WEB RESTRICTED_USER
http_reply_access deny all RESTRICTED_USER


The magic ACL 'all' only means something when it is on the end (right-hand
side) of the line.

By placing 'all' on the end of a line containing authentication ACLs you
prevent the login challenge from being issued by *that* line.

Also note that by doing these restrictions on *reply* access, it means the
user/client's details have already been sent to the remote website for
processing. Only the remote website's response is blocked from delivery to the
client. NTLM could be doing some very strange things with its multiple
requests.
   There is no reason why these rules cannot be done in http_access, where it
is safer and NTLM cannot have such dangerous side effects. I suggest moving
them and seeing what improves.



I tried to use http_access, but in that case on every page I tried to
access outside the restricted ones I received an authentication
request, and that isn't a good thing.


Clients who did not send credentials are asked to do so. Authentication
does not work without credentials.





Now I have removed the 'all' from the second http_reply_access line and
it seems to work fine.


Strange. As I said, 'all' was not doing anything on that line, just
wasting space in the config file.




Thanks for the explanation on the use of http_reply_access, but I
don't know another directive that blocks the sites and doesn't ask for
authentication.


Adding 'all' on the right-hand side of both lines, and making them
http_access instead of http_reply_access, will do that. Just make
sure these are below the lines which authenticate all your users.
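
Put together, the suggested lines take roughly this shape (a sketch only;
exactly where they sit relative to the existing rules that authenticate and
allow everybody else depends on the rest of the config):

http_access allow RESTRICTED_WEB RESTRICTED_USER all
http_access deny RESTRICTED_USER all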


Amos


[squid-users] Only Debug/Log TCP_DENIED/403

2012-07-10 Thread ml ml
Hello List,

Can I log/debug only TCP_DENIED/403 hits? I have a LOT of traffic and
I am only interested in TCP_DENIED.

Thanks,
Mario


Re: [squid-users] Only Debug/Log TCP_DENIED/403

2012-07-10 Thread Amos Jeffries

On 11/07/2012 12:25 a.m., ml ml wrote:

Hello List,

Can I log/debug only TCP_DENIED/403 hits? I have a LOT of traffic and
I am only interested in TCP_DENIED.

Thanks,
Mario


http://www.squid-cache.org/Doc/config/access_log/

Takes ACLs such as the http_status ACL.

Amos


Re: [squid-users] Only Debug/Log TCP_DENIED/403

2012-07-10 Thread ml ml
Hello Amos,

Thanks. I am using Squid version 3.1.19 and these rules:

acl DENY_ACCESS http_status 403
access_log daemon:/var/log/squid/DENY.log squid DENY_ACCESS

However I get:

2012/07/10 15:18:13| aclParseAclList: ACL name 'DENY_ACCESS' not found.
FATAL: Bungled squid.conf line 695: access_log
daemon:/var/log/squid/DENY.log squid DENY_ACCESS
Squid Cache (Version 3.1.19): Terminated abnormally.

What am I doing wrong here?

Thanks,
Mario

On Tue, Jul 10, 2012 at 2:45 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 11/07/2012 12:25 a.m., ml ml wrote:

 Hello List,

 Can I log/debug only TCP_DENIED/403 hits? I have a LOT of traffic and
 I am only interested in TCP_DENIED.

 Thanks,
 Mario


 http://www.squid-cache.org/Doc/config/access_log/

 Takes ACLs such as the http_status ACL.

 Amos


Re: [squid-users] Only Debug/Log TCP_DENIED/403

2012-07-10 Thread Alan
It's written clearly in the manual:
access_log module:place [logformat name [acl acl ...]]

In your case:
acl DENY_ACCESS http_status 403
access_log squid DENY_ACCESS

The "squid" name refers to a predefined logformat; see
http://www.squid-cache.org/Doc/config/logformat/


On Tue, Jul 10, 2012 at 10:23 PM, ml ml mliebher...@googlemail.com wrote:
 Hello Amos,

 Thanks. I am using Squid version 3.1.19 and these rules:

 acl DENY_ACCESS http_status 403
 access_log daemon:/var/log/squid/DENY.log squid DENY_ACCESS

 However I get:

 2012/07/10 15:18:13| aclParseAclList: ACL name 'DENY_ACCESS' not found.
 FATAL: Bungled squid.conf line 695: access_log
 daemon:/var/log/squid/DENY.log squid DENY_ACCESS
 Squid Cache (Version 3.1.19): Terminated abnormally.

 What am I doing wrong here?

 Thanks,
 Mario

 On Tue, Jul 10, 2012 at 2:45 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 11/07/2012 12:25 a.m., ml ml wrote:

 Hello List,

 Can I log/debug only TCP_DENIED/403 hits? I have a LOT of traffic and
 I am only interested in TCP_DENIED.

 Thanks,
 Mario


 http://www.squid-cache.org/Doc/config/access_log/

 Takes ACLs such as the http_status ACL.

 Amos


Re: [squid-users] external_acl_type helper problems

2012-07-10 Thread Alan
I suggest you try the squid 2.7 or 3.2 series.
I had some strange problems with the 3.1 series, and I think external ACLs
were one of those problems.
When I tested 2.7 and 3.2, all the strange problems were gone. I know
2.7 sounds old, but it is incredibly fast compared to the rest.

Regarding your script, keep in mind that Squid is able to cache
results from external ACLs, so even if the script is not very efficient
you can take advantage of that caching. Read the docs on external
ACLs.
But anyway, if you post your script someone might be able to help with
that as well.
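
On the concurrency question quoted below: with concurrency=N on the
external_acl_type line, Squid prefixes every request it sends to the helper
with a channel ID, and the helper must echo that ID back in its reply. A
minimal always-allow sketch of the quoted script adapted to that protocol
(illustrative only, not a replacement for the real LDAP/MySQL logic) would be:

#!/usr/bin/perl
use strict;
use warnings;

$| = 1;   # unbuffered output so Squid sees each reply immediately

while (defined(my $line = <STDIN>)) {
    chomp $line;
    # With concurrency=N each request arrives as "channel-ID key1 key2 ..."
    # and the reply must start with the same channel ID.
    my ($channel, $keys) = split(/\s+/, $line, 2);
    # Always allow; a real helper would check $keys (%DST %SRC) here.
    print "$channel OK\n";
}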

On Mon, Jul 9, 2012 at 6:32 PM, ml ml mliebher...@googlemail.com wrote:
 Hello List,

 I am using a Perl script for an ACL like this:

 external_acl_type ldap_surfer negative_ttl=60  ttl=60 children=200
 %DST %SRC /etc/squid/ldap_default_allow.pl
 acl ldap_users external ldap_surfer
 http_access allow ldap_users

 However, after a squid upgrade from squid-3.1.0.14 to squid-3.1.19 I
 am getting DENIED requests. When I turn on ACL debugging I see this:
 ACL::ChecklistMatches: result for 'ldap_users' is -1

 My /etc/squid/ldap_default_allow.pl Perl script might not be the best
 (I am doing some LDAP and MySQL stuff in there), so I modified it to
 a very simple script:


 #!/usr/bin/perl
 use strict;

 $| = 1;
 while (defined(my $INPUT = <STDIN>)) {
     print "OK\n";
     next;
 }


 I have about 300 clients and the traffic is quite high. I have the
 feeling that squid or the script is not very efficient.
 Can I use concurrency=X here with this Perl script? Am I using the
 syntax right? Or am I doing anything wrong?

 Thanks,
 Mario


Re: [squid-users] squid_session problem

2012-07-10 Thread Jack Black
Thank you for the abundance of information you have provided - I'm
going through it all now to get a better understanding of
squid_session and how it works, as well as the external_acl_type
directive.

On Mon, Jul 9, 2012 at 9:46 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 10.07.2012 15:12, Jack Black wrote:

 On Mon, Jul 9, 2012 at 7:48 PM, Amos Jeffries wrote:

 On 10.07.2012 13:18, Jack Black wrote:


 Hi.

 Has anyone successfully used squid, and the squid_session helper in
 order to force users of the proxy server to see a webpage (be
 redirected to it) at the beginning of each session?



 Yes, many.



 After spending weeks trying to get this to work, I was finally
 successful using squid version 3.1.10 on CentOS. Unfortunately, I need
 it to work on Ubuntu Server 12.04 with squid version 3.1.19 instead,
 and doing exactly the same thing as I did in CentOS, this fails to
 work on the Ubuntu Server, and my /var/log/squid3/cache.log has a line
 similar to:

 externalAclLookup: 'session' queue overload (ch=0x)



 HINT: queue overload - you are getting more requests per second than
 the helper can reply to. Even with TTL > 0.

 I'm a bit suspicious that with children=1 the queue is only 2 entries long
 (not good), since it is based on the number of helpers and seems not to
 account for concurrency. The log message could be due to that, but that would
 not allow requests past the splash page, quite the opposite in fact.


 How can you tell the queue is only 2 entries long?


 The queue is 2x n_children, or 2x1 in length, with 200x n_children slots of
 concurrency to use up before the queue starts filling. It should not matter
 if the helper is fast enough, but it explains that the queue overload message
 will appear if you have 202 lookups waiting for a response.



 Am I missing
 something? My main focus during the weeks I spent getting this to work
 was getting squid to talk to Cisco using WCCP. I am still far from
 understanding exactly how this helper works, and am no squid expert.


 The helper maintains a database of previously seen strings/sessions. In your
 case a database of previously seen IP addresses (%SRC).


  When there is no existing entry in the session database, one is created
 immediately and ERR/no-match is returned. http_access results in a no-match,
 and !session makes that a pass, so the deny is done and the splash page is
 displayed.
  When a lookup is done the database is checked; any existing entry results
 in OK/match and the DB record is updated to keep it for another 900 seconds (-t
 900). http_access results in a match, and !session makes that a fail, so the
 deny is not done.
  Combining these, you can see that the first request gets the splash page,
 and the following ones get the requested content.

 Using %SRC means any packets arriving from that IP can do the above, and
 you have almost no control over whether an actual user or some blind
 automated software is getting the splash page.
  It is extremely easy for automated bits (I listed a few earlier) to end up
 with the splash page and users not even noticing the sessions. Which is
 what you described happening.



 I
 found the lines below for configuring squid_session in squid.conf
 online (I didn't write them), and only slightly changed them to work
 for me. The only instructions I can find online for how the
 squid_session helper works is an exact copy of the man page for
 squid_session, which only has one example and not much explanation for
 what the different values mean on the line that starts with
 external_acl_type (the man page has no mention of ttl, negative ttl,
 children, etc...).


 Those are all options for the external_acl_type directive. Not the session
 helper itself. External ACL is an interface used by Squid to run custom
 scripts that do complex access control things.
  It is documented at
 http://www.squid-cache.org/Doc/config/external_acl_type/


 There is a bit more documentation about the session helper and how it does
 splash pages at
 http://wiki.squid-cache.org/ConfigExamples/Portal/Splash




 Oh and while my final goal is to get this working with WCCP, right now
 I'm testing without it so as to keep things simple.




 for every http request my client sends (so a lot of those lines). The
 client is forwarded through the squid proxy directly to the page they
 request every time, and the splash page is always ignored. Here are
 the relevant lines from squid.conf:

 external_acl_type session ttl=300 negative_ttl=0 children=1
 concurrency=200 %SRC /usr/lib/squid3/squid_session -t 900
 acl session external session
 http_access deny !session
 deny_info http://example.com session



 Does anyone know the problem? Am I doing something wrong?

 Tal



 The splash page appears only at the beginning of a browsing session. If their
 requests are less than 900 seconds apart, the existing session is extended
 another 900 seconds from that time.

  * you are making a session based on any HTTP request made by that IP
 

Re: [squid-users] Only Debug/Log TCP_DENIED/403

2012-07-10 Thread Amos Jeffries

On 11.07.2012 02:00, Alan wrote:

It's written clearly in the manual:
access_log module:place [logformat name [acl acl ...]]

In your case:
acl DENY_ACCESS http_status 403
access_log squid DENY_ACCESS

The "squid" name refers to a predefined logformat; see
http://www.squid-cache.org/Doc/config/logformat/


You are missing the module:place parameters. He had the access_log line
right the first time.





On Tue, Jul 10, 2012 at 10:23 PM, ml ml wrote:

Hello Amos,

Thanks. I am using Squid version 3.1.19 and these rules:

acl DENY_ACCESS http_status 403
access_log daemon:/var/log/squid/DENY.log squid DENY_ACCESS

However I get:

2012/07/10 15:18:13| aclParseAclList: ACL name 'DENY_ACCESS' not 
found.

FATAL: Bungled squid.conf line 695: access_log
daemon:/var/log/squid/DENY.log squid DENY_ACCESS
Squid Cache (Version 3.1.19): Terminated abnormally.

What am I doing wrong here?



The problem is something about the acl line. It looks fine in the snippet you
posted here.


We have had similar issues with perfectly valid-looking configs years
ago. In those cases it turned out to be a Unicode-enabled text editor or
GUI editor adding binary characters or Unicode letters into squid.conf.
Check for that kind of thing on the acl definition line.


Also check that the acl line is above the access_log line in config, 
like you posted here.


Run squid -k parse and see if there is anything earlier in the config 
file clashing.
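
If nothing obvious turns up, retyping the two directives by hand in a
plain-ASCII editor is a simple way to rule that out (the same lines already
shown above, repeated here only as a clean reference to compare against):

acl DENY_ACCESS http_status 403
access_log daemon:/var/log/squid/DENY.log squid DENY_ACCESS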



Amos