Fwd: Re: [squid-users] Squid Reverse Proxy (accel) always contacting the server

2012-04-02 Thread Daniele Segato

(re-send, sent off-list as a mistake)

On 04/01/2012 03:21 AM, Amos Jeffries wrote:

revalidation is more of a threshold which gets set on each object. Under
the threshold no validation takes place; above it, every request gets
validated. BUT ... a 304 response revalidating the object can change the
threshold by sending new timestamp and caching headers.


Thank you, I now managed to do exactly what I need.

I still have 2 little issues but I'll open another thread for those :)
You've been very helpful.


You have the two options of max-age or Expires. The thing to remember is
to increment the value / threshold forward to the next point where you
want revalidation to take place.

with max-age: an N value which you generate dynamically by calculating
the current age of the object when responding and adding 60.

with Expires: you simply emit a timestamp of now() + 60 seconds on each
response.
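As a sketch of the second approach, a backend might emit the headers like this (a minimal shell example, not from the thread; assumes GNU date for the -d offset syntax):

```shell
# Emit caching headers for a 60-second freshness window, computed per response.
max_age=60
date_hdr=$(date -u '+%a, %d %b %Y %H:%M:%S GMT')
expires_hdr=$(date -u -d "+${max_age} seconds" '+%a, %d %b %Y %H:%M:%S GMT')

# Cache-Control wins for HTTP/1.1 caches; Expires covers old HTTP/1.0 software.
printf 'Cache-Control: public, max-age=%s\r\n' "$max_age"
printf 'Date: %s\r\n' "$date_hdr"
printf 'Expires: %s\r\n' "$expires_hdr"
```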


yes, I experimented.. I think 60 seconds is perfect for max-age, and I got
rid of the Expires time; it's overridden by max-age anyway.

I also set up Vary and Last-Modified headers.
And added age (always 0) and Date (always now) on my server response.

Squid3 is now caching my RESTful service (GET) perfectly
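Put together, the response header set described here might look roughly like this (timestamps and the Vary value are illustrative, not from the thread):

```
HTTP/1.1 200 OK
Date: Mon, 02 Apr 2012 07:35:00 GMT
Age: 0
Last-Modified: Mon, 02 Apr 2012 07:30:00 GMT
Cache-Control: public, max-age=60
Vary: Accept-Encoding
```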



Other useful things to know:
Generating an ETag label for each unique output helps caches detect
unique versions without timestamp calculations. The easy ways to do this
are to make the ETag an MD5 hash of the body object, or a hash of the
Last-Modified timestamp string if the body is too expensive to compute an
MD5 for, or some other property of the resource which is guaranteed to
change any time the body changes and not otherwise.
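A minimal sketch of the first option (an MD5 of the body; the temp-file handling is illustrative):

```shell
# Compute a strong ETag as the quoted MD5 of the entity body.
body=$(mktemp)
printf 'hello world' > "$body"

# md5sum prints "<hash>  <file>"; keep only the hash, wrap in quotes per HTTP.
etag="\"$(md5sum "$body" | cut -d' ' -f1)\""
printf 'ETag: %s\n' "$etag"

rm -f "$body"
```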


Yeah, that would be the next step, but it's a little complicated for
me to extract something that makes sense as an ETag; when I'm able to, I
will



Cache-Control: stale-while-revalidate tells caches to revalidate, but not
to block the client response waiting for that validation to finish.
Clients will get the old object until a new one or a 304 is received back.
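As a concrete example (values illustrative), the directive from RFC 5861 rides on the normal Cache-Control response header:

```
Cache-Control: public, max-age=60, stale-while-revalidate=30
```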


that's really interesting but I didn't find anything about it here:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html

is it standard?

thanks

do you, by any chance, know how to tell the cache to return a stale
value while the server is not responsive, until it comes back
online?

this would be wonderful because it would allow me to take down the
server for maintenance without having a service interruption.




2) What is the best way to debug why squid3 decides to keep a
cache entry or contact the server? Looking at the huge debug log
is not very simple; maybe some log option to filter out only the
cache decision information would help


debug_options 22,3
... or maybe 22,5 if there is not enough at level 3.



perfect!!!

where can I find a list of sections id and their meaning?



[squid-users] ACL based on XFF

2012-04-02 Thread Sekar Duraisamy
Hello All,

Can I create an ACL based on XFF?

Since squid is placed after the load balancer, it will send the XFF, and
the LB IP as the source IP, for all requests. So I want to put an ACL
based on the XFF.

Is this possible?

Thanks in Advance,
Sekar


Re: [squid-users] squid + sslbump compile errors

2012-04-02 Thread Michael Hendrie

On 06/02/2012, at 10:08 AM, Henrik Nordström wrote:

 sön 2012-02-05 klockan 14:09 -0600 skrev James R. Leu:
 
 certificate_db.cc: In member function ‘void Ssl::CertificateDb::load()’:
 certificate_db.cc:455:1: error: ‘index_serial_hash_LHASH_HASH’ was not 
 declared in this scope
 
 Hm.. fails for me as well. Please try the attached patch.

Getting the same error as the original poster with 3.2.0.16.  The patch fixes 
part of the errors but not all.  Remaining are:

certificate_db.cc: In member function ‘bool 
Ssl::CertificateDb::deleteInvalidCertificate()’:
certificate_db.cc:522: error: invalid conversion from ‘void*’ to ‘const _STACK*’
certificate_db.cc:522: error:   initializing argument 1 of ‘void* 
sk_value(const _STACK*, int)’
certificate_db.cc: In member function ‘bool 
Ssl::CertificateDb::deleteOldestCertificate()’:
certificate_db.cc:553: error: invalid conversion from ‘void*’ to ‘const _STACK*’
certificate_db.cc:553: error:   initializing argument 1 of ‘void* 
sk_value(const _STACK*, int)’
certificate_db.cc: In member function ‘bool 
Ssl::CertificateDb::deleteByHostname(const std::string)’:
certificate_db.cc:570: error: invalid conversion from ‘void*’ to ‘const _STACK*’
certificate_db.cc:570: error:   initializing argument 1 of ‘void* 
sk_value(const _STACK*, int)’

This is with Scientific Linux 6.1 (x86_64):
OpenSSL 1.0.0-fips 29 Mar 2010
gcc version 4.4.5 20110214 (Red Hat 4.4.5-6) (GCC) 


 
 Regards
 Henrik
 
 openssl-1.0.0g.diff



Re: [squid-users] Allowing linked sites - NTLM and un-authenticated users

2012-04-02 Thread Amos Jeffries

On 2/04/2012 5:54 p.m., Jasper Van Der Westhuizen wrote:


-Original Message-
From: Amos Jeffries

On 30/03/2012 11:45 p.m., Jasper Van Der Westhuizen wrote:

Hi everyone

I've been struggling to get a very specific setup going.

Some background:  Our users are split into Internet users and non-Internet users. 
Everyone in a specific AD group is allowed to have full internet access. I have two SQUID proxies with 
squidGuard, load balanced, with NTLM authentication to handle the group authentication. All traffic also then 
gets sent to a cache peer.

This is basically what I need:
1. All users(internet and non-internet) must be able to access sites in 
/etc/squid/lists/whitelist.txt
2. If a user wants to access any external site that is not in the whitelist then he 
must be authenticated. Obviously a non-internet user can try until he is 
blue in the face, it won't work.

These two scenarios are working 100%, except for one irritating bit. Most of the 
whitelisted sites have linked websites like facebook or twitter or youtube in 
them that load icons and graphics or ads etc. This causes an auth-prompt for non-internet 
users. I can see the requests in the logs being DENIED.

The only way I could think of getting rid of these errors was to
implement a http_access deny !whitelist after the allow. This works
great for non-internet users and it blocks all the linked sites
without asking to authenticate, but obviously this breaks access to
all other sites for authenticated users.(access denied for all sites)

You can use the all hack and two login lines:

http_access allow whitelist   # allow, but don't challenge if missing auth
http_access allow authed all  # block access to some sites unless already logged in
http_access deny blacklist
http_access deny !authed


The authed users may still have problems logging in if the first site they visit is one of 
the blacklisted ones. But if they visit another page first they can log in and 
get there.


Amos

Hi Amos

Thank you for the reply.

I think I already tried this method but it still fails. In any case I tried 
what you suggested and the problem remains: my 
unauthenticated (non-internet) users can get to the whitelisted sites just fine, 
but they still get authentication prompts for the linked content, like facebook 
and youtube, that the site contains. An example of a site is 
http://www.triptrack.co.za/ and you will see what I mean. At the bottom right 
of the site there are links to facebook and youtube. Those links cause an 
authentication request for the unauthenticated (non-internet) users. I can't 
have these prompts appear for these users. They have a set list of sites they 
can visit; it should work for them without their being asked to 
authenticate. Only once they try to go directly to sites that are not in the 
whitelist should they be prompted, and obviously denied since they are not 
included in the AD group.


The problem of course is that they *are* going directly to the 
blacklisted sites when they load an object from those sites, even if the 
object was embedded in some third-party whitelisted site's HTML.
The HTTP protocol makes no distinction about how HTML, XML, or Flash 
document structures group objects. All Squid sees is a request for an 
object on a non-whitelisted site.




Current rules:
http_access allow whitelist
http_access allow authenticated all
http_access deny blacklist
http_access deny !authenticated

Kind Regards
Jasper






Re: [squid-users] ACL based on XFF

2012-04-02 Thread Amos Jeffries

On 2/04/2012 7:15 p.m., Sekar Duraisamy wrote:

Hello All,

Can I create an ACL based on XFF?


Yes.

Now what do you mean by based on?


Since squid is placed after the load balancer, it will send the XFF, and
the LB IP as the source IP, for all requests. So I want to put an ACL
based on the XFF.

Is this possible?


This is the purpose of XFF header and the follow_x_forwarded_for directive.

This config:
  acl LB src <your LB IP address>
  follow_x_forwarded_for allow LB
  follow_x_forwarded_for deny all

With the LB setting the XFF header correctly the above will make Squid 
see and use the IP of clients on other side of the LB.
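A fuller sketch (the addresses are placeholders, not from the thread): once follow_x_forwarded_for accepts the LB's header, ordinary src ACLs evaluate against the original client IP, which also covers blocking particular clients:

```
# Trust XFF only when the request comes from the load balancer
acl LB src 192.0.2.10
follow_x_forwarded_for allow LB
follow_x_forwarded_for deny all

# src ACLs now match the client IP taken from XFF
acl blocked_clients src 203.0.113.0/24
http_access deny blocked_clients
```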


Amos


Re: [squid-users] Re: Squid Reverse Proxy (accel) always contacting the server

2012-04-02 Thread Daniele Segato

On 04/02/2012 03:22 AM, Amos Jeffries wrote:

Last-Modified: date here, should change when the content change
Cache-Control: public, max-age=60

60 = 60 seconds, means: squid please do not bother the server for 60
seconds after this reply, even if they ask for If-Modified-Since


Small correction: means don't ask again until 60 seconds from
Last-Modified. If Last-Modified is missing or invalid, 60 seconds from
Date:.



that's not what I've seen

I returned Last-Modified (very old), Date: now and max-age=60

squid3 is not checking the server again for 1 minute; then when it does, 
it keeps replying without checking the server for another minute, and so on.


Is it because I specified Age: 0 and Date now?


I also added Age: 0 (I tell squid that I'm providing fresh content).
And Date: with the current date; I think this also tells squid the
content is fresh.
Not sure those are needed, but they probably help.


Tells when the response was generated, in case of transfer delays. Acts
as a backup for Last-Modified as above, and a value to synchronise
Expires: comparisons between proxies and servers despite any clock
difference problems.


My server returns Age: 0 and Date: now; that should do, right?



On the squid side I configured: refresh_pattern regex 0 20% 4320

without adding any other option, this was perfectly fine.


refresh_pattern provides default values for max-age / min-age and the next
revalidation time if none are provided by the combination of cache control
headers discussed above. When Expires: or Cache-Control: are sent, the
refresh_pattern values are not used.


In the log it says:
2012/04/02 07:35:47.326| refreshCheck: Matched '/alfresco/service/stream 
0 20%% 259200'


are you saying this is ignored?

I tried setting that rule with 0 20% 0 and I got all TCP_MISS,

so apparently the rule wins against the http headers.

or maybe I misunderstood you :)

thanks again,
Daniele


Re: [squid-users] Squid Reverse Proxy (accel) always contacting the server

2012-04-02 Thread Daniele Segato

On 04/02/2012 02:04 AM, Amos Jeffries wrote:

yes I experimented.. I think 60 seconds is perfect for max-age and I
get rid of Expires time, it's overridden by the max-age anyway.


For Squid-3.1+ yes that is true, older HTTP/1.0 software only obeys
Expires:. So it is a matter of whether you want to further leverage any
old software caches around the 'Net your users might be behind.


good to know!
I don't need support for old HTTP/1.0 but I'll keep it in mind, thanks


that's really interesting but I didn't find anything about it here:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html

is it standard?



Yes. http://tools.ietf.org/html/rfc5861

NP: Squid-3 is not obeying it properly yet, but other caches around the
'Net do. So it's incrementally useful already, and when we roll it into
Squid the gain will be immediate wherever it's used.


I wonder why the w3c doesn't list it.

thanks! I'll integrate it as soon as possible

when you say squid3 does not obey it properly, what exactly do you mean?


Cache-Control: stale-if-error=N, also documented in RFC 5861. Squid-3.2
obeys this one already. Sorry, no 3.1 support.


our squid3 production server is a 3.1, but I'll implement it so that it 
starts working when we upgrade!

thanks again, you've been of great help.


http://wiki.squid-cache.org/KnowledgeBase/DebugSections


perfect!

ciao,
Daniele


Re: [squid-users] ACL based on XFF

2012-04-02 Thread Sekar Duraisamy
Thanks Amos. Actually my load balancer will send the XFF with the source
information, so I will use the XFF as the source to block users instead
of the IP.

Is this possible?

-Sekar

On Mon, Apr 2, 2012 at 1:03 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 2/04/2012 7:15 p.m., Sekar Duraisamy wrote:

 Hello All,

 Can create an ACL based on XFF?


 Yes.

 Now what do you mean by based on?


 Since the squid placed  after the loadbancer, it will send the XFF and
 LB ip as source ip for all the request. So I want to put ACL based on
 XFF.

 Is this possible?


 This is the purpose of XFF header and the follow_x_forwarded_for directive.

 This config:
  acl LB src your LB IP address
  follow_x_forwarded_for allow LB
  follow_x_forwarded_for deny all

 With the LB setting the XFF header correctly the above will make Squid see
 and use the IP of clients on other side of the LB.

 Amos


Re: [squid-users] squid + sslbump compile errors

2012-04-02 Thread Henrik Nordström
mån 2012-04-02 klockan 16:47 +0930 skrev Michael Hendrie:
 On 06/02/2012, at 10:08 AM, Henrik Nordström wrote:
 
  sön 2012-02-05 klockan 14:09 -0600 skrev James R. Leu:
  
  certificate_db.cc: In member function ‘void Ssl::CertificateDb::load()’:
  certificate_db.cc:455:1: error: ‘index_serial_hash_LHASH_HASH’ was not 
  declared in this scope
  
  Hm.. fails for me as well. Please try the attached patch.
 
 Getting the same error as the original poster with 3.2.0.16.  Patch fixes 
 part of the errors but not all.  Remaining is :
 
 certificate_db.cc: In member function ‘bool 
 Ssl::CertificateDb::deleteInvalidCertificate()’:
 certificate_db.cc:522: error: invalid conversion from ‘void*’ to ‘const 
 _STACK*’
 certificate_db.cc:522: error:   initializing argument 1 of ‘void* 
 sk_value(const _STACK*, int)’
 certificate_db.cc: In member function ‘bool 
 Ssl::CertificateDb::deleteOldestCertificate()’:
 certificate_db.cc:553: error: invalid conversion from ‘void*’ to ‘const 
 _STACK*’
 certificate_db.cc:553: error:   initializing argument 1 of ‘void* 
 sk_value(const _STACK*, int)’
 certificate_db.cc: In member function ‘bool 
 Ssl::CertificateDb::deleteByHostname(const std::string)’:
 certificate_db.cc:570: error: invalid conversion from ‘void*’ to ‘const 
 _STACK*’
 certificate_db.cc:570: error:   initializing argument 1 of ‘void* 
 sk_value(const _STACK*, int)’
 
 This is with Scientific Linux 6.1 (x86_64):
 OpenSSL 1.0.0-fips 29 Mar 2010
 gcc version 4.4.5 20110214 (Red Hat 4.4.5-6) (GCC) 

The problem is due to a RedHat patch to OpenSSL 1.0 where OpenSSL lies
about its version. Not yet sure what is the best way to solve this, but
I guess we need to make configure probe for these OpenSSL features
instead of relying on the advertised version, if we want to support
--enable-ssl-crtd on these OS versions.

It should be fixed in Fedora rawhide, but apparently can't be fixed for
released versions of Fedora or RHEL that have the hacked openssl version.

Regards
Henrik



Re: [squid-users] squid + sslbump compile errors

2012-04-02 Thread Michael Hendrie

On 02/04/2012, at 6:29 PM, Henrik Nordström wrote:

 mån 2012-04-02 klockan 16:47 +0930 skrev Michael Hendrie:
 On 06/02/2012, at 10:08 AM, Henrik Nordström wrote:
 
 sön 2012-02-05 klockan 14:09 -0600 skrev James R. Leu:
 
 certificate_db.cc: In member function ‘void Ssl::CertificateDb::load()’:
 certificate_db.cc:455:1: error: ‘index_serial_hash_LHASH_HASH’ was not 
 declared in this scope
 
 Hm.. fails for me as well. Please try the attached patch.
 
 Getting the same error as the original poster with 3.2.0.16.  Patch fixes 
 part of the errors but not all.  Remaining is :
 
 certificate_db.cc: In member function ‘bool 
 Ssl::CertificateDb::deleteInvalidCertificate()’:
 certificate_db.cc:522: error: invalid conversion from ‘void*’ to ‘const 
 _STACK*’
 certificate_db.cc:522: error:   initializing argument 1 of ‘void* 
 sk_value(const _STACK*, int)’
 certificate_db.cc: In member function ‘bool 
 Ssl::CertificateDb::deleteOldestCertificate()’:
 certificate_db.cc:553: error: invalid conversion from ‘void*’ to ‘const 
 _STACK*’
 certificate_db.cc:553: error:   initializing argument 1 of ‘void* 
 sk_value(const _STACK*, int)’
 certificate_db.cc: In member function ‘bool 
 Ssl::CertificateDb::deleteByHostname(const std::string)’:
 certificate_db.cc:570: error: invalid conversion from ‘void*’ to ‘const 
 _STACK*’
 certificate_db.cc:570: error:   initializing argument 1 of ‘void* 
 sk_value(const _STACK*, int)’
 
 This is with Scientific Linux 6.1 (x86_64):
 OpenSSL 1.0.0-fips 29 Mar 2010
 gcc version 4.4.5 20110214 (Red Hat 4.4.5-6) (GCC) 
 
 The problem is due to a RedHat patch to OpenSSL 1.0 where OpenSSL lies
 about its version. Not yet sure what is the best way to solve this, but
 I guess we need to make configure probe for these OpenSSL features
 instead of relying on the advertised version, if we want to support
 --enable-ssl-crtd on these OS versions.

Thanks for the info, I have used the '--with-openssl=' configure option to 
compile against a different OpenSSL version (1.0.0g) and this compiled without 
error.
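Michael's workaround can be sketched like this (the install prefix is hypothetical; --with-openssl and --enable-ssl-crtd are the options discussed in this thread):

```
# Build squid against a self-built OpenSSL instead of the distro's patched one
./configure --with-openssl=/opt/openssl-1.0.0g --enable-ssl --enable-ssl-crtd
make
make install
```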

 
 It should be fixed in Fedora rawhide, but apparently can't be fixed for
 released versions of Fedora or RHEL having the hacked openssl version.
 
 Regards
 Henrik
 



Re: [squid-users] ACL based on XFF

2012-04-02 Thread Amos Jeffries

On 2/04/2012 8:24 p.m., Sekar Duraisamy wrote:

Thanks Amos. Actually my load balancer will send the XFF with the source
information, so I will use the XFF as the source to block users instead
of the IP.

Is this possible?


Try using the config lines I gave.

Amos



-Sekar

On Mon, Apr 2, 2012 at 1:03 PM, Amos Jeffries wrote:

On 2/04/2012 7:15 p.m., Sekar Duraisamy wrote:

Hello All,

Can create an ACL based on XFF?


Yes.

Now what do you mean by based on?



Since the squid placed  after the loadbancer, it will send the XFF and
LB ip as source ip for all the request. So I want to put ACL based on
XFF.

Is this possible?


This is the purpose of XFF header and the follow_x_forwarded_for directive.

This config:
  acl LB src <your LB IP address>
  follow_x_forwarded_for allow LB
  follow_x_forwarded_for deny all

With the LB setting the XFF header correctly the above will make Squid see
and use the IP of clients on other side of the LB.

Amos




Re: [squid-users] Re: Squid Reverse Proxy (accel) always contacting the server

2012-04-02 Thread Amos Jeffries

On 2/04/2012 8:05 p.m., Daniele Segato wrote:

On 04/02/2012 03:22 AM, Amos Jeffries wrote:

Last-Modified: date here, should change when the content change
Cache-Control: public, max-age=60

60 = 60 seconds, means: squid please do not bother the server for 60
seconds after this reply, even if they ask for If-Modified-Since


Small correction: means don't ask again until 60 seconds from
Last-Modified. If Last-Modified is missing or invalid, 60 seconds from
Date:.



that's not what I've seen

I returned Last-Modified (very old), Date: now and max-age=60

squid3 is not checking the server again for 1 minute; then when it 
does, it keeps replying without checking the server for another minute, 
and so on.


Is it because I specified Age: 0 and Date now?


Possibly. There are a few bugs still in Squid's HTTP/1.1 compliance. You 
may have lucked out and hit one :)





I also added Age: 0 (I tell squid that I'm providing fresh content).
And Date: with the current date; I think this also tells squid the
content is fresh.
Not sure those are needed, but they probably help.


Tells when the response was generated, in case of transfer delays. Acts
as a backup for Last-Modified as above, and a value to synchronise
Expires: comparisons between proxies and servers despite any clock
difference problems.


My server returns Age: 0 and Date: now; that should do, right?


Since it works, sure. Technically Age is a header only sent by caches, 
not origin servers. Things are a bit convoluted when it's present.






On the squid side I configured: refresh_pattern regex 0 20% 4320

without adding any other option, this was perfectly fine.


refresh_pattern provides default values for max-age / min-age and next
revalidate time if none are provided by the combination of cache control
headers discussed above. When Expires: or Cache-Control: are sent
refresh_pattern value is not used.


In the log it says:
2012/04/02 07:35:47.326| refreshCheck: Matched 
'/alfresco/service/stream 0 20%% 259200'


are you saying this is ignored?

I tried setting that rule with 0 20% 0 and I got all TCP_MISS,

so apparently the rule wins against the http headers.

or maybe I misunderstood you :)


Seems to be a bug, or a case of min(refresh_pattern max-age, headers 
max-age) that I overlooked.


Amos


RE: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6 exchange2007 with ntlm

2012-04-02 Thread Clem
Re,

I've found the option that generates the issue, and only with Windows 7: in the Outlook proxy 
http settings window, we have this checked automatically: connect only to 
proxy servers whose certificate uses this principal (common) name:
Msstd : externalfqdn

When I uncheck this option, my Outlook (2007/2010) can connect through squid 
with ntlm to my Exchange via Outlook Anywhere. If it's checked I get a: 
server is unavailable.
In Windows XP, checked or not, it works.

By the way, after the connection to Exchange succeeds in W7, that option rechecks 
itself automatically ...

The point is, why? Maybe Windows 7 is more paranoid with certificates?

Have you an idea ?

Regards

Clem

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday 27 March 2012 23:27
To: squid-users@squid-cache.org
Subject: RE: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6 
exchange2007 with ntlm

On 27.03.2012 21:31, Clem wrote:
 Hi Amos,

 Administrateur is the french AD name for Administrator :)


Yes. I'm just wondering if it is correct for what your IIS is checking against.

Amos



Re: [squid-users] ACL based on XFF

2012-04-02 Thread Amos Jeffries

On 3/04/2012 1:13 a.m., Sekar Duraisamy wrote:

This will allow the XFF header from the LB requests to squid. How do I block
the original users in squid with the XFF information?

I mean the ACL configuration, please...


Exactly as you would if the clients had connected to Squid directly. 
Using the src ACL type.


I'm not sure what your confusion is. Have you added the 
follow_x_forwarded_for rules yet and seen what they do?




This is the purpose of XFF header and the follow_x_forwarded_for
directive.

This config:
  acl LB src <your LB IP address>
  follow_x_forwarded_for allow LB
  follow_x_forwarded_for deny all

With the LB setting the XFF header correctly the above will make Squid
see
and use the IP of clients on other side of the LB.

Amos




Re: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6 exchange2007 with ntlm

2012-04-02 Thread Amos Jeffries

On 3/04/2012 1:33 a.m., Clem wrote:

Re,

I've found the option that generate issue only with windows7, in outlook proxy 
http settings window, we have this checked automatically : connect only to 
server proxy certificate that use this principal (common) name :
Msstd : externalfqdn

When I uncheck this option, my outlook (2007/2010) can connect trough squid 
with ntlm in my Exchange via outlook anywhere, If it's checked I've got a : 
server is unavailable.
In windows XP, checked or not, that works.

By the way, after connection to exchange succeed in w7, that option rechecks 
itself automatically ...

The point is, why ? Maybe windows7 is more paranoid with certificate ??

Have you an idea ?


Strange. Smells like a bug in Windows7 or a domain policy being pushed out.

Does the FRONT_END_HTTPS cache_peer setting make any change to that 
flag's behaviour?


Amos



RE: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6 exchange2007 with ntlm

2012-04-02 Thread Clem
Does the FRONT_END_HTTPS cache_peer setting make any change to that flag's 
behaviour?

Whether I write this option in cache_peer or not, no change ...

-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Monday 2 April 2012 16:00
To: squid-users@squid-cache.org
Subject: Re: [squid-users] https analyze, squid rpc proxy to rpc proxy ii6 
exchange2007 with ntlm

On 3/04/2012 1:33 a.m., Clem wrote:
 Re,

 I've found the option that generate issue only with windows7, in outlook 
 proxy http settings window, we have this checked automatically : connect only 
 to server proxy certificate that use this principal (common) name :
 Msstd : externalfqdn

 When I uncheck this option, my outlook (2007/2010) can connect trough squid 
 with ntlm in my Exchange via outlook anywhere, If it's checked I've got a : 
 server is unavailable.
 In windows XP, checked or not, that works.

 By the way, after connection to exchange succeed in w7, that option rechecks 
 itself automatically ...

 The point is, why ? Maybe windows7 is more paranoid with certificate ??

 Have you an idea ?

Strange. Smells like a bug in Windows7 or a domain policy being pushed out.

Does the FRONT_END_HTTPS cache_peer setting make any change to that flag's 
behaviour?

Amos



[squid-users] Are dns_v4_first and acl to_ipv6 dst ipv6 mutually exclusive?

2012-04-02 Thread Peter Olsson
Hello!

Squid 3.1.19.

Our squid servers are dual stack IPv4/IPv6 since about a year,
with this config hack:

tcp_outgoing_address x:x:x:x::x to_ipv6
tcp_outgoing_address x.x.x.x !to_ipv6
acl to_ipv6 dst ipv6
http_access allow to_ipv6 !all

But now our users are tired of websites that announce IPv6 addresses
but don't answer on port 80 on those addresses. So I enabled
dns_v4_first in the config and did squid -k reconfigure.
But it didn't help; we still get IPv6 timeouts towards
misconfigured web sites.

I'm guessing that dns_v4_first and the ipv6 config above are
mutually exclusive? Should I change the tcp_outgoing_address
line to just this:
tcp_outgoing_address x:x:x:x::x
tcp_outgoing_address x.x.x.x
and remove these lines:
acl to_ipv6 dst ipv6
http_access allow to_ipv6 !all

Or will this remove all of our IPv6 connectivity through squid?

Thanks!

-- 
Peter Olsson    p...@leissner.se


Re: [squid-users] limiting connections

2012-04-02 Thread Carlos Manuel Trepeu Pupo
Thanks a looot!! That's what I was missing; everything works
fine now, so I can use this script since it already works.

Now I need to know if there is any way to query the active requests
in squid that works faster than squidclient

On Sat, Mar 31, 2012 at 9:58 PM, Amos Jeffries squ...@treenet.co.nz wrote:
 On 1/04/2012 7:58 a.m., Carlos Manuel Trepeu Pupo wrote:

 On Sat, Mar 31, 2012 at 4:18 AM, Amos Jeffriessqu...@treenet.co.nz
  wrote:

 On 31/03/2012 3:07 a.m., Carlos Manuel Trepeu Pupo wrote:


 Now I have the following question:
 The possible answers to return are 'OK' or 'ERR'; if I take them as a
 Boolean answer, OK = TRUE and ERR = FALSE. Is this right?


 Equivalent, yes. Specifically it means success / failure or match /
 non-match on the ACL.


 So, if I deny my acl:
 http_access deny external_helper_acl

 it works like this (with the http_access below):
 If it returns OK  - it is denied
 If it returns ERR - it is not denied

 Is this right? Thanks again for the help!!!


 Correct.

 OK, following the idea of this thread that's what I have:

 #!/bin/bash
 while read line; do
         # -  This is for debug (testing I saw that it does not always
 save to file; maybe not every request passes through this ACL)
         echo "$line" >> /home/carlos/guarda

         result=`squidclient -h 10.11.10.18 mgr:active_requests | grep
 -c "$line"`

   if [ "$result" == "1" ]
         then
         echo 'OK'
         echo 'OK' >> /home/carlos/guarda
   else
         echo 'ERR'
         echo 'ERR' >> /home/carlos/guarda
   fi
 done

 In the squid.conf this is the configuration:

 acl test src 10.11.10.12/32
 acl test src 10.11.10.11/32

 acl extensions url_regex /etc/squid3/extensions
 # extensions contains:

 \.(iso|avi|wav|mp3|mp4|mpeg|swf|flv|mpg|wma|ogg|wmv|asx|asf|deb|rpm|exe|zip|tar|tgz|rar|ppt|doc|tiff|pdf)$
 external_acl_type one_conn %URI /home/carlos/contain
 acl limit external one_conn

 http_access allow localhost
 http_access deny extensions !limit
 deny_info ERR_LIMIT limit
 http_access allow test


 I start to download from:
 10.11.10.12 -
  http://ch.releases.ubuntu.com//oneiric/ubuntu-11.10-desktop-i386.iso
 then start from:
 10.11.10.11 -
  http://ch.releases.ubuntu.com//oneiric/ubuntu-11.10-desktop-i386.iso

 And let me download. What I'm missing ???


 You must set ttl=0 negative_ttl=0 grace=0 as options for your
 external_acl_type directive, to disable caching optimizations on the helper
 results.
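Applied to the external_acl_type line quoted earlier in this thread, the directive would become (a sketch; the helper path and %URI format are from the thread):

```
external_acl_type one_conn ttl=0 negative_ttl=0 grace=0 %URI /home/carlos/contain
acl limit external one_conn
```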

 Amos


[squid-users] Authentication problem

2012-04-02 Thread Mohamed Amine Kadimi
Dear Developpers and Community,

I would like to set up the following configuration using squid:

When a user asks for a web page he is transparently redirected to
squid, where an authentication must be done before serving the user
with content.

However, users' IPs are NATed before reaching the proxy. So the
solution would be to use an application-layer verification: cookies or
http headers.

So, I come across the following solutions:

1. Use an ICAP server which checks if a cookie is set, otherwise set
it for an authenticated user
 the problem is: cookies are bound to domains + each http request must
be validated

2. Use a php splash page which sets the cookie then redirect to destination
 same problem as ICAP

3. using squid authentication and checking if Proxy-Authorization
header is set before serving the client
  problem: sessions are associated to the IP by squid

I'm using squid 3.1

Thank you for any idea


[squid-users] squid refresh_pattern - different url with same XYZ package

2012-04-02 Thread Mohsen Saeedi

Hi

I have a problem with squid refresh_pattern. I use a regex in 
refresh_pattern so that, for example, every exe file is cached and 
clients can then download it at a high rate. But when someone downloads 
from certain websites (for example mozilla or filehippo), they are 
redirected to a different URL for the same exe file. For example, 
firefox-version.exe is cached to disk, but when another client sends a 
new request it is automatically redirected to a different URL for 
downloading the same firefox. How can I configure squid for this 
condition?



Thanks,




[squid-users] bash/mysql script not working

2012-04-02 Thread Osmany Goderich
Hi everyone,

Please have a look at this bash/mysql external helper. Can anyone tell me
why is it not working?

#/bin/bash
connect=mysql -h 127.0.0.1 -b squid -u squid -p password -e
url=%DST
while read $url
do
if [ $connect select site from porn where site='$url' ]
then
echo OK
else
echo ERR
fi
done

is there any way I can test this directly on the server's shell?



Re: [squid-users] bash/mysql script not working

2012-04-02 Thread Jose-Marcio Martins da Cruz

Osmany Goderich wrote:

Hi everyone,

Please have a look at this bash/mysql external helper. Can anyone tell me
why is it not working?

#/bin/bash


Maybe

#!/bin/bash

instead of

#/bin/bash

Just a first guess to begin...


connect=mysql -h 127.0.0.1 -b squid -u squid -p password -e
url=%DST
while read $url
do
if [ $connect select site from porn where site='$url' ]
then
echo OK
else
echo ERR
fi
done

is there anyway I can test this directly on the server's shell





Re: [squid-users] bash/mysql script not working

2012-04-02 Thread Andrew Beverley
On Mon, 2012-04-02 at 14:28 -0400, Osmany Goderich wrote:
 Please have a look at this bash/mysql external helper. Can anyone tell me
 why is it not working?
...
 is there anyway I can test this directly on the server's shell
 

Yes, just run it on the shell as you would any other script, and input
the expected values (as specified in squid.conf) followed by a carriage
return. The script should return OK or ERR as appropriate.

Andy
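For reference, a sketch of the helper with the shell bugs fixed (`#!/bin/bash` rather than `#/bin/bash`, `read url` rather than `read $url`). The MySQL invocation is an untested assumption and is stubbed out with a flat-file lookup so the read-loop shape can be exercised on the shell exactly as Andy describes:

```shell
#!/bin/bash
# Sketch of a corrected external ACL helper. The real lookup would be
# something like this mysql call (an assumption: note -D to pick the
# database, and no space after -p):
#   mysql -h 127.0.0.1 -D squid -u squid -ppassword -N -B \
#       -e "SELECT site FROM porn WHERE site='$url'" | grep -q .
# Here a flat file stands in for the database so the loop can be
# tested without MySQL.
BLACKLIST=${BLACKLIST:-/tmp/denied_sites.txt}

blocked() {
    # exact-line, fixed-string match against the deny list
    grep -qxF "$1" "$BLACKLIST" 2>/dev/null
}

helper_loop() {
    local url rest
    while read -r url rest; do   # "read url", never "read $url"
        if blocked "$url"; then
            echo OK              # URL found in the deny list
        else
            echo ERR
        fi
    done
}

# Enter the loop only when run directly with piped input, so the
# functions can also be sourced and tried interactively.
[[ "${BASH_SOURCE[0]}" == "$0" && ! -t 0 ]] && helper_loop || true
```

Tested from the shell as suggested above: `printf 'bad.example.com\n' | ./helper.sh` prints OK or ERR per input line, which is what squid's external_acl_type interface expects.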




Re: [squid-users] Are dns_v4_first and acl to_ipv6 dst ipv6 mutually exclusive?

2012-04-02 Thread Amos Jeffries

On 03.04.2012 02:21, Peter Olsson wrote:

Hello!

Squid 3.1.19.

Our squid servers are dual stack IPv4/IPv6 since about a year,
with this config hack:

tcp_outgoing_address x:x:x:x::x to_ipv6
tcp_outgoing_address x.x.x.x !to_ipv6
acl to_ipv6 dst ipv6
http_access allow to_ipv6 !all

But now our users are tired of webs that announce IPv6 addresses
but don't answer on port 80 on these addresses. So I enabled
dns_v4_first in the config and did squid -k reconfigure.
But it didn't help, we still get IPv6 timeouts towards
misconfigured web sites.

I'm guessing that dns_v4_first and the ipv6 config above are
mutually exclusive? Should I change the tcp_outgoing_address
line to just this:
tcp_outgoing_address x:x:x:x::x
tcp_outgoing_address x.x.x.x
and remove these lines:
acl to_ipv6 dst ipv6
http_access allow to_ipv6 !all

Or will this remove all of our IPv6 connectivity through squid?



You are the first person to report any issues. They are interrelated 
but should not be exclusive. Does ordering the tcp_outgoing_address with 
IPv4 address first help?


Amos



Re: [squid-users] limiting connections

2012-04-02 Thread Amos Jeffries

On 03.04.2012 02:21, Carlos Manuel Trepeu Pupo wrote:

Thanks a lot!! That's what I was missing; everything works
fine now. So I can use this script, since it already works.

Now I need to know if there is any way to consult the active requests
in squid that works faster than squidclient 



ACL types are pretty easy to add to the Squid code. I'm happy to throw 
an ACL patch your way for a few $$.


Which comes back to my earlier, still unanswered question about why you 
want to do this very, very strange thing?


Amos



[squid-users] Logging ACL name with requests

2012-04-02 Thread Will Roberts

Hi,

I'm trying to log the name of the ACL that allowed/denied access for a 
particular request. I have a patch that seems to work fine on all my 
machines except one. On that one machine it'll work fine for several 
hours, but then begins logging other garbage; sometimes parts of URLs, 
other times it's just random bytes. I think my patch is correct and this 
machine has a problem, but I'd appreciate it if someone could take a look.


My real goal is to associate a username with requests that are allowed 
based on a whitelisted IP. I had originally done this using an external 
acl helper, but found that it was too slow and would cause connections 
to randomly fail. So instead I now generate a .conf file that is 
included with my main squid config which looks like this:


acl foo src 10.3.4.0/24
acl foo src 10.4.5.0/24
http_access allow foo

acl bar src 120.3.4.0/24
acl bar src 120.4.5.0/24
http_access allow bar

hence why I'm then trying to log the name of the ACL that allowed the 
connection. If there's a different way of doing that I'm open to 
suggestions.


Here's the patch; I made the ACL name accessible via its own token, 
or as a replacement for the user token when the user is null.


Thanks,
--Will

Index: squid3-3.1.19/src/AccessLogEntry.h
===
--- squid3-3.1.19.orig/src/AccessLogEntry.h	2012-02-05 06:51:32.0 -0500
+++ squid3-3.1.19/src/AccessLogEntry.h	2012-03-29 00:57:22.0 -0400
@@ -96,6 +96,7 @@
 msec(0),
 rfc931 (NULL),
 authuser (NULL),
+aclname (NULL),
 extuser(NULL)
 #if USE_SSL
 ,ssluser(NULL)
@@ -114,6 +115,7 @@
 int msec;
 const char *rfc931;
 const char *authuser;
+const char *aclname;
 const char *extuser;
 #if USE_SSL

Index: squid3-3.1.19/src/access_log.cc
===
--- squid3-3.1.19.orig/src/access_log.cc	2012-02-05 06:51:32.0 -0500
+++ squid3-3.1.19/src/access_log.cc	2012-03-29 01:01:43.0 -0400
@@ -404,6 +404,7 @@
 LFT_TAG,
 LFT_IO_SIZE_TOTAL,
 LFT_EXT_LOG,
+LFT_ACCEPTED_ACL,

 #if USE_ADAPTATION
 LTF_ADAPTATION_SUM_XACT_TIMES,
@@ -561,6 +562,7 @@
 {"et", LFT_TAG},
 {"st", LFT_IO_SIZE_TOTAL},
 {"ea", LFT_EXT_LOG},
+{"ACL", LFT_ACCEPTED_ACL},

 {"%", LFT_PERCENT},

@@ -1017,6 +1019,9 @@
 if (!out)
 out = accessLogFormatName(al->cache.extuser);

+if (!out)
+out = accessLogFormatName(al->cache.aclname);
+
 #if USE_SSL

 if (!out)
@@ -1182,6 +1187,10 @@

 break;

+case LFT_ACCEPTED_ACL:
+out = al->cache.aclname;
+break;
+
 case LFT_PERCENT:
 out = "%";

@@ -1764,6 +1773,9 @@
 if (!user)
 user = accessLogFormatName(al->cache.extuser);

+if (!user)
+user = accessLogFormatName(al->cache.aclname);
+
+
 #if USE_SSL

 if (!user)
@@ -2431,6 +2443,7 @@

 safe_free(aLogEntry->headers.reply);
 safe_free(aLogEntry->cache.authuser);
+safe_free(aLogEntry->cache.aclname);

 safe_free(aLogEntry->headers.adapted_request);
 HTTPMSGUNLOCK(aLogEntry->adapted_request);
Index: squid3-3.1.19/src/client_side.cc
===
--- squid3-3.1.19.orig/src/client_side.cc	2012-02-05 06:51:32.0 -0500
+++ squid3-3.1.19/src/client_side.cc	2012-04-01 22:13:11.0 -0400
@@ -558,6 +558,8 @@

 al.cache.msec = tvSubMsec(start_time, current_time);

+al.cache.aclname = xstrdup( aclname );
+
 if (request)
 prepareLogWithRequestDetails(request, al);

Index: squid3-3.1.19/src/client_side_request.cc
===
--- squid3-3.1.19.orig/src/client_side_request.cc	2012-02-05 06:51:32.0 -0500
+++ squid3-3.1.19/src/client_side_request.cc	2012-04-01 22:13:24.0 -0400

@@ -588,6 +588,8 @@
 else if (http->request->auth_user_request != NULL)
 proxy_auth_msg = http->request->auth_user_request->denyMessage("<null>");

+http->aclname = AclMatchedName;
+
 if (answer != ACCESS_ALLOWED) {
 /* Send an error */
 int require_auth = (answer == ACCESS_REQ_PROXY_AUTH || aclIsProxyAuth(AclMatchedName));

Index: squid3-3.1.19/src/client_side_request.h
===
--- squid3-3.1.19.orig/src/client_side_request.h	2012-02-05 06:51:32.0 -0500
+++ squid3-3.1.19/src/client_side_request.h	2012-03-26 22:54:59.0 -0400

@@ -98,6 +98,7 @@
 HttpRequest *request;  /* Parsed URL ... */
 char *uri;
 char *log_uri;
+const char *aclname;

 struct {
 int64_t offset;

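A hypothetical squid.conf fragment using the new token (the %ACL name is taken from the {ACL, LFT_ACCEPTED_ACL} table entry in the patch; the other format codes are squid's standard ones, and the file path is illustrative):

```
# Illustrative only: default "squid" log fields plus the matched ACL
# name via the patch's %ACL token.
logformat aclfmt %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %un %ACL
access_log /var/log/squid/access.log aclfmt
```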

Re: [squid-users] Are dns_v4_first and acl to_ipv6 dst ipv6 mutually exclusive?

2012-04-02 Thread Peter Olsson
On Tue, Apr 03, 2012 at 10:28:38AM +1200, Amos Jeffries wrote:
 On 03.04.2012 02:21, Peter Olsson wrote:
  Hello!
 
  Squid 3.1.19.
 
  Our squid servers are dual stack IPv4/IPv6 since about a year,
  with this config hack:
 
  tcp_outgoing_address x:x:x:x::x to_ipv6
  tcp_outgoing_address x.x.x.x !to_ipv6
  acl to_ipv6 dst ipv6
  http_access allow to_ipv6 !all
 
  But now our users are tired of webs that announce IPv6 addresses
  but don't answer on port 80 on these addresses. So I enabled
  dns_v4_first in the config and did squid -k reconfigure.
  But it didn't help, we still get IPv6 timeouts towards
  misconfigured web sites.
 
  I'm guessing that dns_v4_first and the ipv6 config above are
  mutually exclusive? Should I change the tcp_outgoing_address
  line to just this:
  tcp_outgoing_address x:x:x:x::x
  tcp_outgoing_address x.x.x.x
  and remove these lines:
  acl to_ipv6 dst ipv6
  http_access allow to_ipv6 !all
 
  Or will this remove all of our IPv6 connectivity through squid?
 
 
 You are the first person to report any issues. They are interrelated 
 but should not be exclusive. Does ordering the tcp_outgoing_address with 
 IPv4 address first help?
 
 Amos

Changing order of tcp_outgoing_address doesn't help, our squid with
dns_v4_first on still gives the Operation timed out error, and it
is trying to connect to the IPv6 address of the web server.

I also tried removing these four lines completely:
tcp_outgoing_address x:x:x:x::x to_ipv6
tcp_outgoing_address x.x.x.x !to_ipv6
acl to_ipv6 dst ipv6
http_access allow to_ipv6 !all

But that didn't help either, it still tries the IPv6 address even
though I have dns_v4_first on.

Is there some internal DNS timeout in squid that I should wait for
before testing between changes?

What debug setting should I use to see why squid is choosing the
IPv6 address?

Thanks!

-- 
Peter Olsson			p...@leissner.se


Re: [squid-users] Are dns_v4_first and acl to_ipv6 dst ipv6 mutually exclusive?

2012-04-02 Thread Amos Jeffries

On 03.04.2012 12:12, Peter Olsson wrote:

On Tue, Apr 03, 2012 at 10:28:38AM +1200, Amos Jeffries wrote:

On 03.04.2012 02:21, Peter Olsson wrote:
 Hello!

 Squid 3.1.19.

 Our squid servers are dual stack IPv4/IPv6 since about a year,
 with this config hack:

 tcp_outgoing_address x:x:x:x::x to_ipv6
 tcp_outgoing_address x.x.x.x !to_ipv6
 acl to_ipv6 dst ipv6
 http_access allow to_ipv6 !all

 But now our users are tired of webs that announce IPv6 addresses
 but don't answer on port 80 on these addresses. So I enabled
 dns_v4_first in the config and did squid -k reconfigure.
 But it didn't help, we still get IPv6 timeouts towards
 misconfigured web sites.

 I'm guessing that dns_v4_first and the ipv6 config above are
 mutually exclusive? Should I change the tcp_outgoing_address
 line to just this:
 tcp_outgoing_address x:x:x:x::x
 tcp_outgoing_address x.x.x.x
 and remove these lines:
 acl to_ipv6 dst ipv6
 http_access allow to_ipv6 !all

 Or will this remove all of our IPv6 connectivity through squid?


You are the first person to report any issues. They are interrelated
but should not be exclusive. Does ordering the tcp_outgoing_address 
with

IPv4 address first help?

Amos


Changing order of tcp_outgoing_address doesn't help, our squid with
dns_v4_first on still gives the Operation timed out error, and it
is trying to connect to the IPv6 address of the web server.

I also tried removing these four lines completely:
tcp_outgoing_address x:x:x:x::x to_ipv6
tcp_outgoing_address x.x.x.x !to_ipv6
acl to_ipv6 dst ipv6
http_access allow to_ipv6 !all

But that didn't help either, it still tries the IPv6 address even
though I have dns_v4_first on.

Is there some internal DNS timeout in squid that I should wait for
before testing between changes?


Er, yes. Whatever the TTL of the domain being tested against is. A 
restart clears the DNS caches, so it may be better here than just a 
reconfigure.




What debug setting should I use to see why squid is choosing the
IPv6 address?


comm (5) and DNS (78) sections at level 6. Possibly more if that is not 
enough.
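As a squid.conf fragment, the suggestion above might look like this (a sketch; the section and level numbers come from the advice above, the rest from squid's debug_options syntax):

```
# Raise comm (section 5) and DNS (section 78) debugging to level 6,
# leaving everything else at the default level 1:
debug_options ALL,1 5,6 78,6
```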


Amos


Re: [squid-users] Logging ACL name with requests

2012-04-02 Thread Amos Jeffries

On 03.04.2012 12:02, Will Roberts wrote:

Hi,

I'm trying to log the name of the ACL that allowed/denied access for
a particular request. I have a patch that seems to work fine on all 
my

machines except one. On that one machine it'll work fine for several
hours, but then begins logging other garbage; sometimes parts of 
URLs,

other times it's just random bytes. I think my patch is correct and
this machine has a problem, but I'd appreciate it if someone could
take a look.

My real goal is to associate a username with requests that are
allowed based on a whitelisted IP. I had originally done this using 
an

external acl helper, but found that it was too slow and would cause
connections to randomly fail. So instead I now generate a .conf file
that is included with my main squid config which looks like this:

acl foo src 10.3.4.0/24
acl foo src 10.4.5.0/24
http_access allow foo


At this point the ACL foo=true allowed it.



acl bar src 120.3.4.0/24
acl bar src 120.4.5.0/24
http_access allow bar


At this point the ACL foo=false and bar=true allowed it.

Implicit default rule: http_access deny all

At this point the ACL foo=false and bar=false and src-IP denied it.



hence why I'm then trying to log the name of the ACL that allowed the
connection. If there's a different way of doing that I'm open to
suggestions.

Here's the patch, I allowed the ACL to be accessible via its own
token or to replace the user one if the user is null.



What you are logging is the last ACL tested. In the case of default 
rules, they do not get tested as matches, so the deny line there above 
will deny with ACL name bar.


The whole config file line being matched would be a better thing to log 
if you can find it.


PS. Patches to squid-dev please so they can be audited.


Amos


Re: [squid-users] Are dns_v4_first and acl to_ipv6 dst ipv6 mutually exclusive?

2012-04-02 Thread Peter Olsson
On Tue, Apr 03, 2012 at 12:22:52PM +1200, Amos Jeffries wrote:
 On 03.04.2012 12:12, Peter Olsson wrote:
  On Tue, Apr 03, 2012 at 10:28:38AM +1200, Amos Jeffries wrote:
  On 03.04.2012 02:21, Peter Olsson wrote:
   Hello!
  
   Squid 3.1.19.
  
   Our squid servers are dual stack IPv4/IPv6 since about a year,
   with this config hack:
  
   tcp_outgoing_address x:x:x:x::x to_ipv6
   tcp_outgoing_address x.x.x.x !to_ipv6
   acl to_ipv6 dst ipv6
   http_access allow to_ipv6 !all
  
   But now our users are tired of webs that announce IPv6 addresses
   but don't answer on port 80 on these addresses. So I enabled
   dns_v4_first in the config and did squid -k reconfigure.
   But it didn't help, we still get IPv6 timeouts towards
   misconfigured web sites.
  
   I'm guessing that dns_v4_first and the ipv6 config above are
   mutually exclusive? Should I change the tcp_outgoing_address
   line to just this:
   tcp_outgoing_address x:x:x:x::x
   tcp_outgoing_address x.x.x.x
   and remove these lines:
   acl to_ipv6 dst ipv6
   http_access allow to_ipv6 !all
  
   Or will this remove all of our IPv6 connectivity through squid?
  
 
  You are the first person to report any issues. They are interrelated
  but should not be exclusive. Does ordering the tcp_outgoing_address 
  with
  IPv4 address first help?
 
  Amos
 
  Changing order of tcp_outgoing_address doesn't help, our squid with
  dns_v4_first on still gives the Operation timed out error, and it
  is trying to connect to the IPv6 address of the web server.
 
  I also tried removing these four lines completely:
  tcp_outgoing_address x:x:x:x::x to_ipv6
  tcp_outgoing_address x.x.x.x !to_ipv6
  acl to_ipv6 dst ipv6
  http_access allow to_ipv6 !all
 
  But that didn't help either, it still tries the IPv6 address even
  though I have dns_v4_first on.
 
  Is there some internal DNS timeout in squid that I should wait for
  before testing between changes?
 
 Er, yes. Whatever the TTL of the domain being tested against is. A 
 restart clears the DNS caches, so may be better here than just a 
 reconfigure.

Excellent! It works now after restart. I will keep the ipv6 lines
above out of our config, I don't think we really need them.

Thanks!
 
-- 
Peter Olsson			p...@leissner.se
CCIE #8963 RS, Security		+46 520 500511
Leissner Data AB		+46 701 809511


Re: [squid-users] Logging ACL name with requests

2012-04-02 Thread Will Roberts

On 04/02/2012 08:41 PM, Amos Jeffries wrote:

On 03.04.2012 12:02, Will Roberts wrote:
What you are logging is the last ACL tested. In the case of default
rules, they do not get tested as matches, so the deny line there above
will deny with ACL name bar.


Right. In my config the last ACL tested will be the one that allowed or 
denied; I don't have any lines that look like


http_access allow acl1 acl2


The whole config file line being matched would be better thing to log if
you can find it.


For general usability yes, but that wasn't my goal.


PS. Patches to squid-dev please so they can be audited.


I CC'ed squid-dev, but I don't think this is really a patch that should 
be integrated with squid, which is why I primarily sent it to squid-users.


Regards,
--Will


RE: [squid-users] time error squid

2012-04-02 Thread Amos Jeffries

On 03.04.2012 04:19, Jose R. Cristo Almaguer wrote:

Amos, sorry for any inconvenience and thanks for your time. The
problem is also in the squid logs. First, how do I switch on the
template? To fix the time in the logs I have to modify the logformat
of squid, but I tried all the possibilities and nothing changes. What
do you recommend?


Please stop thinking of this as a fix. It is not; what you are doing 
is _breaking_ things in order to get a little bit of temporary comfort.


To change the logs you need to write your own log format to display 
local time instead of the transaction's UTC time 
(http://www.squid-cache.org/Doc/config/logformat/). Be aware this breaks 
almost all log processing and analysis tools, as well as causing the 
logs to display time-travel behaviour when 
leap-seconds/minutes/hours/days or daylight saving changes happen.
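As a sketch of what such a custom format could look like (this mirrors squid's default "squid" logformat but substitutes the %tl local-time code for the %ts.%03tu UTC timestamp; the file path is illustrative):

```
# Local-time access log: same fields as the built-in "squid" format,
# but with %tl (local time) replacing %ts.%03tu (UTC epoch time).
logformat localtime %tl %6tr %>a %Ss/%03>Hs %<st %rm %ru %un %Sh/%<A %mt
access_log /var/log/squid/access.log localtime
```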


Amos