Re: [squid-users] IPv4 addresses go missing - markAsBad wrong?

2024-02-20 Thread Alex Rousskov

On 2024-02-12 06:46, Stephen Borrill wrote:

On 16/01/2024 14:37, Alex Rousskov wrote:

On 2024-01-16 06:01, Stephen Borrill wrote:
The problem is no different with 6.6. Is there any more debugging I 
can provide, Alex?


Yes, but I need to give you a patch that adds that (temporary) 
debugging first (assuming I fail to reproduce the problem in the lab). 
The ball is on my side (unless somebody else steps in). Unfortunately, 
I do not have any free time for any of that right now. If you do not 
hear from me sooner, please ping me again on or after February 8, 2024.



PING!


I reproduced this bug and posted a minimal master/v7 fix for the 
official review: https://github.com/squid-cache/squid/pull/1691


Please test the corresponding patch; it applies to Squid v5 and v6:

https://github.com/squid-cache/squid/commit/7d255a72131217d30af3653cec10452fa53289c3.patch


Thank you,

Alex.


I will get 6.7 compiled up so we can add debugging to it quickly. It 
would be good if we could get something in place this week as it is 
school holidays next week in the UK and so there will be little 
opportunity to test until afterwards.



On 10/01/2024 12:40, Stephen Borrill wrote:

On 09/01/2024 15:42, Alex Rousskov wrote:

On 2024-01-09 05:56, Stephen Borrill wrote:

On 09/01/2024 09:51, Stephen Borrill wrote:

On 09/01/2024 03:41, Alex Rousskov wrote:

On 2024-01-08 08:31, Stephen Borrill wrote:
I'm trying to determine why Squid 6.x (seen with 6.5), connected 
via IPv4 only, periodically fails to connect to the destination 
and then requires a restart to fix it (a reload is not sufficient).


The problem appears to be that a host that has one address each 
of IPv4 and IPv6 occasionally has its IPv4 address go missing 
as a destination. On closer inspection, this appears to happen 
when the IPv6 address (not the IPv4 address) is marked as bad.


ipcache.cc(990) have: [2001:4860:4802:32::78]:443 at 0 in 
216.239.38.120 #1/2-0



Thank you for sharing more debugging info!


The following seemed odd too: it finds an IPv4 address (this host 
does not have IPv6), puts it in the cache, and then says "No DNS 
records":


2024/01/09 12:31:24.020 kid1| 14,4| ipcache.cc(617) nbgethostbyname: 
schoolbase.online
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(313) ipcacheRelease: 
ipcacheRelease: Releasing entry for 'schoolbase.online'
2024/01/09 12:31:24.020 kid1| 14,5| ipcache.cc(670) 
ipcache_nbgethostbyname_: ipcache_nbgethostbyname: MISS for 
'schoolbase.online'
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(480) ipcacheParse: 1 
answers for schoolbase.online
2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(995) have:  no 
20.54.32.34 in [no cached IPs]
2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(995) have:  no 
20.54.32.34 in [no cached IPs]
2024/01/09 12:31:24.020 kid1| 14,5| ipcache.cc(549) updateTtl: use 
first 69 from RR TTL 69
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(535) addGood: 
schoolbase.online #1 20.54.32.34
2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(253) forwardIp: 
20.54.32.34
2024/01/09 12:31:24.020 kid1| 44,2| peer_select.cc(1174) handlePath: 
PeerSelector72389 found conn564274 local=0.0.0.0 
remote=20.54.32.34:443 HIER_DIRECT flags=1, destination #1 for 
schoolbase.online:443
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(459) latestError: 
ERROR: DNS failure while resolving schoolbase.online: No DNS records
2024/01/09 12:31:24.020 kid1| 14,3| ipcache.cc(586) 
ipcacheHandleReply: done with schoolbase.online: 20.54.32.34 #1/1-0
2024/01/09 12:31:24.020 kid1| 14,7| ipcache.cc(236) finalCallback: 
0x1b7381f38  lookup_err=No DNS records


It seemed to happen at about the same time as the other failure, so 
it is perhaps another symptom of the same problem.


The above log line is self-contradictory AFAICT: It says that the 
cache has both an IPv6-looking and an IPv4-looking address at the 
same cache position (0) and, judging by the corresponding code, 
those two IP addresses are equal. This is not possible (for those 
specific IP address values). The subsequent Squid behavior can be 
explained by this (unexplained) conflict.


I assume you are running official Squid v6.5 code.


Yes, compiled from source on NetBSD. I have the patch I refer to 
here applied too:

https://lists.squid-cache.org/pipermail/squid-users/2023-November/026279.html


I can suggest the following two steps for going forward:

1. Upgrade to the latest Squid v6 in the hope that the problem goes away.


I have just upgraded to 6.6.

2. If the problem is still there, patch the latest Squid v6 to add 
more debugging in the hope of explaining what is going on. This may 
take a few iterations, and it will take me some time to produce the 
necessary debugging patch.


Unfortunately, I don't have a test case that will cause the problem 
so I need to run this at a customer's production site that is 
particularly affected by it. Luckily, the problem recurs pretty 
quickly.


Here's a run with 6.6 where the number of destinations drops from 2 
to 1 before reverting. Not seen this b

Re: [squid-users] Unable to filter javascript exchanges

2024-02-20 Thread Alex Rousskov

On 2024-02-12 17:40, speed...@chez.com wrote:

I'm using Squid 3.5.24 (included in Synology DSM 6) and I have an 
issue with a time ACL. All works fine except on some websites like 
myhordes.de. Once the user is connected to this kind of website, the 
time ACL has no effect as long as the web page is not reloaded. All 
data sent and received by the JavaScript scripts continues going thru 
the proxy server without any filtering.


Squid does not normally evaluate ACLs while tunneling traffic: Various 
directives are checked at the tunnel establishment time and after the 
tunnel is closed, but not when bytes are shoveled back and forth between 
a TCP client and a TCP server.


The same can be said about processing (large) HTTP message bodies.

If your use case involves CONNECT tunnels, intercepted (but not bumped) 
TLS connections, or very large/slow HTTP messages, then you need to 
enhance Squid to apply some [time-related] checks "in the middle of a 
[long] transaction".


https://wiki.squid-cache.org/SquidFaq/AboutSquid#how-to-add-a-new-squid-feature-enhance-of-fix-something

N.B. Squid v3 is very buggy and has not been supported by the Squid 
Project for many years. Please upgrade to Squid v6 or later. The upgrade 
itself will not add a "check directive X when tunneling for a long time" 
feature though.
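
To illustrate the distinction, here is a minimal untested squid.conf sketch (the ACL name and hours are made up): the time ACL below is evaluated when each connection or CONNECT tunnel is established, but not while bytes are shoveled inside an already-established tunnel.

```
# Illustrative only: checked at connection/tunnel establishment,
# not re-checked for bytes flowing inside an established tunnel.
acl worktime time MTWHF 09:00-17:00
http_access allow worktime
http_access deny all
```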



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid delay_access with external acl

2024-02-20 Thread Alex Rousskov

On 2024-02-20 03:14, Francesco Chemolli wrote:


acl users ext_user foo bar gazonk
http_access allow users all  # always allow


The above does not always allow. What you meant is probably this:

# This rule never matches. It is used for its side effect:
# The rule evaluates users ACL, caching evaluation result.
http_access allow users !all



delay_access 3 allow users

should do the trick


... but sometimes it will not. The wiki recommendation to "exploit 
caching" is an ugly, outdated hack that should be avoided. The correct 
solution these days is to use an annotate_transaction ACL to mark the 
transaction accordingly. Here is an untested sketch:


acl fromUserThatShouldBeLimited ext_user ...
acl markAsLimited annotate_transaction limited=yes
acl markedAsLimited note limited yes

# This rule never matches; used for its annotation side effect.
http_access allow fromUserThatShouldBeLimited markAsLimited !all

delay_access 3 allow markedAsLimited
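
Note that delay_access 3 presupposes that delay pool #3 exists. A minimal, untested pool definition to pair with the sketch above might look like this (the 64 KB/s figure is arbitrary, and pools 1 and 2 would need their own delay_class lines):

```
# Class 1 pool: a single aggregate bucket for all matching traffic.
delay_pools 3
delay_class 3 1
# restore/max in bytes: refill ~64000 B/s, bucket capacity 64000 B
delay_parameters 3 64000/64000
```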

HTH,

Alex.




On Tue, Feb 20, 2024 at 2:15 PM Szilárd Horváth wrote:

Good Day!

I am trying to limit bandwidth for a user group. I have an
external ACL which gets the users from an LDAP database server. In
the old version of the config we blocked the internet with
http_access deny GROUP, but now I am trying to allow internet
access with limited bandwidth. I know that delay_access works only
with fast ACLs, and external ACLs and proxy_auth ACLs are slow. I
have already tried several approaches but could not solve it.

Do you have any solution for this, or any idea how to limit the
bandwidth for some users? I need to use the username (in e-mail
address format) because that is what is used to log in to the proxy.

Version: Squid Cache: Version 5.6

Thank you so much; I am waiting for your answer!

Have a good day!

Br,
Szilard Horvath





--
     Francesco





Re: [squid-users] Google recaptcha use

2024-02-20 Thread Dsant

Solved!

I had to allow more than just google.com/recaptcha/.

To find the missing domains, run: tail -f /var/log/squid/access.log

I had to add (not all are necessarily mandatory): .ireby.fr .mozilla.org 
.callnowbutton.com .googleapis.com .consentmanager.net 
.googletagmanager.com .gstatic.com
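
Collected into a squid.conf fragment, the resulting allow-list might look like this (untested; domain list taken from above, ACL name from the original question):

```
# Squid merges acl lines sharing a name into one list.
acl mydest dstdomain .projet-voltaire.fr
acl mydest dstdomain .ireby.fr .mozilla.org .callnowbutton.com
acl mydest dstdomain .googleapis.com .consentmanager.net
acl mydest dstdomain .googletagmanager.com .gstatic.com
http_access allow mydest
http_access deny all
```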


Thanks a lot.

Dsant from France


On 2/20/24 09:40, Stephen Borrill wrote:

On 20/02/2024 08:06, Dsant wrote:
Hello, I set up a Squid proxy. I want to allow some sites and Google 
reCAPTCHA, and block everything else.


acl mydest dstdomain .projet-voltaire.fr
http_access allow mydest
acl  google_recaptcha url_regex ^www.google.com/recaptcha/$
http_access allow google_recaptcha
http_access deny all

The captcha is not showing. Is there a syntax error?


www.google.com is an HTTPS site. This means that, from the point of 
view of the proxy, only the hostname is visible (i.e. www.google.com) 
and so your regex can never match. Look in your logs; you will see:


CONNECT www.google.com

and not:

GET http://www.google.com/recaptcha/

The only way round this is to use ssl_bump to intercept and decrypt 
the traffic so that the HTTP request is visible. This is, however, not 
for the faint-hearted and will require a CA certificate to be 
installed on each client machine.


Re: [squid-users] Google recaptcha use

2024-02-20 Thread Stephen Borrill

On 20/02/2024 08:06, Dsant wrote:
Hello, I set up a Squid proxy. I want to allow some sites and Google 
reCAPTCHA, and block everything else.


acl mydest dstdomain .projet-voltaire.fr
http_access allow mydest
acl  google_recaptcha url_regex ^www.google.com/recaptcha/$
http_access allow google_recaptcha
http_access deny all

The captcha is not showing. Is there a syntax error?


www.google.com is an HTTPS site. This means that, from the point of 
view of the proxy, only the hostname is visible (i.e. www.google.com) 
and so your regex can never match. Look in your logs; you will see:


CONNECT www.google.com

and not:

GET http://www.google.com/recaptcha/

The only way round this is to use ssl_bump to intercept and decrypt the 
traffic so that the HTTP request is visible. This is, however, not for 
the faint-hearted and will require a CA certificate to be installed on 
each client machine.
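
For reference, a minimal untested ssl_bump sketch (Squid v4+ syntax; certificate paths are illustrative, and Squid must be built with OpenSSL support):

```
# The CA certificate referenced here must be installed on every client.
http_port 3128 ssl-bump \
    tls-cert=/etc/squid/ca-cert.pem tls-key=/etc/squid/ca-key.pem \
    generate-host-certificates=on
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
```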


--
Stephen




Re: [squid-users] Squid delay_access with external acl

2024-02-20 Thread Francesco Chemolli
Hello Szilárd,
  quoting from the squid wiki:

"A possible workaround which can mitigate the effect of this characteristic
consists in exploiting caching, by setting some “useless” ACL checks in
slow clauses, so that subsequent fast clauses may have a cached result to
evaluate against."

In other words, a simplified example like:

acl users ext_user foo bar gazonk
http_access allow users all  # always allow; verify and cache user auth
delay_access 3 allow users

should do the trick


On Tue, Feb 20, 2024 at 2:15 PM Szilárd Horváth wrote:

> Good Day!
>
> I try to make limitation bandwidth for some user group. I have an external
> acl which get the users from ldap database server. In the old version of
> config we blocked the internet with http_access deny GROUP, but now i try
> to allow the internet which has limited bandwidth. I know that the
> delay_access work with only fast ACL and external acl or proxy_auth acl are
> slow. I already tried some opportunity but i couldn't solve.
>
> Maybe have you any solution for this? Or any idea how can limitation the
> bandwidth for some user? I need use the username (e-mail address format)
> because that use to login to the proxy.
>
> Version: Squid Cache: Version 5.6
>
>
>
> Thank you so much and i am waiting for your answer!
>
> Have a good day!
>
> Br,
> Szilard Horvath


-- 
Francesco


[squid-users] Google recaptcha use

2024-02-20 Thread Dsant
Hello, I set up a Squid proxy. I want to allow some sites and Google 
reCAPTCHA, and block everything else.


acl mydest dstdomain .projet-voltaire.fr
http_access allow mydest
acl  google_recaptcha url_regex ^www.google.com/recaptcha/$
http_access allow google_recaptcha
http_access deny all

The captcha is not showing. Is there a syntax error?

Thanks.

