Re: [squid-users] v6.12 build error from release tarball

2024-11-08 Thread Alex Rousskov

On 2024-11-08 02:33, Marko, Peter wrote:


I don't really know autotools and how they are supposed to be used.


We are in the same boat here -- there are a lot of gray areas and dark 
corners with autotools. Bootstrapping of components is one of them, as 
commit 82e18891 message illustrates. And in your particular case, you 
are essentially dealing with bootstrapping a component of a component!


As an additional disclaimer, I know nothing about Yocto :-).



This process (using a bootstrapped release tarball + running autoreconf) is
the standard way the Yocto project builds autotools-based components.


Please consider adjusting Yocto "standard way" to honor and use 
component-supplied bootstrap.sh (instead of autoreconf) when it exists.
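
For example, a build recipe could do something along these lines (an
untested sketch; the exact Yocto recipe syntax will differ):

  # Prefer the component-supplied bootstrap script when present;
  # fall back to autoreconf only for components that do not ship one.
  if [ -x ./bootstrap.sh ]; then
      ./bootstrap.sh
  else
      autoreconf --force --install
  fi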




Maybe because release tarballs are bootstrapped with a possibly/usually
different version of libtool and other toolchain parts than are used by Yocto


Yes, most likely they are, but the differences may not matter to Yocto. 
And if they do matter to Yocto, then Yocto should honor the component's 
way of bootstrapping before blindly assuming that autoreconf "works".



or because of different configuration options?


Squid bootstrapping does not have options, but even if it had them, 
running autoreconf (without options) would probably ignore them, so 
bootstrapping options are unlikely to be the cause here.




Your explanation makes sense to me; I now have some idea why this
build problem occurs and some arguments for my patch within Yocto.


I am very glad to hear that you are making progress.


Good luck,

Alex.



-----Original Message-
From: Alex Rousskov 
Sent: Friday, November 8, 2024 5:27
To: squid-users@lists.squid-cache.org
Cc: Marko, Peter (FT D EU SK BFS1) 
Subject: Re: [squid-users] v6.12 build error from release tarball

On 2024-11-07 16:48, Marko, Peter wrote:


Commit [1] removed the directory libltdl/m4 from the release tarball by merging
all those files into libltdl/aclocal.m4,


Clarification: While commit b4addc22 itself did not remove any
directories or merge any files, bootstrapping Squid after that commit
may have such an effect. The exact bootstrapping outcome depends, in
part, on the bootstrapping environment (e.g., installed libtool version)...



however, makefiles still reference it, causing the following error (in
the Yocto project):


libltdl/Makefile.am that references m4 directory comes from Libtool.
That particular Makefile.am does not exist in primary Squid sources
(i.e. what gets committed to the official git repository). It gets
created (by libtoolize IIRC) during bootstrapping of Squid sources.




| autoreconf: Entering directory 'libltdl'


To bootstrap Squid, one has to run ./bootstrap.sh instead of autoreconf.
AFAIK, Squid does not fully support bootstrapping with autoreconf;
autoreconf fails to do the right thing in some environments. If
autoreconf had worked for you, it was just temporary luck.

However, _why_ run autoreconf (i.e. bootstrapping Squid) after
downloading _bootstrapped_ sources?! Clarifying this contradiction may
help identify and address the underlying problem.


Thank you,

Alex.







| autoreconf: configure.ac: not using Gettext
| autoreconf: running: aclocal 
--system-acdir=WORKDIR/recipe-sysroot/usr/share/aclocal/ -I 
WORKDIR/squid-6.12/acinclude/ -I 
WORKDIR/recipe-sysroot-native/usr/share/aclocal/ --force -I m4

| aclocal: error: couldn't open directory 'm4': No such file or directory
| autoreconf: error: aclocal failed with exit status: 1

The following change to the release tarball will make the build pass:
diff squid-6.12-orig/libltdl/Makefile.am squid-6.12/libltdl/Makefile.am
--- squid-6.12-orig/libltdl/Makefile.am
+++ squid-6.12/libltdl/Makefile.am
@@ -29,7 +29,7 @@
   ## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
   #

-ACLOCAL_AMFLAGS = -I m4
+ACLOCAL_AMFLAGS =
   AUTOMAKE_OPTIONS = foreign
   AM_CPPFLAGS =
   AM_LDFLAGS =
diff squid-6.12-orig/libltdl/Makefile.in squid-6.12/libltdl/Makefile.in
--- squid-6.12-orig/libltdl/Makefile.in
+++ squid-6.12/libltdl/Makefile.in
@@ -448,7 +448,7 @@ target_alias = @target_alias@
   top_build_prefix = @top_build_prefix@
   top_builddir = @top_builddir@
   top_srcdir = @top_srcdir@
-ACLOCAL_AMFLAGS = -I m4
+ACLOCAL_AMFLAGS =
   AUTOMAKE_OPTIONS = foreign

   # -I$(srcdir) is needed for user that built libltdl with a sub-Automake

I don't know how to fix it in the source repository.
Help would be appreciated.

Thanks,
Peter

[1] https://github.com/squid-cache/squid/commit/b4addc2262e5bee37543f8d1ab9dd98337bafba3

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users




___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] v6.12 build error from release tarball

2024-11-07 Thread Alex Rousskov

On 2024-11-07 16:48, Marko, Peter wrote:


Commit [1] removed the directory libltdl/m4 from the release tarball by merging
all those files into libltdl/aclocal.m4,


Clarification: While commit b4addc22 itself did not remove any 
directories or merge any files, bootstrapping Squid after that commit 
may have such an effect. The exact bootstrapping outcome depends, in 
part, on the bootstrapping environment (e.g., installed libtool version)...




however, makefiles still reference it, causing the following error (in
the Yocto project):


libltdl/Makefile.am that references m4 directory comes from Libtool. 
That particular Makefile.am does not exist in primary Squid sources 
(i.e. what gets committed to the official git repository). It gets 
created (by libtoolize IIRC) during bootstrapping of Squid sources.





| autoreconf: Entering directory 'libltdl'


To bootstrap Squid, one has to run ./bootstrap.sh instead of autoreconf. 
AFAIK, Squid does not fully support bootstrapping with autoreconf; 
autoreconf fails to do the right thing in some environments. If 
autoreconf had worked for you, it was just temporary luck.


However, _why_ run autoreconf (i.e. bootstrapping Squid) after 
downloading _bootstrapped_ sources?! Clarifying this contradiction may 
help identify and address the underlying problem.



Thank you,

Alex.







| autoreconf: configure.ac: not using Gettext
| autoreconf: running: aclocal 
--system-acdir=WORKDIR/recipe-sysroot/usr/share/aclocal/ -I 
WORKDIR/squid-6.12/acinclude/ -I 
WORKDIR/recipe-sysroot-native/usr/share/aclocal/ --force -I m4
| aclocal: error: couldn't open directory 'm4': No such file or directory
| autoreconf: error: aclocal failed with exit status: 1

The following change to the release tarball will make the build pass:
diff squid-6.12-orig/libltdl/Makefile.am squid-6.12/libltdl/Makefile.am
--- squid-6.12-orig/libltdl/Makefile.am
+++ squid-6.12/libltdl/Makefile.am
@@ -29,7 +29,7 @@
  ## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
  #
  
-ACLOCAL_AMFLAGS = -I m4
+ACLOCAL_AMFLAGS =
  AUTOMAKE_OPTIONS = foreign
  AM_CPPFLAGS =
  AM_LDFLAGS =
diff squid-6.12-orig/libltdl/Makefile.in squid-6.12/libltdl/Makefile.in
--- squid-6.12-orig/libltdl/Makefile.in
+++ squid-6.12/libltdl/Makefile.in
@@ -448,7 +448,7 @@ target_alias = @target_alias@
  top_build_prefix = @top_build_prefix@
  top_builddir = @top_builddir@
  top_srcdir = @top_srcdir@
-ACLOCAL_AMFLAGS = -I m4
+ACLOCAL_AMFLAGS =
  AUTOMAKE_OPTIONS = foreign
  
  # -I$(srcdir) is needed for user that built libltdl with a sub-Automake


I don't know how to fix it in the source repository.
Help would be appreciated.

Thanks,
   Peter

[1] 
https://github.com/squid-cache/squid/commit/b4addc2262e5bee37543f8d1ab9dd98337bafba3
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Redmine Bug #14390: Squid: SECURITY ALERT: Host header forgery detected

2024-10-31 Thread Alex Rousskov

On 2024-10-30 20:46, Jonathan Lee wrote:
Hello, thank you for the update Francesco; there is also some chatter 
about bugs within the Netgate community. Is this also related to the 
fixes in v7 (please see Redmine attached)?


AFAICT, Redmine Bug #14390 is pretty much unrelated to "Joshua 55" 
vulnerabilities.



This Redmine should have been more concise and simplified within its 
notes; it seems to just generate confusion. I do not have issues like 
this, which is why I question what this is related to. Can someone with 
higher-level knowledge about Squid please respond to this Redmine for 
verification?


FWIW, I did not find Redmine Bug #14390 particularly confusing: Folks 
are having problems with a particular Squid functionality. Those 
problems are known within the Squid community. Unfortunately, nobody who 
can properly address them has stepped up to do so (so far; for various 
reasons). Comment #15 looks like an out-of-bug-scope distraction to me; 
I am not sure what should be "verified", but Redmine users are welcome 
to seek Squid help here (and some of them may have already).



I hate to see this removed for some 
simple reason like a PHP issue that causes configuration issues.


AFAICT, Redmine Bug #14390 is not specific to PHP clients, and there are 
no good configuration-only solutions for the problem that bug identifies.



HTH,

Alex.

Bug #14390: Squid: SECURITY ALERT: Host header forgery detected - 
pfSense Packages - pfSense bugtracker 



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] proxy_auth_regex

2024-10-28 Thread Alex Rousskov

On 2024-10-28 16:35, Piana, Josh wrote:


TL;DR, anytime I turn on one of our ACL's that have
"proxy_auth_regex", I'm unable to access the internet through the
proxy at all.


Hello Josh,

Your Squid authentication helper probably does not work. Until that 
problem is fixed, you will not be able to use authentication ACLs 
reliably or at all, as detailed at 
https://lists.squid-cache.org/pipermail/squid-users/2024-October/027224.html


Alex.



Here's an example of one of our rules:

# block certain user IDs from using proxy server
acl block_user proxy_auth_regex -i "/etc/squid/block_user"
http_access deny block_user

I'm testing with an account that is not on the "block_user" list. Yet, when I browse to known good 
sites, I'm met with a generic webpage error, and not a squid error. The generic error, using Firefox, is 
"The proxy server is refusing connections". Activating that ACL should not have this behavior; only a 
username that is on the block_user list should be affected. The same happens for the 3 other rules 
we're using "proxy_auth_regex" for, which leads me to believe we're not able to pass what the 
directive is asking for.

The issue is, I'm having trouble pinpointing it.

I went here, on the Squid config pages and looked into it further:
https://www.squid-cache.org/Doc/config/acl/

I read the notes for this directive but I'm still a bit confused on where the 
issue is. Authentication seems to be working, but it's like this term either 
doesn't pass the credentials along, or it's expecting some other response. Is 
there anyone that could help me figure out what the issue is with this?

Thank you,
Josh

-Original Message-
From: Alex Rousskov 
Sent: Thursday, October 24, 2024 4:46 PM
To: Piana, Josh ; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] proxy_auth_regex

Caution: This email originated from outside of Hexcel. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.


On 2024-10-24 16:23, Piana, Josh wrote:


 From what I can tell, squid does not receive a good username. When I check the 
access logs, I receive something like this:



24/Oct/2024:16:01:08 -0400.334 10.46.49.190 TCP_DENIED/407 7821 CONNECT
www.google.com:443 - \ HIER_NONE/- text/html ERR_CACHE_ACCESS_DENIED/-


Please check CONNECT request headers in packet captures or cache.log with 
debug_options set to ALL,2. If client does not send any user credentials to 
Squid, then Squid should request them (with a 407 CONNECT response containing 
appropriate headers).



In regards to the TLS connectors vs HTTP CONNECT requests ...

  > I'm not sure if that is the case

Does the problematic test transaction arrive at http_port or https_port?


  > is "proxy_auth_regex" not compatible with certain things?

Nearly every Squid configuration option is not compatible with certain things.


  > How would you recommend I test plain http traffic?

I would start with

  curl --verbose ... http://example.com/

or a similar request (add proxy/other options instead of "..." as needed).

Alex.




However, I think some of my log information may be missing. I believe you mentioned it 
before, so I'll show you what I'm using for our custom log directive. Maybe we can fix 
this too? What I originally tried to do with this was just get "human readable" 
time stamps:
logformat custom %tl.%03tu %>a %Ss/%03>Hs %
Sent: Thursday, October 24, 2024 4:13 PM
To: Piana, Josh ;
squid-users@lists.squid-cache.org
Subject: Re: [squid-users] proxy_auth_regex

Caution: This email originated from outside of Hexcel. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.


On 2024-10-24 15:53, Piana, Josh wrote:

Hey Squid users,

Running into an issue I’m trying to figure out.

We have a few acl directives using “proxy_auth_regex –i” and when I
have these active, it blocks any proxy connection with an HTTP 407
error, according to the logs.

Here’s an example:

# block certain user IDs from using proxy server

#acl block_user proxy_auth_regex -i "/etc/squid/block_user"

#http_access deny block_user

What’s supposed to happen with this ACL, is that any username we have
on that list is to be blocked from internet access. But it seems to
be blocking known good usernames too. I’m not sure where to go from
here


After asking for one with a 407 response, does Squid ever receive a "good 
username" from the client?

Re: [squid-users] FW: proxy_auth_regex

2024-10-28 Thread Alex Rousskov

On 2024-10-28 15:08, Piana, Josh wrote:


2024/10/25 11:49:18 kid1| ERROR: Negotiate Authentication validating user. 
Result: {result=BH, notes={message: gss_acquire_cred() failed: No credentials 
were supplied, or the credentials were unavailable or inaccessible. No 
principal in keytab matches desired name; }}


Yes, I have already responded to email with that information. Please 
continue that thread: 
https://lists.squid-cache.org/pipermail/squid-users/2024-October/027224.html


Alex.


-Original Message-
From: Alex Rousskov 
Sent: Thursday, October 24, 2024 4:46 PM
To: Piana, Josh ; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] proxy_auth_regex

Caution: This email originated from outside of Hexcel. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.


On 2024-10-24 16:23, Piana, Josh wrote:


 From what I can tell, squid does not receive a good username. When I check the 
access logs, I receive something like this:



24/Oct/2024:16:01:08 -0400.334 10.46.49.190 TCP_DENIED/407 7821 CONNECT
www.google.com:443 - \ HIER_NONE/- text/html ERR_CACHE_ACCESS_DENIED/-


Please check CONNECT request headers in packet captures or cache.log with 
debug_options set to ALL,2. If client does not send any user credentials to 
Squid, then Squid should request them (with a 407 CONNECT response containing 
appropriate headers).



In regards to the TLS connectors vs HTTP CONNECT requests ...

  > I'm not sure if that is the case

Does the problematic test transaction arrive at http_port or https_port?


  > is "proxy_auth_regex" not compatible with certain things?

Nearly every Squid configuration option is not compatible with certain things.


  > How would you recommend I test plain http traffic?

I would start with

  curl --verbose ... http://example.com/

or a similar request (add proxy/other options instead of "..." as needed).

Alex.




However, I think some of my log information may be missing. I believe you mentioned it 
before, so I'll show you what I'm using for our custom log directive. Maybe we can fix 
this too? What I originally tried to do with this was just get "human readable" 
time stamps:
logformat custom %tl.%03tu %>a %Ss/%03>Hs %
Sent: Thursday, October 24, 2024 4:13 PM
To: Piana, Josh ;
squid-users@lists.squid-cache.org
Subject: Re: [squid-users] proxy_auth_regex

Caution: This email originated from outside of Hexcel. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.


On 2024-10-24 15:53, Piana, Josh wrote:

Hey Squid users,

Running into an issue I’m trying to figure out.

We have a few acl directives using “proxy_auth_regex –i” and when I
have these active, it blocks any proxy connection with an HTTP 407
error, according to the logs.

Here’s an example:

# block certain user IDs from using proxy server

#acl block_user proxy_auth_regex -i "/etc/squid/block_user"

#http_access deny block_user

What’s supposed to happen with this ACL, is that any username we have
on that list is to be blocked from internet access. But it seems to
be blocking known good usernames too. I’m not sure where to go from
here


After asking for one with a 407 response, does Squid ever receive a "good 
username" from the client? Do you see a username in client HTTP request headers or 
access.log records containing %un field?

Perhaps the client refuses to authenticate its requests because Squid 
intercepts TLS client connections rather than receiving HTTP CONNECT requests 
from the client? Have you tested this with plain text traffic?

Alex.



we would like to use these ACL’s but for right now I have these rules
commented out.

Here's a few other rules we have that have the same issue:

# executable blocking

# reference this list for extensions to block

acl exec_files url_regex -i "/etc/squid/exec_files"

# ignore these usernames from being blocked

#acl exec_users proxy_auth_regex -i "/etc/squid/exec_users"

# combine the rules

#http_access deny !bad_exception_urls !exec_users exec_files

#deny_info ERR_BLOCK_TYPE exec_files

   From what you can see above, we have “acl exec_files url_regex -i
/etc/squid/exec_files" uncommented, but it’s not active because the
“http_access directive” had to be commented out because it includes
the other statements that include “proxy_auth_regex –i” which block
all internet access as well.


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users

Re: [squid-users] Help regarding access controls for TLS connections

2024-10-28 Thread Alex Rousskov

On 2024-10-28 11:47, Erik Schulz wrote:


I realized later that I was applying 'localnet' rules before the
dstdomain rules, which was the cause of the unauthorized dns lookup.
By rearranging the rules, such that `dstdomain -n` rules are tested
first, there is no dns lookup.


Glad you are making progress!



Well, I do see a reverse dns lookup for
the client's ip, that I can't explain, but even if that is the case,
it would require client IP spoofing. But I don't understand why the
reverse lookup happens. I'm only testing `src`, not `srcdomain`.


Check access_log (including its default in your Squid build). Some of 
the logformat %codes require reverse DNS lookups. There are also icap_log 
and some other optional logs, but they are not enabled by default IIRC.


If logs are not to blame directly, check whether your Squid code has the 
equivalent of the following commit (available in v6+ but not in v5 
AFAICT): 
https://github.com/squid-cache/squid/commit/a8c7a1107de9d9365dfe10749821f74aeedac777



If you want to preclude Squid from making prohibited DNS queries, the 
safest bet may be configuring Squid to use a DNS resolver (running on 
the same box as Squid) that does not make prohibited DNS queries. And 
whenever that resolver receives a prohibited DNS query, investigate what 
triggered it -- there may be more bugs in Squid that result in unwanted 
DNS queries. Belt and suspenders...
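
For example, a minimal squid.conf sketch (the 127.0.0.1 address is just an
assumption about where such a local resolver would listen):

  # Send all of Squid's DNS queries to a local, policy-enforcing resolver
  # instead of whatever /etc/resolv.conf lists.
  dns_nameservers 127.0.0.1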



HTH,

Alex.




# deny if not authenticated
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
acl authenticated proxy_auth REQUIRED
http_access deny !authenticated

# deny if not allowed domains
acl user_user2 proxy_auth user2
acl allowed_domains2 dstdomain -n .example.com
http_access deny user_user2 !allowed_domains2

# deny public ip
acl localnet_src src 0.0.0.1-0.255.255.255
acl localnet_src src 10.0.0.0/8
acl localnet_src src 100.64.0.0/10
acl localnet_src src 169.254.0.0/16
acl localnet_src src 172.16.0.0/12
acl localnet_src src 192.168.0.0/16
acl localnet_src src fc00::/7
acl localnet_src src fe80::/10
http_access deny !localnet_src

# deny local destination (resolved hostname)
acl localnet_dst dst 0.0.0.1-0.255.255.255
acl localnet_dst dst 10.0.0.0/8
acl localnet_dst dst 100.64.0.0/10
acl localnet_dst dst 169.254.0.0/16
acl localnet_dst dst 172.16.0.0/12
acl localnet_dst dst 192.168.0.0/16
acl localnet_dst dst fc00::/7
acl localnet_dst dst fe80::/10
http_access deny localnet_dst

acl Safe_ports port 80
acl Safe_ports port 443
http_access deny !Safe_ports

cache deny all

http_access allow


Regarding what DNS egress attack is:
The use of a forward proxy ("egress controller") is to prevent
unauthorized egress, i.e. leaking any data, including TLS or CA
private keys, which can be relatively few bytes.
Unauthorized DNS egress is low bandwidth, but that doesn't matter.
Here is a ChatGPT explanation of what a DNS egress attack is:
---
A DNS egress attack exploits DNS queries to stealthily extract
sensitive information from within a network. Here’s how it operates
and why it’s effective:
- DNS as a Covert Channel: DNS requests are typically unfiltered and
allowed through most firewalls. Attackers use DNS requests to
exfiltrate data by embedding information in DNS queries, which often
bypasses traditional egress filters.
- Steganography in DNS Requests: Attackers encode sensitive
information (like passwords or encryption keys) into DNS request
subdomains. For example, a query like data1234.attacker.com can encode
data (data1234) before resolving to the attacker-controlled domain.
- Recursive Resolver Complicity: Since DNS queries are recursively
resolved, intermediate DNS resolvers help propagate the data-laden
requests to the final DNS server controlled by the attacker. This adds
complexity to tracking and blocking the egress path.
- Low Detectability: Normal DNS queries blend into typical network
traffic, making these attacks hard to detect, especially if small
amounts of data are exfiltrated over time.
- Mitigation Gaps: Most firewalls overlook the payload within DNS
queries, focusing instead on blocking IPs or domains, which doesn't
stop the encoding of sensitive data in the DNS request itself.
- DNS data egress attacks are potent because they exploit a
foundational internet protocol for covert data transmission. Solutions
demand vigilant DNS traffic analysis and strict egress filtering
policies.
---

On Mon, Oct 28, 2024 at 12:14 AM Alex Rousskov
 wrote:


On 2024-10-25 18:18, Erik Schulz wrote:


I would like to use squid as an egress proxy, to prevent unauthorized egress.

Let's say that the only allowed egress is 'example.com'.
I can define acl along the lines of:
```
acl allowed_domains ssl::server_name .example.com
http_access allow allowed_domains
```



But can someone help me understand what actually happens?


A lot of things happen, including: after parsing a received HTTP(S) or FTP
request, Squid checks http_access rules

Re: [squid-users] Help regarding access controls for TLS connections

2024-10-27 Thread Alex Rousskov

On 2024-10-25 18:18, Erik Schulz wrote:


I would like to use squid as an egress proxy, to prevent unauthorized egress.

Let's say that the only allowed egress is 'example.com'.
I can define acl along the lines of:
```
acl allowed_domains ssl::server_name .example.com
http_access allow allowed_domains
```



But can someone help me understand what actually happens?


A lot of things happen, including: after parsing a received HTTP(S) or FTP 
request, Squid checks http_access rules and either allows or denies the 
request. For HTTPS requests that are subject to ssl_bump rules, that 
processing happens multiple times, at various SslBump steps, as detailed 
at https://wiki.squid-cache.org/Features/SslPeekAndSplice


N.B. The http_access check is a part of the "Callout Sequence" referenced on the 
above page in step1 and step2. There are some bugs in the current Squid 
implementation relative to that document, but it is still a useful starting point.
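
As an illustration only (a minimal sketch built around the ACL quoted
above; not a complete or vetted egress policy):

  # Peek at step1 to learn the TLS SNI, then splice connections whose
  # server name is on the allowlist and terminate everything else.
  acl allowed_domains ssl::server_name .example.com
  acl step1 at_step SslBump1
  ssl_bump peek step1
  ssl_bump splice allowed_domains
  ssl_bump terminate all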




I want to avoid any DNS egress attack.


How do you define "DNS egress attack"? To the extent possible, please 
answer in terms of what Squid should or should not do with traffic Squid 
receives.




The client does not have DNS access.
Am I correct that the client can use HTTPS_PROXY without DNS, such
that the proxy will perform the DNS lookup?


If you do not control the client, then you should assume that it can 
send any request to/through Squid. This includes sending HTTPS 
requests without using DNS. Whether the proxy receiving the request is 
going to perform a DNS lookup is a separate question. Many factors 
affect that proxy decision.




Can you help me understand how the acl checks the server_name?


You already know how server_name documentation answers this question. It 
is not clear to me what undocumented aspects you want to know about. 
Could you please clarify by asking a more specific question?




In order to connect to the server, it must perform a DNS lookup, which
causes a leak.


Sorry, I am not sure what "it" is in this context, but, as you probably 
already know from the same docs, "Unlike dstdomain, [ssl::server_name] 
ACL does not perform DNS lookups."


If Squid needs to connect to server X, and X is a domain name, Squid 
will (in most cases) attempt to resolve X to get server IP address(es), 
but that attempt happens before and/or after Squid evaluates the 
ssl::server_name ACL.




So the ACL must validate the server_name without a DNS lookup, and
since the server IP is therefore unknown, without connecting to the
server or verifying against its certificate.


During server_name ACL evaluation, Squid does not attempt to connect 
to any other server or service. The ACL match/mismatch decision is made 
by comparing various strings using either a domain comparison function 
(for server_name) or regex evaluation (for server_name_regex).


The server IP address may or may not be known at ssl::server_name 
evaluation time.




I'm assuming the hostname is known in the CONNECT phase of the request?


Assuming that CONNECT request has arrived at http_port, the answer 
depends on many factors, possibly including:


* whether the client supplied a hostname in CONNECT request headers;
* whether the client supplied a hostname in TLS SNI field;
* whether the CONNECT request is subject to ssl_bump rules;
* whether Squid is opening a tunnel to an origin server or cache_peer


Is it possible to check against the connect hostname only?


Without SslBump, "dstdomain -n" would probably do that. With SslBump, 
"dstdomain -n" would probably do that during SslBump step1 (and step2 if 
client does not supply TLS SNI). All this needs careful testing though; 
there are many configuration parameters and request specifics at play here!
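
A minimal sketch of that idea (assuming a plain forward proxy without
SslBump; untested):

  # CONNECT is the standard method ACL from the default squid.conf
  acl CONNECT method CONNECT
  # -n disables DNS lookups during dstdomain evaluation
  acl allowed_connect dstdomain -n .example.com
  http_access deny CONNECT !allowed_connect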




The docs say that

"The ACL computes server name(s) using such information sources as CONNECT request 
URI, TLS client SNI, and TLS server certificate subject (CN and SubjectAltName). The 
computed server name(s) usually change with each SslBump step"


I find this concerning, because I assume the client could perform a
request with an IP, and a forged SNI name that passes the acl.
So I would like to only allow requests that declare FQDN hostname, and
reject IP hostnames.


You probably can use dstdom_regex to reject IP-based hostnames, but 
it may not be easy to do it reliably using regular expressions because 
IP addresses come in many forms. Squid is probably missing a 
"dst_is_ip" or similar ACL(s) to make such checks reliable.




And, only perform validation against the CONNECT request URI.


See above regarding using "dstdomain -n" for this.


HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Square Bracket in LogFormat

2024-10-25 Thread Alex Rousskov

On 2024-10-25 14:28, GM Test wrote:


I'm not sure if this is the right place to ask this question


Yes, it is.

but in the 
*logformat* command, I cannot seem to work out what the square bracket 
is for?


When used at the beginning of a logformat %code name, a single square 
bracket character specifies a particular encoding algorithm for that 
%code's values. To find documentation for that encoding, search for the 
words "Custom Squid encoding where percent" in recent 
squid.conf.documented or at


http://www.squid-cache.org/Doc/config/logformat
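
For example, the "[" appears in the default "squid" logformat definition
(quoted from memory; double-check against your squid.conf.documented):

  # %[un logs the user name with Squid's custom percent-based encoding;
  # a plain %un would log the same value without that encoding.
  logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt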

HTH,

Alex.



For example:

logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %...

http://www.squid-cache.org/Doc/config/logformat/



Would anyone be able to tell me?

Many thanks,
Ian

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] proxy_auth_regex

2024-10-25 Thread Alex Rousskov

On 2024-10-25 13:19, Piana, Josh wrote:


2024/10/25 11:49:18 kid1| ERROR: Negotiate Authentication validating
user. Result: {result=BH, notes={message: gss_acquire_cred() failed:
No credentials were supplied, or the credentials were unavailable or
inaccessible. No principal in keytab matches desired name; }}


Most likely, this is the error you need to fix first. I know almost 
nothing about Kerberos, but it looks like your authentication helper 
and/or its environment is not configured correctly.


Others may be able to guide you from here, but that BH (i.e. Broken 
Helper) problem may not be specific to any HTTP(S) transaction that 
Squid is handling. If you can test your authentication helper in 
isolation by starting it from the command line and feeding it helper 
commands, do that.
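
For example (a hedged sketch; helper paths and file names depend on your
build and configuration), a Basic authentication helper can be exercised
like this:

  # Feed one "username password" pair per line; the helper answers OK or ERR.
  echo "testuser testpassword" | /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd

Negotiate/Kerberos helpers speak a token-based protocol, so manual testing
is harder; for the keytab error above, inspecting the keytab with
"klist -kt /etc/squid/squid.keytab" (path is an assumption) and confirming
it contains an HTTP/<proxy-fqdn> principal may be a more practical first step.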


Alex.



-Original Message-
From: Alex Rousskov 
Sent: Thursday, October 24, 2024 4:46 PM
To: Piana, Josh ; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] proxy_auth_regex

Caution: This email originated from outside of Hexcel. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.


On 2024-10-24 16:23, Piana, Josh wrote:


 From what I can tell, squid does not receive a good username. When I check the 
access logs, I receive something like this:



24/Oct/2024:16:01:08 -0400.334 10.46.49.190 TCP_DENIED/407 7821 CONNECT
www.google.com:443 - \ HIER_NONE/- text/html ERR_CACHE_ACCESS_DENIED/-


Please check CONNECT request headers in packet captures or cache.log with 
debug_options set to ALL,2. If client does not send any user credentials to 
Squid, then Squid should request them (with a 407 CONNECT response containing 
appropriate headers).



In regards to the TLS connectors vs HTTP CONNECT requests ...

  > I'm not sure if that is the case

Does the problematic test transaction arrive at http_port or https_port?


  > is "proxy_auth_regex" not compatible with certain things?

Nearly every Squid configuration option is not compatible with certain things.


  > How would you recommend I test plain http traffic?

I would start with

  curl --verbose ... http://example.com/

or a similar request (add proxy/other options instead of "..." as needed).

Alex.




However, I think some of my log information may be missing. I believe you mentioned it 
before, so I'll show you what I'm using for our custom log directive. Maybe we can fix 
this too? What I originally tried to do with this was just get "human readable" 
time stamps:
logformat custom %tl.%03tu %>a %Ss/%03>Hs %
Sent: Thursday, October 24, 2024 4:13 PM
To: Piana, Josh ;
squid-users@lists.squid-cache.org
Subject: Re: [squid-users] proxy_auth_regex

Caution: This email originated from outside of Hexcel. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.


On 2024-10-24 15:53, Piana, Josh wrote:

Hey Squid users,

Running into an issue I’m trying to figure out.

We have a few acl directives using “proxy_auth_regex –i” and when I
have these active, it blocks any proxy connection with an HTTP 407
error, according to the logs.

Here’s an example:

# block certain user IDs from using proxy server

#acl block_user proxy_auth_regex -i "/etc/squid/block_user"

#http_access deny block_user

What’s supposed to happen with this ACL, is that any username we have
on that list is to be blocked from internet access. But it seems to
be blocking known good usernames too. I’m not sure where to go from
here


After asking for one with a 407 response, does Squid ever receive a "good 
username" from the client? Do you see a username in client HTTP request headers or 
access.log records containing %un field?

Perhaps the client refuses to authenticate its requests because Squid 
intercepts TLS client connections rather than receiving HTTP CONNECT requests 
from the client? Have you tested this with plain text traffic?

Alex.



we would like to use these ACL’s but for right now I have these rules
commented out.

Here's a few other rules we have that have the same issue:

# executable blocking

# reference this list for extensions to block

acl exec_files url_regex -i "/etc/squid/exec_files"

# ignore these usernames from being blocked

#acl exec_users proxy_auth_regex -i "/etc/squid/exec_users"

# combine the rules

#http_access deny !bad_exception_urls !exec_users exec_files

#deny_info ERR_BLOCK_TYPE exec_files

   From what you can see above, we have “acl exec_files url_regex -i
/etc/squid/exec_files" uncommented, but

Re: [squid-users] proxy_auth_regex

2024-10-24 Thread Alex Rousskov

On 2024-10-24 16:23, Piana, Josh wrote:


From what I can tell, squid does not receive a good username. When I check the 
access logs, I receive something like this:



24/Oct/2024:16:01:08 -0400.334 10.46.49.190 TCP_DENIED/407 7821 CONNECT 
www.google.com:443 - \ HIER_NONE/- text/html ERR_CACHE_ACCESS_DENIED/-


Please check CONNECT request headers in packet captures or cache.log 
with debug_options set to ALL,2. If client does not send any user 
credentials to Squid, then Squid should request them (with a 407 CONNECT 
response containing appropriate headers).




In regards to the TLS connectors vs HTTP CONNECT requests ...

> I'm not sure if that is the case

Does the problematic test transaction arrive at http_port or https_port?


> is "proxy_auth_regex" not compatible with certain things?

Nearly every Squid configuration option is not compatible with certain 
things.



> How would you recommend I test plain http traffic?

I would start with

curl --verbose ... http://example.com/

or a similar request (add proxy/other options instead of "..." as needed).
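
For instance (a sketch; the proxy host name and credentials below are
placeholders):

  # Plain-HTTP request through the proxy, supplying proxy credentials
  curl --verbose --proxy http://proxy.example.net:3128 \
       --proxy-user testuser:testpassword http://example.com/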

Alex.




However, I think some of my log information may be missing. I believe you mentioned it 
before, so I'll show you what I'm using for our custom log directive. Maybe we can fix 
this too? What I originally tried to do with this was just get "human readable" 
time stamps:
logformat custom %tl.%03tu %>a %Ss/%03>Hs %
Sent: Thursday, October 24, 2024 4:13 PM
To: Piana, Josh ; squid-users@lists.squid-cache.org
Subject: Re: [squid-users] proxy_auth_regex

Caution: This email originated from outside of Hexcel. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.


On 2024-10-24 15:53, Piana, Josh wrote:

Hey Squid users,

Running into an issue I’m trying to figure out.

We have a few acl directives using “proxy_auth_regex –i” and when I
have these active, it blocks any proxy connection with an HTTP 407
error, according to the logs.

Here’s an example:

# block certain user IDs from using proxy server

#acl block_user proxy_auth_regex -i "/etc/squid/block_user"

#http_access deny block_user

What’s supposed to happen with this ACL, is that any username we have
on that list is to be blocked from internet access. But it seems to be
blocking known good usernames too. I’m not sure where to go from here


After asking for one with a 407 response, does Squid ever receive a "good 
username" from the client? Do you see a username in client HTTP request headers or 
access.log records containing %un field?

Perhaps the client refuses to authenticate its requests because Squid 
intercepts TLS client connections rather than receiving HTTP CONNECT requests 
from the client? Have you tested this with plain text traffic?

Alex.



we would like to use these ACL’s but for right now I have these rules
commented out.

Here's a few other rules we have that have the same issue:

# executable blocking

# reference this list for extensions to block

acl exec_files url_regex -i "/etc/squid/exec_files"

# ignore these usernames from being blocked

#acl exec_users proxy_auth_regex -i "/etc/squid/exec_users"

# combine the rules

#http_access deny !bad_exception_urls !exec_users exec_files

#deny_info ERR_BLOCK_TYPE exec_files

  From what you can see above, we have “acl exec_files url_regex -i
/etc/squid/exec_files" uncommented, but it’s not active because the
“http_access directive” had to be commented out because it includes
the other statements that include “proxy_auth_regex –i” which block
all internet access as well.


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] proxy_auth_regex

2024-10-24 Thread Alex Rousskov

On 2024-10-24 15:53, Piana, Josh wrote:

Hey Squid users,

Running into an issue I’m trying to figure out.

We have a few acl directives using “proxy_auth_regex –i” and when I have 
these active, it blocks any proxy connection with an HTTP 407 error, 
according to the logs.


Here’s an example:

# block certain user IDs from using proxy server

#acl block_user proxy_auth_regex -i "/etc/squid/block_user"

#http_access deny block_user

What’s supposed to happen with this ACL, is that any username we have on 
that list is to be blocked from internet access. But it seems to be 
blocking known good usernames too. I’m not sure where to go from here


After asking for one with a 407 response, does Squid ever receive a 
"good username" from the client? Do you see a username in client HTTP 
request headers or access.log records containing %un field?


Perhaps the client refuses to authenticate its requests because Squid 
intercepts TLS client connections rather than receiving HTTP CONNECT 
requests from the client? Have you tested this with plain text traffic?


Alex.


we would like to use these ACL’s but for right now I have these rules 
commented out.


Here's a few other rules we have that have the same issue:

# executable blocking

# reference this list for extensions to block

acl exec_files url_regex -i "/etc/squid/exec_files"

# ignore these usernames from being blocked

#acl exec_users proxy_auth_regex -i "/etc/squid/exec_users"

# combine the rules

#http_access deny !bad_exception_urls !exec_users exec_files

#deny_info ERR_BLOCK_TYPE exec_files

 From what you can see above, we have “acl exec_files url_regex -i 
/etc/squid/exec_files" uncommented, but it’s not active because the 
“http_access directive” had to be commented out because it includes the 
other statements that include “proxy_auth_regex –i” which block all 
internet access as well.



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 6.10 SSL-Bump Woes

2024-10-11 Thread Alex Rousskov

On 2024-10-10 20:48, Jonathan Lee wrote:

miss means it stored items


Just to correct a misunderstanding: A cache miss does _not_ imply that 
Squid stored the response.


Alex.



On Oct 10, 2024, at 15:27, Bryan Seitz  wrote:


I removed the header mods and changed the refresh pattern to:

refresh_pattern .               15      20%     1800    override-expire ignore-no-cache ignore-no-store ignore-private


And I always get TCP_MISS.  Any other thoughts?

Thanks!

On Thu, Oct 10, 2024 at 12:35 PM Alex Rousskov wrote:


On 2024-10-09 15:40, Bryan Seitz wrote:

 > SSL-Bump Woes

AFAICT, the problem you are trying to solve is not caused by SslBump.


 > reply_header_access Cache-Control deny all
 > reply_header_add Cache-Control  "public, max-age=1800"

The above directives are applied to responses that Squid sends to
clients. These post-cache response modification directives have no
effect on Squid response caching decisions (which are done earlier,
pre-cache, while looking at the virgin or adapted response
received from
the origin server of cache_peer).

FWIW, this caveat is documented in reply_header_add description, but
documentation improvements are welcome:

> This option adds header fields to outgoing HTTP responses (i.e.,
response
> headers delivered by Squid to the client). This option has no
effect on
> cache hit detection. The equivalent adaptation vectoring point in
> ICAP terminology is post-cache RESPMOD.


To allow Squid to violate HTTP caching rules when deciding whether
to cache a response, see refresh_pattern options (e.g., "ignore-private").
http://www.squid-cache.org/Doc/config/refresh_pattern/


HTH,

Alex.


> I have the following configuration:
>
> http_port 3128 ssl-bump generate-host-certificates=on
> tls-cert=/etc/squid/ssl/myCA.pem
> ssl_bump bump all
>
> # BMCs return Cache-Control: private
> reply_header_access Cache-Control deny all
> reply_header_add Cache-Control  "public, max-age=1800"
>
> follow_x_forwarded_for allow all
> http_access allow all
> include /etc/squid/conf.d/*.conf
> host_verify_strict off
> tls_outgoing_options min-version=1.0
> flags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN
> sslproxy_cert_error allow all
>
> sslcrtd_program /usr/lib/squid/security_file_certgen -s
> /var/spool/squid/ssl_db -M 4MB
> sslcrtd_children 5
>
> cache_mem 8192 MB
> cache_dir rock /cm/squid/squid 8192
>
> buffered_logs on
> access_log daemon:/var/log/squid/access.log logformat=squid
> logfile_daemon /usr/lib/squid/log_file_daemon
> cache_store_log daemon:/var/log/squid/store.log
> log_mime_hdrs on
> coredump_dir /var/spool/squid
> shutdown_lifetime 2 seconds
> max_filedesc 4096
> workers 4
>
>
> A curl will note the resource is stale (with new host), but I
never get
> a cache hit on subsequent retries:
>
> Store log:
>
> 1728502393.992 RELEASE -1 
02003A632F000300  200
> 1728502382        -1        -1 application/json 1182/1182 GET
>

https://10.170.31.77/redfish/v1/Oem/Supermicro/HGX_H100/Systems/HGX_Baseboard_0/Processors/GPU_SXM_4/ProcessorMetrics
> 1728502395.674 RELEASE -1 
02003B632F000200  200
> 1728502384        -1        -1 application/json 1182/1182 GET
>

https://10.170.31.77/redfish/v1/Oem/Supermicro/HGX_H100/Systems/HGX_Baseboard_0/Processors/GPU_SXM_4/ProcessorMetrics
> 1728502408.317 RELEASE 00 00056924
04003C632F000100  200
> 1728420588        -1 1728422388 application/json 1189/1189 GET
>

https://10.170.31.81/redfish/v1/Oem/Supermicro/HGX_H100/Systems/HGX_Baseboard_0/Processors/GPU_SXM_4/ProcessorMetrics

Re: [squid-users] Squid 6.10 SSL-Bump Woes

2024-10-10 Thread Alex Rousskov

On 2024-10-09 15:40, Bryan Seitz wrote:

> SSL-Bump Woes

AFAICT, the problem you are trying to solve is not caused by SslBump.


> reply_header_access Cache-Control deny all
> reply_header_add Cache-Control  "public, max-age=1800"

The above directives are applied to responses that Squid sends to 
clients. These post-cache response modification directives have no 
effect on Squid response caching decisions (which are done earlier, 
pre-cache, while looking at the virgin or adapted response received from 
the origin server or cache_peer).


FWIW, this caveat is documented in reply_header_add description, but 
documentation improvements are welcome:



This option adds header fields to outgoing HTTP responses (i.e., response
headers delivered by Squid to the client). This option has no effect on
cache hit detection. The equivalent adaptation vectoring point in
ICAP terminology is post-cache RESPMOD.



To allow Squid to violate HTTP caching rules when deciding whether to 
cache a response, see refresh_pattern options (e.g., "ignore-private").

http://www.squid-cache.org/Doc/config/refresh_pattern/
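
A hedged illustration (the pattern and lifetimes are arbitrary; tune them
to the responses in question):

  # Cache matching responses for up to 30 minutes even though the origin
  # marks them Cache-Control: private and sets its own expiry.
  refresh_pattern -i /redfish/ 30 20% 30 ignore-private override-expire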


HTH,

Alex.



I have the following configuration:

http_port 3128 ssl-bump generate-host-certificates=on 
tls-cert=/etc/squid/ssl/myCA.pem

ssl_bump bump all

# BMCs return Cache-Control: private
reply_header_access Cache-Control deny all
reply_header_add Cache-Control  "public, max-age=1800"

follow_x_forwarded_for allow all
http_access allow all
include /etc/squid/conf.d/*.conf
host_verify_strict off
tls_outgoing_options min-version=1.0 
flags=DONT_VERIFY_PEER,DONT_VERIFY_DOMAIN

sslproxy_cert_error allow all

sslcrtd_program /usr/lib/squid/security_file_certgen -s 
/var/spool/squid/ssl_db -M 4MB

sslcrtd_children 5

cache_mem 8192 MB
cache_dir rock /cm/squid/squid 8192

buffered_logs on
access_log daemon:/var/log/squid/access.log logformat=squid
logfile_daemon /usr/lib/squid/log_file_daemon
cache_store_log daemon:/var/log/squid/store.log
log_mime_hdrs on
coredump_dir /var/spool/squid
shutdown_lifetime 2 seconds
max_filedesc 4096
workers 4


A curl will note the resource is stale (with new host), but I never get 
a cache hit on subsequent retries:


Store log:

1728502393.992 RELEASE -1  02003A632F000300  200 
1728502382        -1        -1 application/json 1182/1182 GET 
https://10.170.31.77/redfish/v1/Oem/Supermicro/HGX_H100/Systems/HGX_Baseboard_0/Processors/GPU_SXM_4/ProcessorMetrics 
1728502395.674 RELEASE -1  02003B632F000200  200 
1728502384        -1        -1 application/json 1182/1182 GET 
https://10.170.31.77/redfish/v1/Oem/Supermicro/HGX_H100/Systems/HGX_Baseboard_0/Processors/GPU_SXM_4/ProcessorMetrics 
1728502408.317 RELEASE 00 00056924 04003C632F000100  200 
1728420588        -1 1728422388 application/json 1189/1189 GET 
https://10.170.31.81/redfish/v1/Oem/Supermicro/HGX_H100/Systems/HGX_Baseboard_0/Processors/GPU_SXM_4/ProcessorMetrics 
1728502408.318 RELEASE -1  03003C632F000100  200 
1728502404        -1        -1 application/json 1179/1179 GET 
https://10.170.31.81/redfish/v1/Oem/Supermicro/HGX_H100/Systems/HGX_Baseboard_0/Processors/GPU_SXM_4/ProcessorMetrics 
1728502417.161 RELEASE -1  05003C632F000100  200 
1728502413        -1        -1 application/json 1179/1179 GET 
https://10.170.31.81/redfish/v1/Oem/Supermicro/HGX_H100/Systems/HGX_Baseboard_0/Processors/GPU_SXM_4/ProcessorMetrics 


Response headers:

HTTP/1.1 200 Connection established

HTTP/1.1 200 OK
Link: >; rel=describedby

Allow: GET
Content-Length: 1179
Content-Type: application/json; charset=UTF-8
Strict-Transport-Security: max-age=31536000; includeSubdomains
X-XSS-Protection: 1; mode=block
Content-Security-Policy: default-src 'self';connect-src 'self' ws: 
wss:;frame-src 'self';img-src 'self' data:;object-src 'self';font-src 
'self' data:;script-src 'self' 'unsafe-inline' 'unsafe-eval';style-src 
'self' 'unsafe-inline';worker-src 'self' blob:;

X-Frame-Options: SAMEORIGIN
X-Content-Type-Options: nosniff
OData-Version: 4.0
Date: Wed, 09 Oct 2024 19:35:50 GMT
Cache-Status: squid;detail=mismatch
Via: 1.1 squid (squid/6.10)
Connection: keep-alive
Cache-Control: public, max-age=1800

If I use a cache peer with MITMPROXY, squid will c

Re: [squid-users] Questions about Squid configuration

2024-10-03 Thread Alex Rousskov

On 2024-09-25 01:57, にば wrote:


We then added the following settings that were in the existing Squid proxy

# SSL_BUMP
acl allowed_https_sites ssl::server_name "/etc/squid/whitelist"
acl allowed_https_sites ssl::server_name "/etc/squid/whitelist_transparent"
acl allowed_https_sites ssl::server_name "/etc/squid/whitelist_https"
acl allowed_https_sites ssl::server_name
"/etc/squid/whitelist_transparent_https"
sslcrtd_program [sslcrtd-program-setting]

acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3

ssl_bump peek step1 all
ssl_bump peek step2 allowed_https_sites
ssl_bump splice step3 allowed_https_sites
ssl_bump terminate step2 all

Then I verified the 4 patterns again and all of them gave me 403 Forbidden...
Even the following pattern which is allowed in whitelist.

1. successful communication of a valid request to an allowed site
curl https://pypi.org/ -v --cacert squid.crt -k


After checking access-transparent.log and cache.log, it appears that
pypi.org is comparing inspections by IP and not by domain.


AFAICT, you expect allowed_https_sites ACL to match the above 
intercepted pypi request and, hence, to be allowed by the following 
http_access rule:


http_access allow allowed_https_sites https_port

Since you are intercepting traffic, at SslBump step1, Squid only sees 
TCP-level information extracted from the intercepted connection. There 
are no domain names. There is no HTTP-level information at that time. 
See "Step 1:" at https://wiki.squid-cache.org/Features/SslPeekAndSplice


When evaluating that dstdomain ACL (allowed_https_sites), Squid will 
attempt to convert the intended destination IP address of the 
intercepted TCP connection to a domain name and will compare the result 
of that conversion to allowed_https_sites entries. Evidently, either 
that reverse DNS lookup fails in your case, or the result (is not 
"pypi.org" and) does not match any allowed_https_sites names.


Since no other "http_access allow" rule matches the fake CONNECT at 
SslBump step1, the transaction is denied by the first "http_access deny 
all" rule (BTW, you can remove the second one -- it is unreachable).




How do I modify the configuration to allow this correctly by domain?


It is difficult for me to answer this loaded question in a meaningful 
way. At SslBump step1, Squid gets an IP address. Squid has to either 
allow or deny that request based on available information. You have 
several options, including:


A. During SslBump step1, deny access based on IP addresses by adding 
appropriate dst ACLs that would be checked by http_access rules.


B. During SslBump step1, allow fake CONNECT requests to any IP address 
while making sure that you deny what needs to be denied at step2 or step3.
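
A minimal sketch of option A (the network below is a made-up example; the
"step1" and "https_port" ACLs are the ones already defined in your config):

  # Allow fake CONNECTs at step1 only towards approved destination networks;
  # everything else falls through to the existing "http_access deny all".
  acl allowed_dst_ips dst 192.0.2.0/24
  http_access allow step1 https_port allowed_dst_ips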



Needless to say, for an http_access rule to only apply during SslBump 
step1 it has to have an "SslBump1" ACL name. For example, the following 
rule allows any(*) fake CONNECT request during SslBump step1:


# Allow all fake CONNECTs on intercepted connections until
# we have a chance to extract TLS SNI information (step2).
http_access allow step1 https_port


(*) any request that was not denied by earlier rules, of course



Also, to begin with, these settings follow the existing squid proxy
created by my predecessor, so I don't know what they are for...
What are the disadvantages of removing these settings?


If you remove all ssl_bump rules, then Squid will not examine traffic 
inside intercepted TCP/TLS connections. Thus, TLS clients will be able 
to request/do anything (if their intercepted TCP connections are allowed 
using TCP-level information). Whether that effect is a "disadvantage" is 
your call.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid + c-icap + SquidClamav + ClamAV

2024-10-03 Thread Alex Rousskov

On 2024-10-03 11:10, Andrea Venturoli wrote:

> Out of 10 installations, ... on one it's very frequent.

> Any idea on what to check or try? ... Any way to get better logs?

Since the problem is frequent on that one host, I recommend privately 
sharing[1] a pointer to compressed debugging cache.log collected while 
the problem was happening. Either set debug_options to ALL,9 or use a 
pair of "squid -k debug" commands to start/stop debugging. More hints at

https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction

It is enough to record one or two problematic cases.
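
For example (a sketch; adjust the cache.log path to your installation):

  squid -k debug    # first call: raise cache.log debugging to full (ALL,9)
  # ... reproduce one or two failing requests ...
  squid -k debug    # second call: restore the configured debug level
  xz -k /var/log/squid/cache.log   # then compress and share the result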

Please make sure to patch your Squid with 6567eac.patch you reference 
below before collecting those logs -- we do not want to analyze a 
known/fixed bug.



HTH,

Alex.

[1]: You can share that sample with me or anybody you trust (who can 
analyze Squid debugging logs and is willing to help you).




I've got several machines with the following software:
FreeBSD 13.3, 13.4 or 14.1
Squid 6.10
c-icap 0.5.12
SquidClamav 7.3
ClamAV 1.3.2

This combination usually works pretty well, but it occasionally chokes, 
with the client seeing:

ICAP protocol error
The system returned: [No Error]


Out of 10 installations, 7 work perfectly (at least as far as I know), a 
couple have very seldom shown this error, but on one it's very frequent.




I checked the logs:
_ I think Squid's access log shows a NONE_NONE/500 code at the time of 
the error: I personally can't get anything useful from this, but maybe 
someone with better insight can tell?

_ cache.log does not seem to show anything related;
_ neither c-icap's, nor ClamAV's logs show anything useful.



I already applied the following patch:
https://github.com/measurement-factory/squid/commit/6567eac.patch

On one host that had always worked perfectly but suddenly started 
giving this trouble several times per hour, it almost solved the problem 
(i.e. I think the error was seen once in the last few months).

However, it didn't help on the most problematic host.



Any idea on what to check or try? Any other known bug? Any way to get 
better logs?



  bye & Thanks
 av.
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid + ecap + clamav

2024-10-03 Thread Alex Rousskov

On 2024-10-03 10:12, Andrea Venturoli wrote:

On 10/2/24 23:30, Alex Rousskov wrote:
Disadvantages of using eCAP+ClamAV adapter include being dependent on 
a relatively old libecap and ClamAV eCAP adapter implementation.


I got it all wrong then... I thought ICAP was older and eCAP was meant 
to replace it.


ICAP _is_ older than eCAP.

eCAP goals did not include ICAP replacement IIRC. eCAP solves certain 
ICAP performance problems by replacing communication over TCP 
connections with function calls. Some environments are better off with 
eCAP; others with ICAP. There are many tradeoffs. I wish we had time to 
improve eCAP, but it is being successfully used in some current 
production environments despite its outdated code.



Cheers,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid + ecap + clamav

2024-10-02 Thread Alex Rousskov

On 2024-09-29 12:40, Andrea Venturoli wrote:

I've been using Squid + C-icap + SquidClamAV + ClamAV for a long time in 
order to filter web content.
However this has lately been troublesome, leading to occasional 
hard-to-diagnose temporary failures ("ICAP protocol error").



So I'm pondering moving from ICAP to eCAP, like described here:

https://wiki.squid-cache.org/ConfigExamples/ContentAdaptation/EcapForClamav


eCAP and ICAP options offer different tradeoffs (i.e. have a different 
set of pros and cons).



According to that page there are no disadvantages wrt the previous 
config.


Disadvantages of using eCAP+ClamAV adapter include being dependent on a 
relatively old libecap and ClamAV eCAP adapter implementation. If things 
go wrong or if you want new features, fewer folks would be able to help 
you compared to a better known ICAP protocol and (more recently updated) 
c-icap server.



However, I tried compiling the eCAP ClamAV adapter, but the code looks 
very outdated WRT current C++ standards


Yes, both libecap and eCAP ClamAV adapter were last updated in 2015.


and possibly 
also to ClamAV (giving compiling errors due to missing symbols).


If libclamav API has changed in backward incompatible ways, then eCAP 
ClamAV adapter source code would have to be updated to reflect those API 
changes. Unfortunately, I have not recorded what ClamAV version we were 
using when eCAP ClamAV adapter was developed.




Is this info still relevant?

> Is this something worth investigating or is it outdated/deprecated?


I would treat that ConfigExamples page as an old configuration example 
that probably worked for the author when that page was written years 
ago. It can be used as a starting point for getting things working in 
your environment. Nothing more, nothing less.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Could we have variables in squid conf file ?

2024-10-01 Thread Alex Rousskov

On 2024-10-01 11:49, Dr.X wrote:


Just wondering if I can have in squid.conf like :

export FRONTEND='1.2.3.4'
http_port {FRONTEND}:3128

But the way above did not work and does not seem to be recognized by squid.

My question is: is it possible to define a variable, give it a value (a 
string or a number), and then use it in another place in the squid.conf file?



No, squid.conf does not support such variables (yet?). Needless to say, 
you can emulate that functionality using a custom preprocessor.
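
For example, here is a minimal, untested sketch of such a preprocessor 
using sed; the template file name and the @FRONTEND@ placeholder are 
made up for illustration:

  # squid.conf.template contains lines like:  http_port @FRONTEND@:3128
  FRONTEND=1.2.3.4
  sed "s/@FRONTEND@/$FRONTEND/g" squid.conf.template > /etc/squid/squid.conf
  squid -k parse    # sanity-check the generated configuration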



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Issues with Squid Listening on 254 IP Addresses

2024-09-30 Thread Alex Rousskov

On 2024-09-30 09:08, Alexis DAVEAU wrote:

wget http://www.squid-cache.org/Versions/v5/squid-5.2.tar.gz 
tar -xzf squid-5.2.tar.gz

cd squid-5.2
export CXXFLAGS="-DMAXTCPLISTENPORTS=254"
./configure --prefix=/usr --localstatedir=/var 
--libexecdir=/usr/lib/squid --datadir=/usr/share/squid \
--sysconfdir=/etc/squid --with-logdir=/var/log/squid 
--with-pidfile=/var/run/squid.pid \

--enable-ssl --enable-ssl-crtd --enable-auth --enable-cache-digests \
--enable-removal-policies="lru,heap" --enable-follow-x-forwarded-for
make
sudo make install
But again, after running squid -v, the custom flag doesn't appear, and 
the limit for the number of listening IP addresses is still in place.



FWIW, Squid v6 builds as expected in my tests:


$ ./src/squid -v
Squid Cache: Version 6.11-VCS
Service Name: squid
configure options:  'CXXFLAGS=-DMAXTCPLISTENPORTS=254'


And I can also see the right -D option being passed to individual g++ 
commands during "make".



I also get the expected result with Squid the latest (unsupported) v5:


$ ./src/squid -v
Squid Cache: Version 5.10-VCS
Service Name: squid
configure options:  'CXXFLAGS=-DMAXTCPLISTENPORTS=254'



I tried to use your ./configure options, but they do not work for me:


configure: error: You need ssl gatewaying support to enable ssl-crtd feature. 
Try to use --with-openssl.


Hint: Replace ancient "--enable-ssl" with "--with-openssl".
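
For example, a minimal sketch of the adjusted build, reusing a few of 
your ./configure options (keep your other options as they are):

  ./configure CXXFLAGS="-DMAXTCPLISTENPORTS=254" --prefix=/usr --with-openssl --enable-ssl-crtd
  make
  sudo make install
  squid -v    # -DMAXTCPLISTENPORTS=254 should now appear under "configure options"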


Is it possible that a ./configure failure was missed in your build 
sequence? Or perhaps you are building one Squid binary but testing another?



HTH,

Alex.


I’ve tested with various versions of Squid, ranging from 4.8 to 5.9, but 
none of them seem to apply the custom flag for increasing the number of 
listening addresses/ports.


Questions:
* How can I confirm that Squid is applying the MAXTCPLISTENPORTS value? 
Is there a way to force Squid to recognize this parameter?
* Is there an alternative method to configure Squid to handle 254 IP 
addresses without recompiling? Am I missing a critical step in the build 
process?
* Do you have any recommendations to optimize the configuration for 
managing an entire /24 prefix with 254 addresses?
Any advice or suggestions would be greatly appreciated! I’ve done 
extensive research on the issue, but I haven’t found a solution yet.


Thanks in advance for your help!

Best regards,
Alexis

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid appears to be ignoring url_rewrite_program

2024-09-17 Thread Alex Rousskov

On 2024-09-17 10:43, Martin A. Brooks wrote:

On 2024-09-17 15:13, Alex Rousskov wrote:
What makes you think that CONNECT requests are not sent to the 
rewriter? In my quick-and-dirty tests, Squid does send CONNECT request 
targets to the URL rewriter program and honors rewriter's 
rewrite-url=... response. For example, I see the new target logged to 
access.log.


Because the target is not being changed, whereas if I force http, it is.


OK, "target is not being changed" is not necessarily the same as "bypass 
the url rewriter" or "ignoring url_rewrite_program" or "not using the 
rewriter program"; there are still several open questions:


1. Does Squid ask rewriter to rewrite CONNECT target?
2. If yes, does rewriter actually rewrite CONNECT target?
3. If yes, does Squid use that rewritten CONNECT target?
4. If yes, why is using rewritten CONNECT target not enough
   (to accomplish whatever you are trying to accomplish)?

Please keep in mind that we know almost nothing about what symptoms you 
observe and what exactly you are trying to achieve...


Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid appears to be ignoring url_rewrite_program

2024-09-17 Thread Alex Rousskov

On 2024-09-17 09:34, Martin A. Brooks wrote:
Proxied HTTPS requests use 
CONNECT and, for whatever reason, this appears to bypass the url 
rewriter.


What makes you think that CONNECT requests are not sent to the rewriter? 
In my quick-and-dirty tests, Squid does send CONNECT request targets to 
the URL rewriter program and honors rewriter's rewrite-url=... response. 
For example, I see the new target logged to access.log.
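
For reference, here is a minimal, untested sketch of a 
url_rewrite_program helper that rewrites one CONNECT target; the host 
names are placeholders, and the sketch assumes the default 
non-concurrent helper protocol:

  #!/bin/sh
  # each input line: URL followed by extras (client ip/fqdn, user, method, ...)
  while read -r url rest; do
    case "$url" in
      old.example.com:443) echo 'OK rewrite-url="new.example.com:443"' ;;
      *) echo 'ERR' ;;    # no change
    esac
  done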


Please keep in mind that changing CONNECT target has no effect on the 
received origin server response if the new target resolves to the same 
or equivalent IP address as the original target (and the port is 
unchanged): The origin server does not get a CONNECT request. It gets a 
TCP connection from Squid. The bytes on that connection come from the 
client (e.g., browser) rather than Squid! Those bytes contain original 
(not rewritten) TLS and HTTP details. By default, Squid works as a blind 
TCP tunnel when handling CONNECT requests.



I'm looking in to it some more but, given that a very large 
part of the world is HTTPS these days, it may be that I need to look at 
another option for this requirement.


If your rewriter helper is using url=... instead of rewrite-url=... 
responses, then please note that popular browsers do not honor redirect 
responses to their HTTP CONNECT requests -- they have chosen not to 
trust the proxy to make those kind of decisions.


Most likely, if you need to redirect https traffic, you have to bump(*) 
it; if you cannot bump in your environment, then you cannot redirect 
https using an HTTP proxy.


(*) See ssl_bump directive and the following URL, but keep in mind that 
bumping is naturally full of bad side effects and corner cases:

https://wiki.squid-cache.org/Features/SslPeekAndSplice


HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Looking for a solution to identify "unauthenticated" squid proxy users.

2024-09-17 Thread Alex Rousskov

On 2024-09-17 08:07, Xavier Lecluse wrote:

Hello, with the advice from Alex, we managed to add a custom field to the access.log, 
using an always matching "annotate_transaction" ACL.

We had to add the ACL on each line of our rulesets and the value inserted was 
the rule_name.
Then, by adding %{name}note in a custom logformat, we were able to display the 
rule matching each line in the access.log.


Glad you made it work! Someday, Squid will optionally add an 
"http_access(*) rule matched" annotation to all transactions, so that 
admins do not have to manually annotate all their rules.


(*) Similar breadcrumbs will be collected for other directives as well.

Alex.



- Mail original -
De: "Alex Rousskov" 
À: squid-users@lists.squid-cache.org
Envoyé: Lundi 2 Septembre 2024 22:38:44
Objet: Re: [squid-users] Looking for a solution to identify "unauthenticated" 
squid proxy users.

On 2024-09-02 15:00, Xavier Lecluse wrote:


I am facing a problem with my actual access.log configuration.
I use this logformat for the access.log :
"logformat timereadable %tl %un %Ss %>Hs %>a:%>p %st %rm %ru %mt %.
But I have some users who are not authenticated (because of incompatibility with their 
software) and then I don't have any information to differentiate which requests are made 
by each "user".

I tried to add <%et> <%ea> <%ul> <%ue>, without any success (the <%ul> just displays 
the same as <%un> in my case).

I am searching for a way to display a field which would help me to identify the 
requester.
For example, I use a rule file for each "user" in which several ACLs are 
defined. (squid/current/etc/current/rule/PXI_TESTPXI_P.conf)

Is there a way to use the "matching rule" file in the access log ?


Since many squid.conf directives are driven by ACLs, a typical
transaction often matches dozens of rules, explicit and implicit ones.
There is no %code that correctly guesses which matching rule should be
logged.

However, you can define an always-matching annotate_transaction ACL and
add it to any rule (or multiple rules). Specific or all transaction
annotations can then be logged (or sent to helpers, etc.) using %note
logformat code.

Untested example:

  acl markAsSpecial annotate_transaction category=special
  acl markAsBad annotate_transaction category=bad
  ...
  http_access allow goodClients
  http_access allow specialClients markAsSpecial
  http_access deny to_localhost markAsBad
  ...
  logformat timereadable %tl %note{category} %un %Ss ...


* annotate_transaction ACL type is documented at
http://www.squid-cache.org/Doc/config/acl/

* %note logformat code is documented at
http://www.squid-cache.org/Doc/config/logformat/



HTH,

Alex.



Actually, this is the log from an authenticated user :
Sep  2 17:08:32 FPVPXI2 squid[312387]: 02/Sep/2024:17:08:32 +0200 test 
TCP_TUNNEL 200 10.x.x.250:51994 6765 CONNECT www.google.com:443 - 5716 
FIRSTUP_PARENT 10.x.x.241 10.x.x.240 3128 326

And one from an unauthenticated user :
Sep  2 16:38:47 QFPVPXI2 squid[311234]: 02/Sep/2024:16:38:47 +0200 - TCP_TUNNEL 
200 10.x.x.242:22426 6726 CONNECT www.google.com:443 - 5718 FIRSTUP_PARENT 
10.x.x.241 10.x.x.240 3128 249



Regards,

Xavier
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Unable to access internal resources via hostname

2024-09-16 Thread Alex Rousskov

On 2024-09-16 14:06, Piana, Josh wrote:


http_access deny !localnet



This denies HTTP traffic to what I defined as "localnet". ... Because
this argument is near the bottom of my config, won't all other ACL's
and lists apply before getting to this deny all rule?


Others have already guided you further, but I will try to correct the 
above misunderstanding:


* The order of http_access directives matters: Squid evaluates 
http_access rules in their squid.conf order and applies the first 
http_access rule that has a matching condition. All subsequent 
http_access rules are ignored (for that transaction). Here, "to apply" 
means to allow or deny the transaction (depending on the matching rule 
action) and "matching condition" means that all ACLs named in that 
http_access rule have matched the transaction.


* The order of acl directives only matters in special cases for ACLs 
with side effects. For now, you can probably ignore that order.


* For a given ACL name "foo", it is usually best to keep all "acl foo" 
directives together and above the first http_access (or any other 
ACL-using directive) line that uses ACL named "foo".




I thought every statement needs a contradicting statement as well.


No, it does not: N+1 http_access rule is only evaluated if the 
first/earlier N http_access rules did _not_ match (see the first bullet 
above), so you effectively get that "contradicting" or "else" effect 
automatically as Squid moves through http_access rules...


Said that, it is often a good idea to use "http_access deny all" as the 
very last explicit http_access rule to avoid relying on an ever-changing 
default/implicit action that is applied when none of the explicit rules 
matched. That implicit default is "ever-changing" because it depends on 
the last explicit http_access rule action (which, naturally, may change 
as folks update their rules).
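
For illustration only (not a complete ruleset), a minimal sketch of such 
an explicit tail:

  # ... your allow rules above ...
  http_access deny all    # explicit last rule; the implicit default never applies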



FWIW, the following FAQ entry covers the same concepts:
https://wiki.squid-cache.org/SquidFaq/SquidAcl


HTH,

Alex.



-Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Monday, September 16, 2024 10:42 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Unable to access internal resources via hostname

Caution: This email originated from outside of Hexcel. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.


On 2024-09-16 09:58, Piana, Josh wrote:


I removed all of the special, custom ACL's and we still don't have internal to 
internal browsing via hostname.


FWIW, these first two http_access rules make all subsequent http_access rules 
irrelevant/unused because these two rules match all traffic:


http_access deny !localnet
http_access allow localnet


I did not look further, but the above combination is a sign that you interpret 
http_access rules differently than Squid does. Please make sure you understand 
why the two rules above make all subsequent http_access rules irrelevant/unused 
before adjusting your configuration further. Ask questions as needed.


If the primary problem persists after addressing this configuration problem, then my 
earlier (2024-09-04) recommendation stands: Please restate the primary problem (e.g., 
detail what "don't have browsing"
means in terms of the test transaction outcome) and share debugging log of that 
test transaction again.


HTH,

Alex.



So there's something wrong with either the order of the squid.conf or I'm missing some 
"allow" variable.

Please see below for my current config:

##

# Authentication
##


auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth
-k /etc/squid/HTTP.keytab -s
HTTP/arcgate2.ad.arc-tech@ad.arc-tech.com
auth_param negotiate children 10
auth_param negotiate keep_alive on
acl kerb-auth proxy_auth REQUIRED

##
 # Access control - shared/common ACL definitions
##
 #
--
-- # networks and hosts (by name or IP address)

acl src_self src 127.0.0.0/8
acl src_self src 10.46.11.69

acl to_localhost dst 10.46.11.69

acl localnet src 10.0.0.0/8
acl localnet src 172.0.0.0/8

acl local_dst_addr dst 10.0.0.0/8
acl local_dst_addr dst 172.0.0.0/8

#
--
--
# protocols (URL schemes)

acl proto_FTP proto FTP
acl proto_HTTP proto HTTP

#
--
--
# TCP port numbers

# TCP ports for ordinary HTTP
acl http_ports port 80

Re: [squid-users] Unable to access internal resources via hostname

2024-09-16 Thread Alex Rousskov

On 2024-09-16 09:58, Piana, Josh wrote:


I removed all of the special, custom ACL's and we still don't have internal to 
internal browsing via hostname.


FWIW, these first two http_access rules make all subsequent http_access 
rules irrelevant/unused because these two rules match all traffic:



http_access deny !localnet
http_access allow localnet


I did not look further, but the above combination is a sign that you 
interpret http_access rules differently than Squid does. Please make 
sure you understand why the two rules above make all subsequent 
http_access rules irrelevant/unused before adjusting your configuration 
further. Ask questions as needed.



If the primary problem persists after addressing this configuration 
problem, then my earlier (2024-09-04) recommendation stands: Please 
restate the primary problem (e.g., detail what "don't have browsing" 
means in terms of the test transaction outcome) and share debugging log 
of that test transaction again.



HTH,

Alex.



So there's something wrong with either the order of the squid.conf or I'm missing some 
"allow" variable.

Please see below for my current config:

##
# Authentication
##

auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth -k 
/etc/squid/HTTP.keytab -s HTTP/arcgate2.ad.arc-tech@ad.arc-tech.com
auth_param negotiate children 10
auth_param negotiate keep_alive on
acl kerb-auth proxy_auth REQUIRED

##
# Access control - shared/common ACL definitions
##
# 
# networks and hosts (by name or IP address)

acl src_self src 127.0.0.0/8
acl src_self src 10.46.11.69

acl to_localhost dst 10.46.11.69

acl localnet src 10.0.0.0/8
acl localnet src 172.0.0.0/8

acl local_dst_addr dst 10.0.0.0/8
acl local_dst_addr dst 172.0.0.0/8

# 
# protocols (URL schemes)

acl proto_FTP proto FTP
acl proto_HTTP proto HTTP

# 
# TCP port numbers

# TCP ports for ordinary HTTP
acl http_ports port 80   # standard HTTP
acl http_ports port 81   # common alternative
acl http_ports port 8001 # epson.com support sub-site
acl http_ports port 8080 # common alternative
acl http_ports port 88 8000    # ad-hoc services
acl http_ports port 1080 # SOCK frontend to HTTP service
acl http_ports port 21-22# http:// frontend to FTP service
acl http_ports port 443  # https:// URLs

# TCP ports for HTTP-over-SSL
acl Ssl_ports port 443   # standard HTTPS
acl Ssl_ports port 9571 # lexmark.com
acl Ssl_ports port 22 # SSH

# TCP ports for plain FTP command channel
acl ftp_ports port 21

# 
# HTTP methods (and pseudo-methods)

acl CONNECT method CONNECT

##
# Access control - general proxy
##

# define and allow new ACL for "Safe_ports"
acl Safe_Ports any-of http_ports Ssl_ports ftp_ports

# 
# basic deny rules

# deny anything not from the LAN
http_access deny !localnet

# allow localnet users
http_access allow localnet

# blocks self to self connections
http_access deny to_localhost

# deny unauthorized access and DoS attacks
http_access deny !Safe_ports
http_access deny CONNECT !Ssl_ports

# allow authenticated clients after all other ACL's
http_access allow kerb-auth

# deny any request we missed in the above
http_access deny all

# if no other ACL applies, allow
http_reply_access allow all

This is it. I commented out all other ACL's for allow/deny we had in place for 
our custom rules. Still unable to browse local via hostname, IP works fine 
again.

-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Saturday, September 7, 2024 10:51 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Unable to access internal resources via hostname

Caution: This email originated from outside of Hexcel. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.


On 6/09/24 03:56, Piana, Josh wrote:

Hello Amos,

While the comments did say that it was just the 10.46.11.0 range, I don't think there's 
any other ACL forcing that. I tried adding the two internal sites that are being 
blocked 

Re: [squid-users] Problem with 'delay_access' using acl external

2024-09-10 Thread Alex Rousskov

On 2024-09-10 13:54, Carlos André wrote:

My "delay_class" simple DON'T with if I use a acl external (helper - 
LDAP or winbind [ext_wbinfo_group_acl], same problem), delay_class work 
ok using a acl proxy_auth or acl src but nothing with a external.


I believe your configuration is suffering from two semi-independent 
problems:



Problem A:

External ACLs are so called "slow" or "asynchronous" ACLs. They should 
not be used together with directives that do not support "slow" ACLs. It 
is not explicitly documented, but delay_access directive does _not_ 
support slow ACLs AFAICT. It only supports "fast" ACLs.


N.B. Due to ACL caching side effects, using slow ACLs with directives 
that do not support them may appear to "work" in certain cases, but it 
is not supported and should not be relied upon.



I need to use an external ACL because I use groups to specify Internet 
speed/policy per user.


I recommend checking your Group_Internet ACL at http_access time instead 
of delay_access time; http_access directive supports slow ACLs and 
should be evaluated before delay_access is.


Use annotate_transaction or annotate_client ACLs to remember whether 
Group_Internet ACL has matched at http_access evaluation time. Use a 
"note" ACL to check whether those annotations have been set. The "note" 
ACL is a "fast" ACL. annotate_transaction documentation in 
squid.conf.documented has a relevant example. There are also potentially 
relevant examples in bug #4993 report (among others):

https://bugs.squid-cache.org/show_bug.cgi?id=4993
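
For example, an untested sketch of that approach (the ACL and annotation 
names below are made up; Group_Internet and the delay pools are from 
your configuration):

  acl markInternetGroup annotate_transaction inet_group=yes
  acl inInternetGroup note inet_group yes

  http_access allow Group_Internet markInternetGroup
  http_access deny all

  delay_access 1 allow inInternetGroup
  delay_access 1 deny all
  delay_access 2 allow all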


Problem B:

2024/09/10 14:30:28 kid1| WARNING: Group_Internet ACL is used in context 
without an ALE state. Assuming mismatch.


I have not checked carefully, but this could be a bug fixed in v6. The 
corresponding commit says "delay_pool_access lacked ... details beyond 
src/dst addresses".


Upgrade to v6+. If you are still getting a similar runtime WARNING, then 
there is another Squid bug that needs to be fixed.



HTH,

Alex.



Bellow there my sample squid.conf:


acl SSL_ports port 443 6443 8443 8080 8008
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access deny to_localhost

http_port 8080

cache_dir ufs /var/spool/squid 8192 32 128

coredump_dir /var/spool/squid

auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth -k 
/etc/squid/HTTP.keytab -s HTTP/ser...@realm.lan

auth_param negotiate children 20 startup=2 idle=2

external_acl_type AD ttl=360 children-startup=2 children-max=20 
children-idle=2 %LOGIN /usr/lib64/squid/ext_ldap_group_acl -Z -K -R -d 
-h 192.168.0.10 -b "dc=realm,dc=lan" -D 
"cn=squid,cn=Users,dc=realm,dc=lan" -w password1234 -f 
"(&(cn=%u)(memberof=cn=%g,cn=Users,dc=realm,dc=lan))"


acl kerb-auth proxy_auth REQUIRED

acl Group_Internet external AD Internet_Access
acl User proxy_auth car...@realm.lan
acl src_carlos_ip src 192.168.0.100

http_access allow Group_Internet # work!
http_access deny all


delay_pools 2
delay_class 1 2
delay_class 2 2

delay_parameters 1   4096000/4096000  2048000/2048000
delay_parameters 2   2048000/2048000   512000/512000

delay_access 1 allow Group_Internet  # won't work (Squid ignore it and 
pass to next delay_access)

#delay_access 1 allow User           # work!
#delay_access 1 allow src_carlos_ip  # work!
delay_access 1 deny all

delay_access 2 allow all
###







___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Questions about Squid configuration

2024-09-10 Thread Alex Rousskov

On 2024-09-05 04:58, にば wrote:


I took the advice I received, reviewed the verification details, and
verified again with the two recommended steps.
The new verification includes the following four patterns:
1. successful communication of a valid request to an allowed site
[command]
curl https://pypi.org/ -v --cacert squid.crt -k

2. communication of invalid requests to allowed sites is denied
[command] (Note: invalid Host header)
curl https://pypi.org/ -H "Host: www.yahoo.co.jp" -v --cacert squid.crt -k

3. communication of valid requests to prohibited sites is denied
[command]
curl https://www.yahoo.co.jp/ -v --cacert squid.crt -k

4. communication of invalid requests to prohibited sites is denied
[command]
curl https://www.yahoo.co.jp/ -H "Host:" -v --cacert squid.crt -k



Thank you for following my recommendations, documenting your tests, and 
sharing configurations!


Do you run these "curl" commands on the same box that runs Squid?



STEP1
>1. Remove all of your current http_access rules. Keep ACLs. Perform
>host_verify_strict and access tests to confirm that all valid requests
>are denied and all invalid requests are rejected. If necessary, ask
>questions, file bug reports, patch Squid, and/or adjust your
>configuration to pass this test.

・Results for validation patterns
1.403 Forbidden、X-Squid-Error: ERR_ACCESS_DENIED 0
2.403 Forbidden、X-Squid-Error: ERR_ACCESS_DENIED 0
3.403 Forbidden、X-Squid-Error: ERR_ACCESS_DENIED 0
4.403 Forbidden、X-Squid-Error: ERR_ACCESS_DENIED 0


I expected to get a different response for a valid request than for 
an invalid request, is this result correct?


AFAICT, the above results are correct.

Your expectations are reasonable, but you are thinking in terms of plain 
HTTP while your Squid is configured to bump intercepted HTTPS 
connections. In your "STEP 1" configuration, Squid does not see the HTTP 
request at all! Squid generates a fake CONNECT request using 
TCP/IP-level information from the intercepted TCP connection (and denies 
that generated request because there are no http_access rules to allow it).


When I was formulating my expectations, I was thinking in terms of HTTP 
requests as well. That is why I expected invalid requests to be rejected 
(by host_verify_strict) rather than denied (by http_access). 
Interception with SslBump makes everything more complex by adding 
another layer of fake CONNECT requests...


Let's consider this step 1 validation successfully completed.



STEP2
>2. Copy http_access rules, with comments, from generated
>squid.conf.default. Insert your own access rules in the location marked
>by "INSERT YOUR OWN RULE(S) HERE" comment. Perform host_verify_strict
>and access tests to confirm that all valid requests to banned sites are
>denied, all other valid requests are allowed, and all invalid requests
>are rejected. If necessary, ask questions, file bug reports, patch
>Squid, and/or adjust your configuration to pass this test.

・Results for validation patterns
1.200 OK
2.409 Conflict、X-Squid-Error: ERR_CONFLICT_HOST 0
3.409 Conflict、X-Squid-Error: ERR_CONFLICT_HOST 0
4.200 OK




・4. still returns 200 OK. Is this due to an error in the existing
configuration? Or do I need to add a new setting?


I believe Test 4 does not result in ERR_CONFLICT_HOST because Squid does 
not consider empty Host headers invalid from host header validation 
point of view: As we discussed earlier, "valueless or missing Host 
header disables all checks".


If you do consider requests with valueless or missing Host header 
invalid, then you need to add a custom "http_access deny" rule that 
would ban them. Look for "req_header" discussion in my earlier answer 
for (untested) hints about detecting requests with a valueless Host header.
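
For example, an untested sketch along the lines of those hints (please 
verify in your Squid version whether req_header behaves this way for a 
valueless or missing Host header):

  # matches requests that have a non-empty Host header value
  acl hasHostValue req_header Host .
  # deny requests with a missing or valueless Host header
  http_access deny !hasHostValue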


However, you may want to double check whether rejecting requests with an 
empty Host header is actually necessary in your environment. Perhaps 
they can be considered valid (which is what Squid does by default)?



My primary concern here is that test 4 request was not blocked by an 
"http_access deny" rule. I suspect that happens because the following 
allow rule matched:


acl https_port port 443
http_access allow https_port

I recommend deleting the above http_access rule. AFAICT, you only want 
to allow valid requests targeting specific/allowed sites. You already 
have other rules for that. The above "all HTTPS" rule is too broad and 
is seemingly unnecessary.


I also recommend deleting a similar rule that allows all port-80 
requests, for similar reasons:


acl http_port port 80
http_access allow http_port


If you think you do need those two broad rules, please clarify what you 
think you need them for. In other words, what tests would break if you 
remove them?



HTH,

Alex.





Fri, Aug 30, 2024, 22:27 Alex Rousskov :


On 2024-08-29 22:28, にば wrote:


With the newly reviewed configuration in the att

Re: [squid-users] squid5.5 restart failure due to domain list duplication

2024-09-10 Thread Alex Rousskov

On 2024-09-05 01:52, YAMAGUCHI NOZOMI (JIT ICC) wrote:

If there were duplicate domains in the list of domains used, restarting 
the squid would cause the process to stop.

Below is the error statement.
ERROR: 'a.example.com' is a subdomain of 'example.com'
FATAL: /etc/squid/squid.conf



Hi Nichole,

FWIW, this and several related problems were fixed in master/v7 
commit 0898d0f4: https://github.com/squid-cache/squid/commit/0898d0f4




I don't think the same thing happened with my previous squid3.5.


AFAICT, the relevant code is the same in v3.5 and v5.5, but I did not 
check carefully because both versions are not supported by the Squid 
Project. Squid v6.10 does suffer from the same problem as well.



・Is it possible to configure the process not to stop even if there are 
duplicates in the domain list?


Short answer: No. The problematic behavior is hard-coded.

Needless to say, one can (and should) configure Squid using ACLs without 
duplicates.



・Are there any other user actions besides duplicate domains that would 
trigger a process stop?


Yes, lots of configuration errors still result in Squid death. We are 
actively working on eliminating such cases during reconfiguration; if 
our changes are officially accepted, Squid v7 should be significantly 
better in this regard. Please see Matus's earlier response on this 
thread for ways to avoid such deaths.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Unable to access internal resources via hostname

2024-09-04 Thread Alex Rousskov

On 2024-09-04 12:25, Piana, Josh wrote:


That's REALLY strange that its getting blocked by "http_access deny
block_user". That's an ACL that is supposed to just deny "known good"
users that we don't want to have access to the user proxy. This ACL
references the "block_user" list which includes guest accounts,
vender accounts, and a few users. The account I'm testing with,
"jpiana" is not on that list. I wonder why I'm getting filtered here
and not the very bottom of the squid.conf where we have the
"http_access deny all" ACL.


I know a lot of things have changed since you wrote the above, but I 
wanted to answer that still-relevant question: The test transaction gets 
an HTTP 407 response not because its client name matches a block_user 
ACL entry. The test transaction gets an HTTP 407 response because the 
user has not authenticated itself -- Squid sends HTTP 407 response to 
request authentication.


If you are still having problems after changing the test proxy and its 
configuration (as you discussed in your recent posts), please restate 
the primary problem and share debugging log of a test transaction again.



Cheers,

Alex.




Maybe the order of my ACL's is wrong?

In regards to the "local_dst_dom", I've setup a a list here under this path:  
"/etc/squid/local_dst_dom", to immitate the other ACL rules. When I added the 
".hexcelssp.com" address to this list, it failed to work. Most likely because this isn't a domain.

FWIW, I just want ad.arc-tech.com devices to be allowed to reach all other 
ad.arc-tech.com devices via hostname. I'm willing to remove any other ACL 
that's getting in the way of that in order to make this work. This whole config 
that we have has been pieced together and I'd like to get it to the way it's 
supposed to be.

What do you recommend? I can send the whole config again, exactly as we have it 
now, and see what we can fix/remove/replace.

Appreciate you helping,
Josh

-Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Tuesday, September 3, 2024 4:43 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Unable to access internal resources via hostname

Caution: This email originated from outside of Hexcel. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.


Hi Josh,

  Thank you for sharing debugging logs. "CONNECT hexcelssp"
transaction is being denied by the following rule:

  http_access deny block_user

None of the ~10 http_access rules above the quoted one matched. Which of those rules do 
you expect to match that "CONNECT hexcelssp" request (_before_ the rule quoted 
above denies the request and asks the client to authenticate)? I suspect it is the 
local_dst_dom rule.

local_dst_dom ACL will not match this transaction. You have changed the definition of that ACL from 
a hard-coded (in squid.conf) "arcgate" name to an unknown to me set of names inside 
"etc/squid/local_dst_dom" file.
I wonder whether that file path is correct: Did you mean that path to be relative (i.e. "etc...") 
rather than absolute (i.e. "/etc...")? Try using an absolute path and double check that the file 
contains "hexcelssp" name.


HTH,

Alex.


2024/09/03 13:14:11.467 kid1| 85,2| client_side_request.cc(758)
clientAccessCheckDone: The request CONNECT hexcelssp:443 is AUTH_REQUIRED; last 
ACL checked: block_user




On 2024-08-29 10:34, Piana, Josh wrote:

Good morning Alex,

I've added the following to my squid.conf file # logformat custom
%err_code/%err_detail # access_log daemon:/var/log/squid/access,log
custom

I've also enabled debugging options:
# debug_options ALL,9

I've also cleaned up our ACL's to better reflect what is going on:
# acl src_self src 127.0.0.0/8
# acl src_self src 10.46.11.69
# acl dst_self dst 127.0.0.0/8
# acl dst_self dst 10.46.11.69

# acl from_arc src 10.46.0.0/15

# acl local_dst_addr dst 10.0.0.0/8
# acl local_dst_addr dst 172.0.0.0/8

When accessing internal devices via IP, it works.
I think this is because of this ACL:
# http_access   allow local_dst_addr
# http_reply_access allow local_dst_addr

But when I access those same internal devices via hostname, it fails.
I'm using this ACL, which I just created a separate file for:
acl local_dst_dom dstdomain "etc/squid/local_dst_dom"
http_access   allow local_dst_dom
http_reply_access allow local_dst_dom

I added an ACL file because I figure it will be more extensive than the more 
simple IP range ACL's.
I've added a few local websites to the "local_dst_dom" list, but I'm unable to 
get this to resolve without using IP.

For instance: "https://hexcelssp/"; is the name of one of our internal websites 
for our company.
T

Re: [squid-users] Unable to access internal resources via hostname

2024-09-03 Thread Alex Rousskov

Hi Josh,

Thank you for sharing debugging logs. "CONNECT hexcelssp" 
transaction is being denied by the following rule:


http_access deny block_user

None of the ~10 http_access rules above the quoted one matched. Which of 
those rules do you expect to match that "CONNECT hexcelssp" request 
(_before_ the rule quoted above denies the request and asks the client 
to authenticate)? I suspect it is the local_dst_dom rule.


local_dst_dom ACL will not match this transaction. You have changed the 
definition of that ACL from a hard-coded (in squid.conf) "arcgate" name 
to an unknown to me set of names inside "etc/squid/local_dst_dom" file. 
I wonder whether that file path is correct: Did you mean that path to be 
relative (i.e. "etc...") rather than absolute (i.e. "/etc...")? Try 
using an absolute path and double check that the file contains 
"hexcelssp" name.



HTH,

Alex.


2024/09/03 13:14:11.467 kid1| 85,2| client_side_request.cc(758) 
clientAccessCheckDone: The request CONNECT hexcelssp:443 is 
AUTH_REQUIRED; last ACL checked: block_user





On 2024-08-29 10:34, Piana, Josh wrote:

Good morning Alex,

I've added the following to my squid.conf file
# logformat custom %err_code/%err_detail
# access_log daemon:/var/log/squid/access,log custom

I've also enabled debugging options:
# debug_options ALL,9

I've also cleaned up our ACL's to better reflect what is going on:
# acl src_self src 127.0.0.0/8
# acl src_self src 10.46.11.69
# acl dst_self dst 127.0.0.0/8
# acl dst_self dst 10.46.11.69

# acl from_arc src 10.46.0.0/15

# acl local_dst_addr dst 10.0.0.0/8
# acl local_dst_addr dst 172.0.0.0/8

When accessing internal devices via IP, it works.
I think this is because of this ACL:
# http_access   allow local_dst_addr
# http_reply_access allow local_dst_addr

But when I access those same internal devices via hostname, it fails.
I'm using this ACL, which I just created a separate file for:
acl local_dst_dom dstdomain "etc/squid/local_dst_dom"
http_access   allow local_dst_dom
http_reply_access allow local_dst_dom

I added an ACL file because I figure it will be more extensive than the more 
simple IP range ACL's.
I've added a few local websites to the "local_dst_dom" list, but I'm unable to 
get this to resolve without using IP.

For instance: "https://hexcelssp/"; is the name of one of our internal websites 
for our company.
This doesn't load under our current config but "https://10.96.14.174/"; DOES 
load.

I've added "hexnt2067" as well, which you can see me testing in the below logs.
Again, this DOES work with its IP, 10.96.102.67.

I'm not getting a proxy error message when I try to browse to that URL though, 
we just can't reach it.

Here's the log results after enabling debugging and better log features:

29/Aug/2024:10:25:33 -0400.380 10.46.49.190 TCP_DENIED/407 4132 CONNECT 
hexcelssp:443 - HIER_NONE/- text/html
29/Aug/2024:10:25:33 -0400.515 10.46.49.190 NONE_NONE/500 0 CONNECT hexcelssp:443 
JPIANA@AD..COM HIER_NONE/- -
29/Aug/2024:10:25:39 -0400.799 10.46.49.190 TCP_TUNNEL/200 11716 CONNECT 
contile.services.mozilla.com:443 JPIANA@AD..COM 
HIER_DIRECT/34.117.188.166 -
29/Aug/2024:10:26:05 -0400.507 10.46.49.190 TCP_DENIED/407 4474 GET 
http://hexnt2067/sites/it/help/SitePages/Home.aspx - HIER_NONE/- text/html
29/Aug/2024:10:26:05 -0400.646 10.46.49.190 TCP_MISS/500 8320 GET 
http://hexnt2067/sites/it/help/SitePages/Home.aspx JPIANA@AD..COM 
HIER_NONE/- text/html
29/Aug/2024:10:26:05 -0400.735 10.46.49.190 TCP_DENIED/403 4440 GET 
http://arcgate2.ad/..com:8080/squid-internal-static/icons/SN.png - 
HIER_NONE/- text/html
29/Aug/2024:10:26:05 -0400.779 10.46.49.190 TCP_MISS_ABORTED/500 0 GET 
http://hexnt2067/favicon.ico JPIANA@AD..COM HIER_NONE/- text/html
29/Aug/2024:10:27:13 -0400.036 10.46.49.190 TCP_DENIED/407 4474 GET 
http://hexnt2067/sites/it/help/SitePages/Home.aspx - HIER_NONE/- text/html
29/Aug/2024:10:27:13 -0400.227 10.46.49.190 TCP_MISS/500 8334 GET 
http://hexnt2067/sites/it/help/SitePages/Home.aspx JPIANA@AD..COM 
HIER_NONE/- text/html
29/Aug/2024:10:27:13 -0400.302 10.46.49.190 TCP_DENIED/403 4440 GET 
http://arcgate2.ad/..com:8080/squid-internal-static/icons/SN.png - 
HIER_NONE/- text/html
29/Aug/2024:10:27:17 -0400.376 10.46.49.190 TCP_DENIED/407 4132 CONNECT 
hexcelssp:443 - HIER_NONE/- text/html
29/Aug/2024:10:27:17 -0400.514 10.46.49.190 NONE_NONE/500 0 CONNECT hexcelssp:443 
JPIANA@AD..COM HIER_NONE/- -

I'm not sure the debugging and extra log details were added correctly, because 
these look the same.

Thanks,
Josh


-Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Wednesday, August 28, 2024 4:01 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Unable to access internal resources via hostname

Caution: Thi

Re: [squid-users] negotiate_kerberos_auth not working anymore

2024-09-03 Thread Alex Rousskov

On 2024-08-30 08:35, Michael Egert wrote:

I have a little problem with this helper; it worked fine for a while and 
then suddenly stopped working.


It may help others if you detail "stopped working" based on a test case 
involving Squid. AFAICT, your email contains an attempt to manually feed 
the helper a syntactically invalid request but does not detail what does 
not work when Squid is involved. The cache.log provided shows an unused 
helper.




negotiate_kerberos_auth: DEBUG: Got 'admin@ASA.LOCAL' from squid
negotiate_kerberos_auth: ERROR: Invalid request [admin@ASA.LOCAL]


A helper request must start with "YR" or "KK" characters. This manual 
request does not.



> auth_parauth_param negotiate children 100 startup=0 idle=10

There is no "auth_parauth_param" directive. This is probably a 
copy-paste typo, but please check that the actual spelling is "auth_param".



Disclaimer: I do not know much about kerberos and negotiate_kerberos_auth.


HTH,

Alex.



I can call a kerberos ticket when using kinit

root@sv-asa-proxy:/var/log/squid# kinit -kt 
/etc/squid/sv-asa-proxy.keytab HTTP/sv-asa-proxy@ASA.LOCAL


root@sv-asa-proxy:/var/log/squid# klist

Ticket cache: FILE:/tmp/krb5cc_0

Default principal: HTTP/sv-asa-proxy@ASA.LOCAL

Valid starting Expires    Service principal

08/30/24 14:24:27  08/31/24 00:24:27  krbtgt/ASA.LOCAL@ASA.LOCAL

     renew until 08/31/24 14:24:27

root@sv-asa-proxy:/var/log/squid#

so – this works well

this is a part of my squid.conf:

auth_param negotiate program /usr/lib/squid/negotiate_kerberos_auth -k 
/etc/squid/sv-asa-proxy.keytab -s HTTP/sv-asa-proxy@ASA.LOCAL 
  -r -d


auth_parauth_param negotiate children 100 startup=0 idle=10

auth_param negotiate keep_alive on

acl kerb-auth proxy_auth REQUIRED

i also tried

auth_param negotiate program /usr/lib/squid/negotiate_kerberos_auth -k 
/etc/squid/sv-asa-proxy.keytab -s HTTP/sv-asa-proxy@ASA.LOCAL  -s 
GSS_C_NO_NAME -r -d


no success...

when i try

root@sv-asa-proxy:/var/log/squid# 
/usr/lib/squid/negotiate_kerberos_auth_test -k 
/etc/squid/sv-asa-proxy.keytab -s HTTP/sv-asa-proxy.asa.local@ASA.LOCAL 
-s GSS_C_NO_NAME -d -i


2024/08/30 14:28:35| negotiate_kerberos_auth_test: 
gss_init_sec_context() failed: Unspecified GSS failure.  Minor code may 
provide more information. Server not found in Kerberos database


Token: NULL

root@sv-asa-proxy:/var/log/squid#

and when i try this one:

root@sv-asa-proxy:/var/log/squid# /usr/lib/squid/negotiate_kerberos_auth 
-k /etc/squid/sv-asa-proxy.keytab -s 
HTTP/sv-asa-proxy.asa.local@ASA.LOCAL 
 -d -r


negotiate_kerberos_auth.cc(489): pid=5286 :2024/08/30 14:29:25| 
negotiate_kerberos_auth: INFO: Starting version 3.1.0sq


negotiate_kerberos_auth.cc(548): pid=5286 :2024/08/30 14:29:25| 
negotiate_kerberos_auth: INFO: Setting keytab to 
/etc/squid/sv-asa-proxy.keytab


negotiate_kerberos_auth.cc(571): pid=5286 :2024/08/30 14:29:25| 
negotiate_kerberos_auth: INFO: Changed keytab to 
MEMORY:negotiate_kerberos_auth_5286


admin@ASA.LOCAL 

negotiate_kerberos_auth.cc(612): pid=5286 :2024/08/30 14:30:06| 
negotiate_kerberos_auth: DEBUG: Got 'admin@ASA.LOCAL' from squid 
(length: 15).


negotiate_kerberos_auth.cc(661): pid=5286 :2024/08/30 14:30:06| 
negotiate_kerberos_auth: ERROR: Invalid request [admin@ASA.LOCAL]


BH Invalid request

And the log:

2024/08/30 14:31:25 kid1| Set Current Directory to /var/spool/squid

2024/08/30 14:31:25 kid1| Starting Squid Cache version 5.9 for 
x86_64-pc-linux-gnu...


2024/08/30 14:31:25 kid1| Service Name: squid

2024/08/30 14:31:25 kid1| Process ID 5309

2024/08/30 14:31:25 kid1| Process Roles: worker

2024/08/30 14:31:25 kid1| With 1024 file descriptors available

2024/08/30 14:31:25 kid1| Initializing IP Cache...

2024/08/30 14:31:25 kid1| DNS Socket created at [::], FD 9

2024/08/30 14:31:25 kid1| DNS Socket created at 0.0.0.0, FD 10

2024/08/30 14:31:25 kid1| Adding nameserver 192.168.40.1 from squid.conf

2024/08/30 14:31:25 kid1| Adding nameserver 192.168.40.2 from squid.conf

2024/08/30 14:31:25 kid1| helperOpenServers: Starting 0/100 
'negotiate_kerberos_auth' processes


2024/08/30 14:31:25 kid1| helperStatefulOpenServers: No 
'negotiate_kerberos_auth' processes needed.


2024/08/30 14:31:25 kid1| helperOpenServers: Starting 0/25 
'ext_kerberos_ldap_group_acl' processes


2024/08/30 14:31:25 kid1| helperOpenServers: No 
'ext_kerberos_ldap_group_acl' processes needed.


2024/08/30 14:31:25 kid1| Logfile: opening log 
daemon:/var/log/squid/access.log


2024/08/30 14:31:25 kid1| Logfile Daemon: opening log 
/var/log/squid/access.log


2024/08/30 14:31:26 kid1| Unlinkd pipe opened on FD 16

2024/08/30 14:31:26 kid1| Local cache digest enabled; rebuild/rewrite 
every 3600/3600 sec


2024/08/30 14:31:26 kid1| Logfile: opening log 
daemon:/var/log/squid/store.log


2024/08/30 14:31:26 kid1| Logfile Daemon: opening log 
/va

Re: [squid-users] Squid traffic paths

2024-09-02 Thread Alex Rousskov

On 2024-08-31 15:00, Scott Bates wrote:

The squid logs show traffic going to the expected destinations.


I assume that the above statement does _not_ talk about problematic 
traffic. In other words, Squid does handle some transactions, but not 
the problematic transactions you are asking about. I believe the above 
observation confirms part (a) of my working theory.



If I look at wireshark on one of the client systems I do see some http 
entries going to those destinations through the squid server.


OK, for the purpose of this email thread, ignore traffic going through 
the Squid server.



However 
most of the traffic (UDP / TCP) doesn't seem to be going through the 
squid server.


UDP: Squid does not proxy UDP-based protocols. If you want to proxy UDP, 
Squid is not the solution.


TCP: Squid can proxy HTTP/1 and FTP transactions (over TCP). Does that 
problematic TCP traffic in question contain HTTP or FTP transactions 
(i.e. originates from HTTP or FTP clients running on test VMs)? If not, 
then your existing "HTTP proxy configuration" on test VMs is probably 
not applicable -- the clients on those VMs probably ignore that HTTP 
proxy setting because they do not talk HTTP...




I'm not sure how to force all traffic to use squid on the client system.


I do not know enough about Windows to help you with this Squid-unrelated 
configuration, but please note that since Squid cannot proxy traffic 
other than HTTP and FTP, you probably do not want to force traffic other 
than HTTP and FTP through Squid. In other words, Squid is not a 
"universal" proxy that can proxy everything.



HTH,

Alex.



On 2024-08-28 09:14, Alex Rousskov wrote:

On 2024-08-28 08:52, Scott Bates wrote:

Alex: What protocol do those external services use in problematic use 
cases? Does Squid see the corresponding requests from VMs?

Squid can only proxy HTTP and FTP...



http and https only


Does Squid log the corresponding problematic transactions to its 
access.log?



The weird thing is I have an android test phone that also goes through 
squid and that device shows the correct IP on the online services.


My working theory is that (a) android test phone goes through Squid 
(i.e. uses Squid as an HTTP proxy) while (b) the problematic test 
traffic does not (i.e. goes directly to the external service).


The first guess can be confirmed using access.log (should be trivial in 
an isolated test environment). The second guess can be confirmed by 
packet capture analysis (may not be trivial in a virtualized environment 
and on Windows).



HTH,

Alex.



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Looking for a solution to identify "unauthenticated" squid proxy users.

2024-09-02 Thread Alex Rousskov

On 2024-09-02 15:00, Xavier Lecluse wrote:


I am facing a problem with my actual access.log configuration.
I use this logformat for the access.log :
"logformat timereadable %tl %un %Ss %>Hs %>a:%>p %st %rm %ru %mt %.
But I have some users who are not authenticated (because of incompatibility with their 
software) and then I don't have any information to differentiate which requests are made 
by each "user".

I tried to add <%et> <%ea> <%ul> <%ue>, without any success (the <%ul> just displays 
the same as <%un> in my case).

I am searching for a way to display a field which would help me to identify the 
requester.
For example, I use a rule file for each "user" in which several ACLs are 
defined. (squid/current/etc/current/rule/PXI_TESTPXI_P.conf)

Is there a way to use the "matching rule" file in the access log ?


Since many squid.conf directives are driven by ACLs, a typical 
transaction often matches dozens of rules, explicit and implicit ones. 
There is no %code that correctly guesses which matching rule should be 
logged.


However, you can define an always-matching annotate_transaction ACL and 
add it to any rule (or multiple rules). Specific or all transaction 
annotations can then be logged (or sent to helpers, etc.) using %note 
logformat code.


Untested example:

acl markAsSpecial annotate_transaction category=special
acl markAsBad annotate_transaction category=bad
...
http_access allow goodClients
http_access allow specialClients markAsSpecial
http_access deny to_localhost markAsBad
...
logformat timereadable %tl %note{category} %un %Ss ...


* annotate_transaction ACL type is documented at
http://www.squid-cache.org/Doc/config/acl/

* %note logformat code is documented at
http://www.squid-cache.org/Doc/config/logformat/



HTH,

Alex.



Actually, this is the log from an authenticated user :
Sep  2 17:08:32 FPVPXI2 squid[312387]: 02/Sep/2024:17:08:32 +0200 test 
TCP_TUNNEL 200 10.x.x.250:51994 6765 CONNECT www.google.com:443 - 5716 
FIRSTUP_PARENT 10.x.x.241 10.x.x.240 3128 326

And one from an unauthenticated user :
Sep  2 16:38:47 QFPVPXI2 squid[311234]: 02/Sep/2024:16:38:47 +0200 - TCP_TUNNEL 
200 10.x.x.242:22426 6726 CONNECT www.google.com:443 - 5718 FIRSTUP_PARENT 
10.x.x.241 10.x.x.240 3128 249



Regards,

Xavier
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Unable to access internal resources via hostname

2024-08-30 Thread Alex Rousskov

On 2024-08-30 10:19, Piana, Josh wrote:


If it's not one issue it's another. Now I can't seem to generate a
cache log file for you to assist with further troubleshooting.

I've restarted squid and used "squid -k reconfigure" a few times,
verified debugging is enabled and set to "All,9", and I don't have
any special rules in my config to point the cache log somewhere
else.

I'm not sure what I'm doing wrong with it. I did remove the file it
created yesterday, because I wanted to make sure we had a fresh one
to write to and send. I was thinking it may need explicit permissions
to write the logs here but it didn't need that before and I'm not
getting any errors related to it. The "access.log" file is still
working, however.


I cannot guess what exactly went wrong, but I can recommend the 
following recovery steps:


1. Stop/kill Squid. Make sure there are no Squid processes running 
(e.g., using `ps aux | grep squid` or a similar low-level check). If you 
cannot figure out how to get rid of old Squid processes, reboot the box.


2. Remove old cache.log and access.log files (if any) or move them into 
a different directory.


3. Start Squid.
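
A minimal sketch of those steps (paths and commands are assumptions 
about a typical from-source layout; adjust for your init/systemd setup):

  squid -k shutdown                     # step 1: stop Squid
  ps aux | grep '[s]quid'               #         confirm nothing is left running
  mv /var/log/squid/cache.log /var/log/squid/cache.log.old     # step 2
  mv /var/log/squid/access.log /var/log/squid/access.log.old
  squid -f /etc/squid/squid.conf        # step 3: start Squid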

If the problem persists, share the command you use to start Squid and 
any console output you get from that command.



In general, avoid using "squid -k reconfigure" when possible, especially 
when using Squid v5 and earlier.



HTH,

Alex.


-Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Thursday, August 29, 2024 2:24 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Unable to access internal resources via hostname

Caution: This email originated from outside of Hexcel. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.


On 2024-08-29 13:08, Piana, Josh wrote:


In regards to the debugging, do you still want me to do All,9 or is
All,6 acceptable? I am reading that 9 is... huge.


I want ALL,9. I can usually handle its size. If you cannot, then adjust as 
needed, but my willingness to help will decrease (because it often takes me a 
lot longer to grok partial logs correctly). FWIW, the ALL,9 snippet size for a 
single-transaction test is usually quite manageable (under 1GB).



I did eventually find that it was the other log when I browsed 
/var/log/squid/...
The log was dated for today, which is good. But I'm not sure how to generate it 
on purpose?


Squid writes to cache.log as it runs. You can remove the old log while Squid does not run 
and Squid will create a new file, or, better, look for "tail" hints on that FAQ 
page.

When you run your test after configuring (and restarting) Squid with "debug_options 
ALL,9", Squid will write debugging info to its cache.log.
There are more hints on that FAQ page.



Just so we're on the same understanding, I've only been using Linux
for 2 months, this project is my first experience with it. I do
apologize if I miss a lot of "common sense" stuff. This was tasked for
me to figure out and I've been having a pretty fun learning experience
so far.


I sympathize, but do not have the time to offer enough guidance for basic 
sysadmin tasks. Others on the mailing list may be able to help with specific 
questions or, better, see if you can find a local Linux person who can help you 
with this task. If they know what they are doing, it should take them less than 
30 minutes to produce the requested debugging cache.log snippet for the test 
transaction, even if they have never heard about Squid before. And you will 
learn a few new tricks...



I'll update those logs and wait for your response to this before
sending them or sending you a personal drop link.


A link usually works best.


Thank you,

Alex.



-Original Message-
From: squid-users  On
Behalf Of Alex Rousskov
Sent: Thursday, August 29, 2024 12:00 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Unable to access internal resources via
hostname



On 2024-08-29 10:34, Piana, Josh wrote:


I've added the following to my squid.conf file # logformat custom
%err_code/%err_detail # access_log daemon:/var/log/squid/access,log
custom


I assume you are using comment character (#) for email presentation purposes 
(i.e. it does not actually exist in your squid.conf).

I assume that comma in your access log file name above is a typo that does not 
actually exist in your squid.conf.



I'm not sure ... extra log details were added correctly

The above access_log configuration should reduce all access.log records to 
exactly two fields separated by a slash. Clearly, it does not! Thus, you are 
either not testing with the new configuration file (e.g., you have not fully 

Re: [squid-users] Questions about Squid configuration

2024-08-30 Thread Alex Rousskov

On 2024-08-29 22:28, にば wrote:


With the newly reviewed configuration in the attachment


OT: Please note that your configuration does not follow the recommended 
http_access rules order template in squid.conf.default and might, 
depending on your deployment environment, allow Squid to be used for 
attacks on 3rd party resources (e.g., ssh services). This observation is 
not related to your primary question and your "ban certain sites" goal. 
Following suggestions at the end of this email should fix this problem.




I found the following statement in the following official document
https://www.squid-cache.org/Doc/config/host_verify_strict/

> * The host names (domain or IP) must be identical,
> but valueless or missing Host header disables all checks.

So I ran an additional validation with an empty Host value, and the
request succeeded for a domain that was not in the whitelist.
The curl command for verification is below, and as before, only
.pypi.org is allowed in the whitelist.

date;curl https://www.yahoo.co.jp/ -H "Host:" -v --cacert squid.crt -k

Is it possible for Squid to prevent such requests as well?


Yes, a req_header ACL should be able to detect such requests (i.e. 
requests without a Host header or with an empty Host header value). 
However, I suspect that "missing Host" is _not_ the problem you should 
be solving (as detailed below).
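
For example, something along these lines (an untested sketch; the ACL name 
is arbitrary, and the deny rule must be placed appropriately among your 
other http_access rules) should deny requests whose Host header is missing 
or empty:

   acl hasHost req_header Host .
   http_access deny !hasHost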




I was able to confirm that any one of the SNI, IP, or Host in the
request is incorrect (not whitelist allowed)
and Squid will correctly check and return a 409 Conflict.


IMHO, you should target/validate a different set of goals/conditions:

* A valid request targeting a banned site should be denied (HTTP 403 
response, %Ss=TCP_DENIED, %err_code=ERR_ACCESS_DENIED). This denial 
should be triggered by an "http_access deny" rule, preferably an 
explicit one. This denial will _not_ happen (and the request will 
instead be forwarded to the banned site it targets) if you replace all 
your http_access rules with a single "http_access allow all" line. This 
denial does not depend on host_verify_strict and underlying code.


* An invalid request should be rejected (HTTP 4xx response). This 
includes, but is not limited to, host_verify_strict-driven rejections. 
This rejection should happen even if you replace all your http_access 
rules with a single "http_access allow all" line.


AFAICT, your current configuration does not reach the "deny valid requests 
targeting banned sites" goal, while your question implies that you are 
incorrectly relying on host_verify_strict to perform that denial.



I recommend the following:

1. Remove all of your current http_access rules. Keep ACLs. Perform 
host_verify_strict and access tests to confirm that all valid requests 
are denied and all invalid requests are rejected. If necessary, ask 
questions, file bug reports, patch Squid, and/or adjust your 
configuration to pass this test.


2. Copy http_access rules, with comments, from generated 
squid.conf.default. Insert your own access rules in the location marked 
by "INSERT YOUR OWN RULE(S) HERE" comment. Perform host_verify_strict 
and access tests to confirm that all valid requests to banned sites are 
denied, all other valid requests are allowed, and all invalid requests 
are rejected. If necessary, ask questions, file bug reports, patch 
Squid, and/or adjust your configuration to pass this test.
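
For reference, the http_access template in generated squid.conf.default 
looks roughly like this (exact rules vary somewhat between Squid versions):

   http_access deny !Safe_ports
   http_access deny CONNECT !SSL_ports
   http_access allow localhost manager
   http_access deny manager
   # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
   http_access allow localnet
   http_access allow localhost
   http_access deny all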



HTH,

Alex.



2024年8月8日(木) 21:33 Alex Rousskov :


On 2024-08-06 20:59, にば wrote:


When using Squid transparently, is it possible to control the
whitelist of the domain to connect to and inspect the Host field in
the request header together?


Short answer: Yes.



According to the verification results, the Host field can be inspected
by "host_verify_strict on" in squid-transparent.conf, but it seems
that the whitelist is not controlled.


AFAICT, the configuration you have shared allows all banned[1] traffic
to/through https_port. For the problematic test case #5:

All these http_access rules do _not_ match:


http_access allow localnet whitelist
http_access deny localnet whitelist_https !https_port
http_access deny localnet whitelist_transparent_https !https_port



And then this next rule matches and allows traffic through:


http_access allow https_port



This last http_access rule is not reached:


http_access deny all



N.B. The above analysis assumes that your https_port ACL is explicitly
defined in your squid.conf to match all traffic received at https_port.
If you do not have such an ACL defined, then you need to fix that
problem as well. I recommend naming ACLs differently from directive
names (e.g., "toHttpsPort" rather than "https_port").
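
For illustration only (the port number and name= label below are made up):

   # https_port 3130 intercept ssl-bump ... name=httpsIn
   acl toHttpsPort myportname httpsIn
   # or, matching by local port number instead:
   # acl toHttpsPort myport 3130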


Please note that Squid v4 is not supported by the Squid Project and is
very buggy. I recommend using Squid v6 or later.


HTH,

Alex.
[1] Here, "banned" means "_not_ matching whitelist ACL".


Re: [squid-users] Unable to access internal resources via hostname

2024-08-29 Thread Alex Rousskov

On 2024-08-29 13:08, Piana, Josh wrote:


In regards to the debugging, do you still want me to do All,9 or is
All,6 acceptable? I am reading that 9 is... huge.


I want ALL,9. I can usually handle its size. If you cannot, then adjust 
as needed, but my willingness to help will decrease (because it often 
takes me a lot longer to grok partial logs correctly). FWIW, the ALL,9 
snippet size for a single-transaction test is usually quite manageable 
(under 1GB).




I did eventually find that it was the other log when I browsed 
/var/log/squid/...
The log was dated for today, which is good. But I'm not sure how to generate it 
on purpose?


Squid writes to cache.log as it runs. You can remove the old log while 
Squid does not run and Squid will create a new file, or, better, look 
for "tail" hints on that FAQ page.


When you run your test after configuring (and restarting) Squid with 
"debug_options ALL,9", Squid will write debugging info to its cache.log. 
There are more hints on that FAQ page.




Just so we're on the same understanding, I've only been using Linux
for 2 months, this project is my first experience with it. I do
apologize if I miss a lot of "common sense" stuff. This was tasked
for me to figure out and I've been having a pretty fun learning
experience so far.


I sympathize, but do not have the time to offer enough guidance for 
basic sysadmin tasks. Others on the mailing list may be able to help 
with specific questions or, better, see if you can find a local Linux 
person who can help you with this task. If they know what they are 
doing, it should take them less than 30 minutes to produce the requested 
debugging cache.log snippet for the test transaction, even if they have 
never heard about Squid before. And you will learn a few new tricks...




I'll update those logs and wait for your response to this before
sending them or sending you a personal drop link.


A link usually works best.


Thank you,

Alex.



-Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Thursday, August 29, 2024 12:00 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Unable to access internal resources via hostname



On 2024-08-29 10:34, Piana, Josh wrote:


I've added the following to my squid.conf file # logformat custom
%err_code/%err_detail # access_log daemon:/var/log/squid/access,log
custom


I assume you are using comment character (#) for email presentation purposes 
(i.e. it does not actually exist in your squid.conf).

I assume that comma in your access log file name above is a typo that does not 
actually exist in your squid.conf.



I'm not sure ... extra log details were added correctly

The above access_log configuration should reduce all access.log records to 
exactly two fields separated by a slash. Clearly, it does not! Thus, you are 
either not testing with the new configuration file (e.g., you have not fully 
shut down and then started Squid) or there are other serious problems (e.g., 
malformed squid.conf that confuses Squid v5). We should solve this mystery 
before moving any further.



I've also enabled debugging options:
# debug_options ALL,9


Thank you. Please note that this debugging info goes into cache.log, not 
access.log. I trust you have examined your cache.log for errors and warnings 
_before_ enabling debugging -- finding them among debugging noise can be 
challenging! ALL,9 debugging is for Squid developers to study.

My recommendation for the next step remains the same. Look for "The best 
option" in my previous response.


HTH,

Alex.



I've also cleaned up our ACL's to better reflect what is going on:
# acl src_self src 127.0.0.0/8
# acl src_self src 10.46.11.69
# acl dst_self dst 127.0.0.0/8
# acl dst_self dst 10.46.11.69

# acl from_arc src 10.46.0.0/15

# acl local_dst_addr dst 10.0.0.0/8
# acl local_dst_addr dst 172.0.0.0/8

When accessing internal devices via IP, it works.
I think this is because of this ACL:
# http_access   allow local_dst_addr
# http_reply_access allow local_dst_addr

But when I access those same internal devices via hostname, it fails.
I'm using this ACL, which I just created a separate file for:
acl local_dst_dom dstdomain "etc/squid/local_dst_dom"
http_access   allow local_dst_dom
http_reply_access allow local_dst_dom

I added an ACL file because I figure it will be more extensive than the more 
simple IP range ACL's.
I've added a few local websites to the "local_dst_dom" list, but I'm unable to 
get this to resolve without using IP.

For instance: "https://hexcelssp/"; is the name of one of our internal websites 
for our company.
This doesn't load under our current config but "

Re: [squid-users] Unable to access internal resources via hostname

2024-08-29 Thread Alex Rousskov

On 2024-08-29 10:34, Piana, Josh wrote:


I've added the following to my squid.conf file
# logformat custom %err_code/%err_detail
# access_log daemon:/var/log/squid/access,log custom


I assume you are using comment character (#) for email presentation 
purposes (i.e. it does not actually exist in your squid.conf).


I assume that comma in your access log file name above is a typo that 
does not actually exist in your squid.conf.




I'm not sure ... extra log details were added correctly
The above access_log configuration should reduce all access.log records 
to exactly two fields separated by a slash. Clearly, it does not! Thus, 
you are either not testing with the new configuration file (e.g., you 
have not fully shut down and then started Squid) or there are other 
serious problems (e.g., malformed squid.conf that confuses Squid v5). We 
should solve this mystery before moving any further.




I've also enabled debugging options:
# debug_options ALL,9


Thank you. Please note that this debugging info goes into cache.log, not 
access.log. I trust you have examined your cache.log for errors and 
warnings _before_ enabling debugging -- finding them among debugging 
noise can be challenging! ALL,9 debugging is for Squid developers to study.


My recommendation for the next step remains the same. Look for "The best 
option" in my previous response.



HTH,

Alex.



I've also cleaned up our ACL's to better reflect what is going on:
# acl src_self src 127.0.0.0/8
# acl src_self src 10.46.11.69
# acl dst_self dst 127.0.0.0/8
# acl dst_self dst 10.46.11.69

# acl from_arc src 10.46.0.0/15

# acl local_dst_addr dst 10.0.0.0/8
# acl local_dst_addr dst 172.0.0.0/8

When accessing internal devices via IP, it works.
I think this is because of this ACL:
# http_access   allow local_dst_addr
# http_reply_access allow local_dst_addr

But when I access those same internal devices via hostname, it fails.
I'm using this ACL, which I just created a separate file for:
acl local_dst_dom dstdomain "etc/squid/local_dst_dom"
http_access   allow local_dst_dom
http_reply_access allow local_dst_dom

I added an ACL file because I figure it will be more extensive than the more 
simple IP range ACL's.
I've added a few local websites to the "local_dst_dom" list, but I'm unable to 
get this to resolve without using IP.

For instance: "https://hexcelssp/"; is the name of one of our internal websites 
for our company.
This doesn't load under our current config but "https://10.96.14.174/" DOES 
load.

I've added "hexnt2067" as well, which you can see me testing in the below logs.
Again, this DOES work with its IP, 10.96.102.67.

I'm not getting a proxy error message when I try to browse to that URL though, 
we just can't reach it.

Here's the log results after enabling debugging and better log features:

29/Aug/2024:10:25:33 -0400.380 10.46.49.190 TCP_DENIED/407 4132 CONNECT 
hexcelssp:443 - HIER_NONE/- text/html
29/Aug/2024:10:25:33 -0400.515 10.46.49.190 NONE_NONE/500 0 CONNECT hexcelssp:443 
JPIANA@AD..COM HIER_NONE/- -
29/Aug/2024:10:25:39 -0400.799 10.46.49.190 TCP_TUNNEL/200 11716 CONNECT 
contile.services.mozilla.com:443 JPIANA@AD..COM 
HIER_DIRECT/34.117.188.166 -
29/Aug/2024:10:26:05 -0400.507 10.46.49.190 TCP_DENIED/407 4474 GET 
http://hexnt2067/sites/it/help/SitePages/Home.aspx - HIER_NONE/- text/html
29/Aug/2024:10:26:05 -0400.646 10.46.49.190 TCP_MISS/500 8320 GET 
http://hexnt2067/sites/it/help/SitePages/Home.aspx JPIANA@AD..COM 
HIER_NONE/- text/html
29/Aug/2024:10:26:05 -0400.735 10.46.49.190 TCP_DENIED/403 4440 GET 
http://arcgate2.ad/..com:8080/squid-internal-static/icons/SN.png - 
HIER_NONE/- text/html
29/Aug/2024:10:26:05 -0400.779 10.46.49.190 TCP_MISS_ABORTED/500 0 GET 
http://hexnt2067/favicon.ico JPIANA@AD..COM HIER_NONE/- text/html
29/Aug/2024:10:27:13 -0400.036 10.46.49.190 TCP_DENIED/407 4474 GET 
http://hexnt2067/sites/it/help/SitePages/Home.aspx - HIER_NONE/- text/html
29/Aug/2024:10:27:13 -0400.227 10.46.49.190 TCP_MISS/500 8334 GET 
http://hexnt2067/sites/it/help/SitePages/Home.aspx JPIANA@AD..COM 
HIER_NONE/- text/html
29/Aug/2024:10:27:13 -0400.302 10.46.49.190 TCP_DENIED/403 4440 GET 
http://arcgate2.ad/..com:8080/squid-internal-static/icons/SN.png - 
HIER_NONE/- text/html
29/Aug/2024:10:27:17 -0400.376 10.46.49.190 TCP_DENIED/407 4132 CONNECT 
hexcelssp:443 - HIER_NONE/- text/html
29/Aug/2024:10:27:17 -0400.514 10.46.49.190 NONE_NONE/500 0 CONNECT hexcelssp:443 
JPIANA@AD..COM HIER_NONE/- -

I'm not sure the debugging and extra log details were added correctly, because 
these look the same.

Thanks,
Josh


-Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Wednesday, August 28, 2024 4:01 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Unable to access internal resources via hostname


Re: [squid-users] Unable to access internal resources via hostname

2024-08-28 Thread Alex Rousskov
within. I also need to look into how to use the debugging features, I'm not familiar with it at all.


I apologize for the wall of text, looking forward to what you guys have to say 
about this.

Thanks,
Josh

-Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Wednesday, August 28, 2024 2:31 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Unable to access internal resources via hostname



On 2024-08-28 14:18, Alex Rousskov wrote:

On 2024-08-28 11:24, Piana, Josh wrote:


Here's the log and (I think) relevant ACL's?


According to your access.log, Squid denies problematic CONNECT
requests with HTTP 407 error responses. Usually, that means those
requests match an "http_access deny" rule. Clearly, you expect an
"allow" outcome instead, but it is difficult (for me) to figure out
where your expectations mismatch reality; there are no rules that
explicitly mention hexcelssp domain, for example: Which "http_access
allow" rule do you expect those denied requests to match?


Sorry, I probably misinterpreted those access.log records: It looks like the 
denied (TCP_DENIED/407) access is something you actually expect because you 
want that test request to be authenticated. The client supplies the necessary 
credentials in the second request, and then that second request fails with a 
(rather generic) HTTP 500 error code, without contacting the origin server.

I am guessing that you are concerned about that second request/transaction 
rather than the first one.

Squid generates HTTP 500 errors for a variety of different reasons. Are there 
any messages in cache.log (at default debugging level) that correspond to these 
failing test transactions? If there are none, please add %err_code/%err_detail 
to your access_log logformat so that Squid logs more information about the 
problem to access.log (see logformat and access_log directives in 
squid.conf.documented for details).


Thank you,

Alex.




Also, does mgr:ipcache cache manager query confirm that Squid has read
your /etc/hosts file and cached the record you expect it to use?

Alex.



---
# /var/log/squid/access.log results for internal conflicts

28/Aug/2024:10:57:17 -0400.234 10.46.49.190 TCP_DENIED/407 4132
CONNECT hexcelssp:443 - HIER_NONE/- text/html
28/Aug/2024:10:57:17 -0400.253 10.46.49.190 NONE_NONE/500 0 CONNECT
hexcelssp:443 JPIANA@AD..COM HIER_NONE/- -
28/Aug/2024:10:57:17 -0400.380 10.46.49.190 TCP_DENIED/407 4132
CONNECT hexcelssp:443 - HIER_NONE/- text/html
28/Aug/2024:10:57:17 -0400.399 10.46.49.190 NONE_NONE/500 0 CONNECT
hexcelssp:443 JPIANA@AD..COM HIER_NONE/- -
---

# acl all src all

acl src_self src 127.0.0.0/8
acl src_self src 10.46.11.69

acl dst_self dst 127.0.0.0/8
acl dst_self dst 10.46.11.69

acl from_arc src 10.46.0.0/15

acl local_dst_addr dst 10.0.0.0/8
acl local_dst_addr dst 172.0.0.0/8
acl local_dst_addr dst bldg3..com
acl local_dst_addr dst bldg5..com

# these keep URLs of popular local servers from being forwarded
acl local_dst_dom dstdomain arcgate

# allow connects to local destinations without authentication
# by domain name from URL
http_access   allow local_dst_dom
http_reply_access allow local_dst_dom

# by IP address name resolves to
http_access   allow local_dst_addr
http_reply_access allow local_dst_addr

# allow trusted hosts without authentication
# these are just ip's on the 10.46.11.x network
acl authless_src src "/etc/squid/authless_src"
http_access   allow authless_src
http_reply_access allow authless_src
---

-Original Message-
From: squid-users  On
Behalf Of Matus UHLAR - fantomas
Sent: Wednesday, August 28, 2024 10:47 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Unable to access internal resources via
hostname



On 28.08.24 14:20, Piana, Josh wrote:

Hello Squid Support,


This squid user forum FYI


We are unable to get to internal resources via hostname but using
the IP address works fine.  Immediately, I thought this was DNS but
when I checked the /etc/resolv.conf/ file it was pointing correctly
to our Windows DNS server and we can ping all devices using their
hostname, just not when browsing to it.  This leads me to believe
something may be wrong with our squid config.


hard to guess without seeing logs or ACL's.


--
Matus UHLAR - fa

Re: [squid-users] Unable to access internal resources via hostname

2024-08-28 Thread Alex Rousskov

On 2024-08-28 14:18, Alex Rousskov wrote:

On 2024-08-28 11:24, Piana, Josh wrote:


Here's the log and (I think) relevant ACL's?


According to your access.log, Squid denies problematic CONNECT requests 
with HTTP 407 error responses. Usually, that means those requests match 
an "http_access deny" rule. Clearly, you expect an "allow" outcome 
instead, but it is difficult (for me) to figure out where your 
expectations mismatch reality; there are no rules that explicitly 
mention hexcelssp domain, for example: Which "http_access allow" rule do 
you expect those denied requests to match?


Sorry, I probably misinterpreted those access.log records: It looks like 
the denied (TCP_DENIED/407) access is something you actually expect 
because you want that test request to be authenticated. The client 
supplies the necessary credentials in the second request, and then that 
second request fails with a (rather generic) HTTP 500 error code, 
without contacting the origin server.


I am guessing that you are concerned about that second 
request/transaction rather than the first one.


Squid generates HTTP 500 errors for a variety of different reasons. Are 
there any messages in cache.log (at default debugging level) that 
correspond to these failing test transactions? If there are none, please 
add %err_code/%err_detail to your access_log logformat so that Squid 
logs more information about the problem to access.log (see logformat and 
access_log directives in squid.conf.documented for details).
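
For example, a sketch that keeps the usual fields and appends the two error 
codes (the field selection here is just an example; squid.conf.documented 
has the authoritative list of format codes):

   logformat withErr %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt %err_code/%err_detail
   access_log daemon:/var/log/squid/access.log withErr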



Thank you,

Alex.



Also, does mgr:ipcache cache manager query confirm that Squid has read 
your /etc/hosts file and cached the record you expect it to use?


Alex.



---
# /var/log/squid/access.log results for internal conflicts

28/Aug/2024:10:57:17 -0400.234 10.46.49.190 TCP_DENIED/407 4132 
CONNECT hexcelssp:443 - HIER_NONE/- text/html
28/Aug/2024:10:57:17 -0400.253 10.46.49.190 NONE_NONE/500 0 CONNECT 
hexcelssp:443 JPIANA@AD..COM HIER_NONE/- -
28/Aug/2024:10:57:17 -0400.380 10.46.49.190 TCP_DENIED/407 4132 
CONNECT hexcelssp:443 - HIER_NONE/- text/html
28/Aug/2024:10:57:17 -0400.399 10.46.49.190 NONE_NONE/500 0 CONNECT 
hexcelssp:443 JPIANA@AD..COM HIER_NONE/- -

---

# acl all src all

acl src_self src 127.0.0.0/8
acl src_self src 10.46.11.69

acl dst_self dst 127.0.0.0/8
acl dst_self dst 10.46.11.69

acl from_arc src 10.46.0.0/15

acl local_dst_addr dst 10.0.0.0/8
acl local_dst_addr dst 172.0.0.0/8
acl local_dst_addr dst bldg3..com
acl local_dst_addr dst bldg5..com

# these keep URLs of popular local servers from being forwarded
acl local_dst_dom dstdomain arcgate

# allow connects to local destinations without authentication
# by domain name from URL
http_access   allow local_dst_dom
http_reply_access allow local_dst_dom

# by IP address name resolves to
http_access   allow local_dst_addr
http_reply_access allow local_dst_addr

# allow trusted hosts without authentication
# these are just ip's on the 10.46.11.x network
acl authless_src src "/etc/squid/authless_src"
http_access   allow authless_src
http_reply_access allow authless_src
---

-Original Message-
From: squid-users  On 
Behalf Of Matus UHLAR - fantomas

Sent: Wednesday, August 28, 2024 10:47 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Unable to access internal resources via 
hostname





On 28.08.24 14:20, Piana, Josh wrote:

Hello Squid Support,


This squid user forum FYI


We are unable to get to internal resources via hostname but using the
IP address works fine.  Immediately, I thought this was DNS but when I
checked the /etc/resolv.conf/ file it was pointing correctly to our
Windows DNS server and we can ping all devices using their hostname,
just not when browsing to it.  This leads me to believe something may
be wrong with our squid config.


hard to guess without seeing logs or ACL's.


--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
It's now safe to throw off your computer.
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___

Re: [squid-users] Unable to access internal resources via hostname

2024-08-28 Thread Alex Rousskov

On 2024-08-28 11:24, Piana, Josh wrote:


Here's the log and (I think) relevant ACL's?


According to your access.log, Squid denies problematic CONNECT requests 
with HTTP 407 error responses. Usually, that means those requests match 
an "http_access deny" rule. Clearly, you expect an "allow" outcome 
instead, but it is difficult (for me) to figure out where your 
expectations mismatch reality; there are no rules that explicitly 
mention hexcelssp domain, for example: Which "http_access allow" rule do 
you expect those denied requests to match?


Also, does mgr:ipcache cache manager query confirm that Squid has read 
your /etc/hosts file and cached the record you expect it to use?
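
For example (assuming Squid listens on 127.0.0.1:3128 and the cache manager 
is reachable from localhost; the hostname is just an example):

   squidclient -h 127.0.0.1 -p 3128 mgr:ipcache | grep -i hexcelssp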


Alex.



---
# /var/log/squid/access.log results for internal conflicts

28/Aug/2024:10:57:17 -0400.234 10.46.49.190 TCP_DENIED/407 4132 CONNECT 
hexcelssp:443 - HIER_NONE/- text/html
28/Aug/2024:10:57:17 -0400.253 10.46.49.190 NONE_NONE/500 0 CONNECT hexcelssp:443 
JPIANA@AD..COM HIER_NONE/- -
28/Aug/2024:10:57:17 -0400.380 10.46.49.190 TCP_DENIED/407 4132 CONNECT 
hexcelssp:443 - HIER_NONE/- text/html
28/Aug/2024:10:57:17 -0400.399 10.46.49.190 NONE_NONE/500 0 CONNECT hexcelssp:443 
JPIANA@AD..COM HIER_NONE/- -
---

# acl all src all

acl src_self src 127.0.0.0/8
acl src_self src 10.46.11.69

acl dst_self dst 127.0.0.0/8
acl dst_self dst 10.46.11.69

acl from_arc src 10.46.0.0/15

acl local_dst_addr dst 10.0.0.0/8
acl local_dst_addr dst 172.0.0.0/8
acl local_dst_addr dst bldg3..com
acl local_dst_addr dst bldg5..com

# these keep URLs of popular local servers from being forwarded
acl local_dst_dom dstdomain arcgate

# allow connects to local destinations without authentication
# by domain name from URL
http_access   allow local_dst_dom
http_reply_access allow local_dst_dom

# by IP address name resolves to
http_access   allow local_dst_addr
http_reply_access allow local_dst_addr

# allow trusted hosts without authentication
# these are just ip's on the 10.46.11.x network
acl authless_src src "/etc/squid/authless_src"
http_access   allow authless_src
http_reply_access allow authless_src
---

-Original Message-
From: squid-users  On Behalf Of 
Matus UHLAR - fantomas
Sent: Wednesday, August 28, 2024 10:47 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Unable to access internal resources via hostname



On 28.08.24 14:20, Piana, Josh wrote:

Hello Squid Support,


This squid user forum FYI


We are unable to get to internal resources via hostname but using the
IP address works fine.  Immediately, I thought this was DNS but when I
checked the /etc/resolv.conf/ file it was pointing correctly to our
Windows DNS server and we can ping all devices using their hostname,
just not when browsing to it.  This leads me to believe something may
be wrong with our squid config.


hard to guess without seeing logs or ACL's.


--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
It's now safe to throw off your computer.
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid traffic paths

2024-08-28 Thread Alex Rousskov

On 2024-08-28 08:52, Scott Bates wrote:

Alex: What protocol do those external services use in problematic 
use cases?>> Does Squid see the corresponding requests from VMs?

Squid can only proxy HTTP and FTP...



http and https only


Does Squid log the corresponding problematic transactions to its access.log?


The weird thing is I have an android test phone that also goes through 
squid and that device shows the correct IP on the online services.


My working theory is that (a) android test phone goes through Squid 
(i.e. uses Squid as an HTTP proxy) while (b) the problematic test 
traffic does not (i.e. goes directly to the external service).


The first guess can be confirmed using access.log (should be trivial in 
an isolated test environment). The second guess can be confirmed by 
packet capture analysis (may not be trivial in a virtualized environment 
and on Windows).
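
For example (the IP address and interface name below are placeholders), one 
can check whether the problematic VM shows up in access.log at all, and 
capture its outbound TLS traffic on the hypervisor:

   grep ' 192.0.2.10 ' /var/log/squid/access.log
   tcpdump -ni eth0 host 192.0.2.10 and port 443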



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid traffic paths

2024-08-27 Thread Alex Rousskov

On 2024-08-27 14:07, Scott Bates wrote:

My lab is setup as such:
Hypervisor host
Squid VM
Test VM 1 (windows)
Test VM 2 (windows)
Test VM 3 (windows)

I have my proxies setup in the squid config. On the test vms I have the 
windows proxy settings pointing to the squid IP and port. If I check the 
public IP on that vm it shows up as the proxy IP. And in the proxy logs 
I see traffic going out.


The issue I'm having is that some external services are seeing the hosts 
public IP for the test vms and not the proxy ip.


What protocol do those external services use in problematic use cases? 
Does Squid see the corresponding requests from VMs? Squid can only proxy 
HTTP and FTP...



> I'm not exactly sure how squid handles all dns traffic.

Squid generates DNS queries (if needed) and, naturally, receives DNS 
answers for the queries it generates. Squid does not receive and, hence, 
does not forward/proxy DNS queries. There is no dns_port option in 
squid.conf; only http(s)_port and ftp_port.


> never_direct allow port3127_acl
> never_direct allow all

Pick one. The first (more restrictive) rule is not needed if you are 
going to allow all.



HTH,

Alex.



Squid config:
# First proxy
http_port 3127
acl port3127_acl myport 3127
cache_peer PROXYIP parent 9229 0 proxy-only no-query no-digest login=USERNAME:PASSWORD

cache_peer_access PROXYIP allow port3127_acl
cache_peer_access PROXYIP deny all
never_direct allow port3127_acl
never_direct allow all
http_access allow port3127_acl
# Deny caching on all proxies (optional)
cache deny all
# Default access control
http_access deny all
dns_nameservers 127.0.0.1
forwarded_for off
request_header_access X-Forwarded-For deny all

I'm not exactly sure how squid handles all dns traffic. I feel like this 
might be a dns issue. I tried using google dns and the squid server ip 
as dns on the test vms but same issue.
I started to mess around with dnsmasq installed on squid but I'm not 
sure if I'm going down the right path or not.


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Vulnerabilities

2024-08-26 Thread Alex Rousskov

On 2024-08-26 02:23, Alexandru Mateescu wrote:

In October 2023 the free vulnerabilities scanner of Greenbone (Openvas) 
has started reporting high vulnerabilities on squid for all versions.


When I questioned them about it they indicated 
https://megamansec.github.io/Squid-Security-Audit/ as their source of 
truth and to date they have not reduced the score of the vulnerability 
causing extensive issues for me and my security team.


I further asked them about it and they are looking for a published list 
of security advisories about these vulnerabilities.


FWIW, the official list of recent Squid advisories is at
https://github.com/squid-cache/squid/security/advisories/

Some year-2020 and earlier advisories are available at
http://www.squid-cache.org/Advisories/

Needless to say, converting the above information into a list dedicated 
to "Joshua 55" report (and to Squid v6.10) requires a lot of work.



Would it be possible to issue such a list for whichever ones are fixed 
to date in squid 6.10


Yes, it is possible. FWIW, I built a similar _unofficial_ list at
https://gist.github.com/rousskov/9af0d33d2a1f4b5b3b948b2da426e77d

Please note that any meaningful list would heavily depend on Squid build 
options and runtime configuration in this case, as detailed in a recent 
squid-users email: 
https://lists.squid-cache.org/pipermail/squid-users/2024-August/027043.html



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Unable to access local addresses

2024-08-23 Thread Alex Rousskov

On 2024-08-23 12:07, Piana, Josh wrote:

The problem we’re having now is that we’re unable to access local 
resources on different subnets. For instance, our “main” networks are 
10.46.x.x and 10.47.x.x, but the proxy is blocking us when we try to get 
to 172.26.x.x as well as 10.96.x.x. 


Blocking how? What kind of error page does the client get? What does 
Squid log to access.log (consider adding %err_code/%err_detail to your 
custom logformat definition if you have not already).


If access is blocked by an "http_access deny" rule you do not expect to 
match, then which "http_access allow" rule (located above the matched 
deny rule[^1]) do you expect to match those blocked transactions? If 
access is prevented due to connection establishment errors, then you may 
need to adjust your IP routing rules/etc. outside of Squid. And there 
are many other possibilities here...


[1]: If access is blocked by an "http_access deny" rule, you may be able 
to figure out which deny rule has matched by enabling ALL,2 debugging 
(see debug_options in squid.conf.documented), reproducing the problem 
using a single transaction, and searching cache.log for "last ACL 
checked". Older Squids have more bugs in determining that ACL name, but, 
if you are lucky, you will get the right answer.
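
A rough sketch of that procedure (assuming the default cache.log location):

   # in squid.conf:
   debug_options ALL,2
   # restart Squid, reproduce the problem with a single request, then:
   grep -n 'last ACL checked' /var/log/squid/cache.log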



When comparing our current config to 
the old, they are very nearly identical, and the old config works with 
no issue. Is there some change from 2.5 -> 5.5 that would stop some of 
our allow/deny rules from working as expected?


There are too many corner cases for me to give a reliable answer to that 
question, especially without knowing which http_access rule is denying 
access (assuming it is one of those rules). I recommend focusing on a 
specific matching/mismatching ACL/rule instead of trying to make a 
general statement regarding numerous v2-vs-v5 differences.



Or maybe I need to open 
up the ACL’s a bit more or define those subnets explicitly now?


If your existing rules already reflect your business logic and do not 
trigger any Squid configuration errors/warnings in cache.log, then it is 
unlikely that you will need to add explicit rules for previously working 
subnets. On the other hand, you are upgrading from an ancient Squid 
version that most do not remember much about to an already unsupported 
Squid version, so all bets are off.


> acl from_arc src 10.46.0.0/16
> acl from_arc src 10.46.0.0/15

Remove the line you do not need (I do not know which one, but the 
combination does not make sense because one subnet includes the other).
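
For example, assuming you want to cover both 10.46.x.x and 10.47.x.x (as 
described earlier in this thread), keeping only the wider line would do:

   acl from_arc src 10.46.0.0/15   # covers 10.46.0.0 through 10.47.255.255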



> dsacl methods_std method GET HEAD POST PUT DELETE

If this is not a copy-paste typo, then edit this line to use a supported 
directive name like "acl". "dsacl" is not a supported directive name.
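
Presumably the intended line was something like:

   acl methods_std method GET HEAD POST PUT DELETE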



HTH,

Alex.


I can post our config below, I’ll skip the sections that most likely 
don’t pertain to the issue.


##

# Authentication

##

auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth -k 
/etc/squid/HTTP.keytab -s HTTP/.ad..com@AD..COM


auth_param negotiate children 10

auth_param negotiate keep_alive on

acl kerb-auth proxy_auth REQUIRED

##

# Access control - shared/common ACL definitions

##

# acl all src all

acl src_self src 127.0.0.0/8

acl src_self src 10.46.11.69

acl dst_self dst 127.0.0.0/8

acl dst_self dst 10.46.11.69

acl from_arc src 10.46.0.0/16

acl from_arc src 10.46.0.0/15

acl local_dst_addr dst 10.0.0.0/8

acl local_dst_addr dst bldg3..com

acl local_dst_addr dst bldg5..com

# not sure what this does

acl local_dst_dom dstdomain 

# protocols

acl proto_FTP proto FTP

acl proto_HTTP proto HTTP

# TCP ports for HTTP

acl http_ports port 80

acl http_ports port 81

# TCP ports for HTTPS

acl ssl_ports port 443

acl ssh_ports port 22

acl ftp_ports port 21

# HTTP methods

acl method_CONNECT method CONNECT

# what are these used for?

dsacl methods_std method GET HEAD POST PUT DELETE

acl methods_std method TRACE OPTIONS

#

# Access control - general proxy

##

http_access deny dst_self

http_access deny src_self

http_access deny !from_arc

http_access   allow local_dst_dom

http_reply_access   allow local_dst_dom

http_access   allow local_dst_addr

http_reply_access   allow local_dst_addr

acl authless_src src "/etc/squid/authless_src"

http_access   allow authless_src

http_reply_access   allow authless_src

acl authless_dst dstdomain "/etc/squid/authless_dst"

http_access   allow authless_dst

http_reply_access   allow authless_dst

acl bad_do

Re: [squid-users] Squid 6.10 on Fedora 40 cannot intercept and bump SSL Traffic

2024-08-23 Thread Alex Rousskov

On 2024-08-23 06:29, ngtech1...@gmail.com wrote:

OK so the issue was that:

The http_port was used for ssl bump with intercept


I would not phrase it that way because "bump" is a red herring here. I 
would instead say that the issue was that "http_port was used for 
intercepted TLS traffic" or "intercepted TLS traffic was directed to 
http_port".



while the only port which can really intercept ssl connections is: 
https_port


Correct (for some definition of "ssl connections").



so I believe that ...
When there is http_port and intercept and ssl_bump there should be a 
warning.


When configuration X does not work for use case Y, there are several 
scenarios to consider when deciding whether Squid should warn about 
configuration X, including these three:


* When configuration X does not work at all, Squid should reject that 
configuration as invalid. It is not a warning; it is an error. This is 
not the case we are discussing (AFAICT) because "http_port intercept 
ssl-bump" does work in some cases.


* When configuration X does not work for use case Y, Squid should reject 
that configuration as invalid _if_ Squid can detect that it is being 
used for use case Y. This is not the case we are discussing (AFAICT) 
because Squid cannot detect (at configuration time) what traffic you 
intend to intercept and redirect to a given Squid port: It could be TLS. 
It could be plain HTTP. It could be a mix. Squid cannot tell.


* When an unusual configuration X does not work for common use cases, 
Squid may warn about it while giving the admin an ability to turn the 
warning off (to accommodate admins that utilize that configuration for 
some uncommon but valid use cases). One can argue that this is the case 
we are discussing: "http_port intercept ssl-bump" configuration in 
question is unusual, does not work for common TLS interception cases, 
but can be used (AFAICT) to bump traffic between a client and an HTTP 
proxy. Quality pull requests (that take this email considerations into 
account) are welcome.



HTH,

Alex.



*From:* NgTech LTD 
*Sent:* Monday, August 19, 2024 10:48 AM
*To:* Squid Users 
*Subject:* Squid 6.10 on Fedora 40 cannot intercept and bump SSL Traffic

I am testing Squid 6.10 on Fedora 40 (their package).
And it seems that Squid is unable to bump clients (ESNI/ECH)?

I had couple iterations of pek stare and bump and I am not sure what is 
the reason for that:

shutdown_lifetime 3 seconds
external_acl_type whitelist-lookup-helper ipv4 ttl=10 children-max=10 
children-startup=2 \
         children-idle=2 concurrency=10 %URI %SRC 
/usr/local/bin/squid-conf-url-lookup.rb

acl whitelist-lookup external  whitelist-lookup-helper
acl ytmethods method POST GET
acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8              # RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10           # RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16          # RFC 3927 link-local (directly plugged) machines
acl localnet src 172.16.0.0/12           # RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16          # RFC 1918 local private network (LAN)
acl localnet src fc00::/7                # RFC 4193 local private network range
acl localnet src fe80::/10               # RFC 4291 link-local (directly plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access deny to_localhost
http_access deny to_linklocal
acl tubedoms dstdomain .ytimg.com .youtube.com .youtu.be

http_access allow ytmethods localnet tubedoms whitelist-lookup
http_access allow localnet
http_access deny all
http_port 3128
http_port 13128 ssl-bump tls-cert=/etc/squid/ssl/cert.pem 
tls-key=/etc/squid/ssl/key.pem \

         generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
http_port 23128 tproxy ssl-bump tls-cert=/etc/squid/ssl/cert.pem 
tls-key=/etc/squid/ssl/key.pem \

         generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
http_port 33128 intercept ssl-bump tls-cert=/etc/squid/ssl/cert.pem 
tls-key=/etc/squid/ssl/key.pem \

         generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
sslcrtd_program /usr/lib64/squid/security_file_certgen -s 
/var/spool/squid/ssl_db -M 4MB

ss

Re: [squid-users] Squid 6.10 on Fedora 40 cannot intercept and bump SSL Traffic

2024-08-22 Thread Alex Rousskov

On 2024-08-22 15:40, ngtech1...@gmail.com wrote:


I believe there should be something that will indicate it to some
degree 


Sorry, I do not know what you mean by "it" here.



since ssl_bump and intercept on the same port should be "impossible"


It is possible. An http_port can apply SslBump to intercepted traffic. 
Same for https_port. However, each port is meant to handle its own kind 
of traffic: http_port is for plain HTTP (including CONNECT tunnels) 
while https_port is for TLS (usually with HTTP inside that TLS).


All of these six configuration sketches are valid for some use cases 
(and different things are expected to happen at each of these ports):


* http_port
* http_port intercept
* http_port ssl-bump
* http_port intercept ssl-bump

* https_port
* https_port intercept ssl-bump



or that the http_port should be enabled to do ssl
interception.


http_port does not need to be enabled to do TLS interception.
http_port is pretty much unrelated to TLS interception.
http_port may be configured to do plain HTTP interception.
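
A rough sketch contrasting the two interception cases (port numbers and 
certificate paths below are placeholders):

   # intercepted plain HTTP (e.g., traffic NATed from TCP port 80):
   http_port 3129 intercept

   # intercepted TLS to be bumped (e.g., traffic NATed from TCP port 443):
   https_port 3130 intercept ssl-bump \
       tls-cert=/etc/squid/ssl/cert.pem tls-key=/etc/squid/ssl/key.pem \
       generate-host-certificates=on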



Am I wrong about my assumption?


It feels like it, but it could be that your assumptions just need to be 
stated more accurately to become valid.



HTH,

Alex.




-Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Thursday, August 22, 2024 9:21 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid 6.10 on Fedora 40 cannot intercept and bump 
SSL Traffic

On 2024-08-20 15:57, Alex Rousskov wrote:

On 2024-08-19 19:32, NgTech LTD wrote:

  > curl -v -k https://www.youtube.com/


1724109013.783  0 192.168.78.252 NONE_NONE/000 0 -
error:invalid-request - HIER_NONE/- - -


OK, let's focus on this curl interception test. It feels like your
Squid is receiving some bytes it cannot fully grok. We should figure
out what those bytes are. Please share (privately if needed) a pointer
to compressed ALL,9 cache.log snippet collected while reproducing this
single test.


Thank you, Eliezer, for sharing the requested logs!

There is a simple explanation for the above error:invalid-request access.log 
record: The test intercepts TLS traffic using plain text http_port. It should 
intercept TLS traffic using https_port instead.



acl foreignProtocol squid_error ERR_PROTOCOL_UNKNOWN ...
on_unsupported_protocol tunnel foreignProtocol


When a plain text port is used, Squid correctly declares ERR_PROTOCOL_UNKNOWN, 
triggers on_unsupported_protocol handling, and then tunnels intercepted HTTPS 
bytes, as configured. Judging by the lack of curl errors, that blind TCP tunnel 
works correctly as well.

SslBump is not applied to tunneled "unknown protocol" bytes, of course.


HTH,

Alex.




N.B. The other tests confirm that your Squid can "bump clients",
invalidating one of the assertions at the beginning of this email
thread. The problem appears to be specific to the interception part of
the test rather than basic SslBump operation.


Cheers,

Alex.



1724109015.407   1623 192.168.78.252 TCP_TUNNEL/200 534674 -
142.250.185.110:443 <http://142.250.185.110:443> -
ORIGINAL_DST/142.250.185.110 <http://142.250.185.110> - -

PS C:\Users\eliez> curl -v -k https://www.youtube.com/
<https://www.youtube.com/>  -o 1.txt
% Total% Received % Xferd  Average Speed   TimeTime
Time   Current
   Dload  Upload   Total   Spent
  Left   Speed
0 00 00 0  0  0 --:--:-- --:--:--
--:--:-- 0* Host www.youtube.com:443 <http://www.youtube.com:443>
was resolved.
* IPv6: 2a00:1450:4001:80b::200e, 2a00:1450:4001:828::200e,
2a00:1450:4001:803::200e, 2a00:1450:4001:830::200e
* IPv4: 142.250.185.110, 216.58.206.78, 142.250.186.142,
216.58.206.46, 172.217.18.110, 142.250.185.142, 142.250.186.174,
142.250.184.206, 142.250.185.174, 142.250.184.238, 172.217.18.14,
142.250.186.110, 172.217.16.206, 172.217.23.110, 216.58.212.142,
142.250.185.78
*   Trying [2a00:1450:4001:80b::200e]:443...
*   Trying 142.250.185.110:443...
* Connected to www.youtube.com <http://www.youtube.com>
(142.250.185.110) port 443
* schannel: disabled automatic use of client certificate
* ALPN: curl offers http/1.1
* ALPN: server accepted http/1.1
* using HTTP/1.x
  > GET / HTTP/1.1
  > Host: www.youtube.com <http://www.youtube.com>
  > User-Agent: curl/8.8.0
  > Accept: */*
  >
* Request completely sent off
* schannel: remote party requests renegotiation
* schannel: renegotiating SSL/TLS connection
* schannel: SSL/TLS connection renegotiated < HTTP/1.1 200 OK <
Content-Type: text/html; charset=utf-8 < X-Content-Type-Options:
nosniff < Cache-Control: no-cache, no-store, max-age=0,
must-revalidate < Pragma: no-cache < Expires: Mon, 01 Jan 1990
00:00:00 GMT < Date: Mon, 19 Aug 2024 23:10:13 GMT <
Strict-Transport-Security: max-age=31536000 < X-Frame-Options:
SAMEORIGIN < Permissions-Policy: ch-ua

Re: [squid-users] Squid 6.10 on Fedora 40 cannot intercept and bump SSL Traffic

2024-08-22 Thread Alex Rousskov

On 2024-08-20 15:57, Alex Rousskov wrote:

On 2024-08-19 19:32, NgTech LTD wrote:

 > curl -v -k https://www.youtube.com/

1724109013.783      0 192.168.78.252 NONE_NONE/000 0 - 
error:invalid-request - HIER_NONE/- - -


OK, let's focus on this curl interception test. It feels like your Squid 
is receiving some bytes it cannot fully grok. We should figure out what 
those bytes are. Please share (privately if needed) a pointer to 
compressed ALL,9 cache.log snippet collected while reproducing this 
single test.


Thank you, Eliezer, for sharing the requested logs!

There is a simple explanation for the above error:invalid-request 
access.log record: The test intercepts TLS traffic using plain text 
http_port. It should intercept TLS traffic using https_port instead.




acl foreignProtocol squid_error ERR_PROTOCOL_UNKNOWN ...
on_unsupported_protocol tunnel foreignProtocol


When a plain text port is used, Squid correctly declares 
ERR_PROTOCOL_UNKNOWN, triggers on_unsupported_protocol handling, and 
then tunnels intercepted HTTPS bytes, as configured. Judging by the lack 
of curl errors, that blind TCP tunnel works correctly as well.


SslBump is not applied to tunneled "unknown protocol" bytes, of course.


HTH,

Alex.



N.B. The other tests confirm that your Squid can "bump clients", 
invalidating one of the assertions at the beginning of this email 
thread. The problem appears to be specific to the interception part of 
the test rather than basic SslBump operation.



Cheers,

Alex.


1724109015.407   1623 192.168.78.252 TCP_TUNNEL/200 534674 - 
142.250.185.110:443 <http://142.250.185.110:443> - 
ORIGINAL_DST/142.250.185.110 <http://142.250.185.110> - -


PS C:\Users\eliez> curl -v -k https://www.youtube.com/ 
<https://www.youtube.com/>  -o 1.txt
   % Total    % Received % Xferd  Average Speed   Time    Time 
Time   Current
                                  Dload  Upload   Total   Spent   
 Left   Speed
   0     0    0     0    0     0      0      0 --:--:-- --:--:-- 
--:--:--     0* Host www.youtube.com:443 <http://www.youtube.com:443> 
was resolved.
* IPv6: 2a00:1450:4001:80b::200e, 2a00:1450:4001:828::200e, 
2a00:1450:4001:803::200e, 2a00:1450:4001:830::200e
* IPv4: 142.250.185.110, 216.58.206.78, 142.250.186.142, 
216.58.206.46, 172.217.18.110, 142.250.185.142, 142.250.186.174, 
142.250.184.206, 142.250.185.174, 142.250.184.238, 172.217.18.14, 
142.250.186.110, 172.217.16.206, 172.217.23.110, 216.58.212.142, 
142.250.185.78

*   Trying [2a00:1450:4001:80b::200e]:443...
*   Trying 142.250.185.110:443...
* Connected to www.youtube.com <http://www.youtube.com> 
(142.250.185.110) port 443

* schannel: disabled automatic use of client certificate
* ALPN: curl offers http/1.1
* ALPN: server accepted http/1.1
* using HTTP/1.x
 > GET / HTTP/1.1
 > Host: www.youtube.com <http://www.youtube.com>
 > User-Agent: curl/8.8.0
 > Accept: */*
 >
* Request completely sent off
* schannel: remote party requests renegotiation
* schannel: renegotiating SSL/TLS connection
* schannel: SSL/TLS connection renegotiated
< HTTP/1.1 200 OK
< Content-Type: text/html; charset=utf-8
< X-Content-Type-Options: nosniff
< Cache-Control: no-cache, no-store, max-age=0, must-revalidate
< Pragma: no-cache
< Expires: Mon, 01 Jan 1990 00:00:00 GMT
< Date: Mon, 19 Aug 2024 23:10:13 GMT
< Strict-Transport-Security: max-age=31536000
< X-Frame-Options: SAMEORIGIN
< Permissions-Policy: ch-ua-arch=*, ch-ua-bitness=*, 
ch-ua-full-version=*, ch-ua-full-version-list=*, ch-ua-model=*, 
ch-ua-wow64=*, ch-ua-form-factors=*, ch-ua-platform=*, 
ch-ua-platform-version=*

< Content-Security-Policy: require-trusted-types-for 'script'
< Origin-Trial: 
AmhMBR6zCLzDDxpW+HfpP67BqwIknWnyMOXOQGfzYswFmJe+fgaI6XZgAzcxOrzNtP7hEDsOo1jdjFnVr2IdxQ4AAAB4eyJvcmlnaW4iOiJodHRwczovL3lvdXR1YmUuY29tOjQ0MyIsImZlYXR1cmUiOiJXZWJWaWV3WFJlcXVlc3RlZFdpdGhEZXByZWNhdGlvbiIsImV4cGlyeSI6MTc1ODA2NzE5OSwiaXNTdWJkb21haW4iOnRydWV9
< Report-To: 
{"group":"youtube_main","max_age":2592000,"endpoints":[{"url":"https://csp.withgoogle.com/csp/report-to/youtube_main <https://csp.withgoogle.com/csp/report-to/youtube_main>"}]}
< Cross-Origin-Opener-Policy: same-origin-allow-popups; 
report-to="youtube_main"
< P3P: CP="This is not a P3P policy! See 
http://support.google.com/accounts/answer/151657?hl=en 
<http://support.google.com/accounts/answer/151657?hl=en> for more info."

< Server: ESF
< X-XSS-Protection: 0
< Set-Cookie: GPS=1; Domain=.youtube.com <http://youtube.com>; 
Expires=Mon, 19-Aug-2024 23:40:13 GMT; Path=/; Secure; HttpOnly
< Set-Cookie: YSC=FASh7dosPqA; Domain=.youtube.com 
<http://youtube.com>; Path=/; Secure; HttpOnly; SameSite=none
< Set-Cookie: VISITOR_INFO1_LIVE=B6cXjIhTk2Q; Domain=.youtube.com 
<http://youtube.com&g

Re: [squid-users] squid 6.10 - Debian 12 undefined reference to `EVP_MD_type' in ssl-crtd

2024-08-21 Thread Alex Rousskov

On 2024-08-21 09:37, David Touzeau wrote:

Configure:
./configure --prefix=/usr --build=x86_64-linux-gnu --includedir=/include 
--mandir=/share/man --infodir=/share/info --localstatedir=/var 
--libexecdir=/lib/squid3 --disable-maintainer-mode 
--disable-dependency-tracking --datadir=/usr/share/squid3 
--sysconfdir=/etc/squid3 --enable-gnuregex --enable-removal-policy=heap 
--enable-follow-x-forwarded-for --disable-cache-digests 
--enable-http-violations --enable-removal-policies=lru,heap 
--enable-arp-acl --enable-truncate --with-large-files --with-pthreads 
--enable-esi --enable-storeio=aufs,diskd,ufs,rock 
--enable-x-accelerator-vary --with-dl--enable-linux-netfilter 
--with-netfilter-conntrack --enable-wccpv2 --enable-eui --enable-auth 
--enable-auth-basic --enable-snmp --enable-icmp --enable-auth-digest 
--enable-log-daemon-helpers --enable-url-rewrite-helpers 
--enable-auth-ntlm --with-default-user=squid --enable-icap-client 
--disable-cache-digests --enable-poll --enable-epoll 
--enable-async-io=128 --enable-zph-qos --enable-delay-pools 
--enable-http-violations --enable-url-maps --enable-ecap --enable-ssl 
--with-openssl --enable-ssl-crtd --enable-xmalloc-statistics 
--enable-ident-lookups --with-filedescriptors=163840 
--with-aufs-threads=128 --disable-arch-native 
--with-logdir=/var/log/squid --with-pidfile=/var/run/squid/squid.pid 
--with-swapdir=/var/cache/squid



Hi, after the make install got

/bin/bash ../../../../libtool  --tag=CXX   --mode=link g++ -std=c++17 
-Wall -Wextra -Wimplicit-fallthrough=5 -Wpointer-arith -Wwrite-strings 
-Wcomments -Wshadow -Wmissing-declarations -Woverloaded-virtual -Werror 
-pipe -D_REENTRANT -m64 -I/usr/include/p11-kit-1    -g -O2  -m64   -g -o 
security_file_certgen certificate_db.o security_file_certgen.o 
../../../../src/ssl/libsslutil.la ../../../../src/sbuf/libsbuf.la 
../../../../src/debug/libdebug.la ../../../../src/error/liberror.la 
../../../../src/comm/libminimal.la ../../../../src/mem/libminimal.la 
../../../../src/base/libbase.la ../../../../src/time/libtime.la -lssl 
-lcrypto   -lgnutls   ../../../../compat/libcompatsquid.la
libtool: link: g++ -std=c++17 -Wall -Wextra -Wimplicit-fallthrough=5 
-Wpointer-arith -Wwrite-strings -Wcomments -Wshadow 
-Wmissing-declarations -Woverloaded-virtual -Werror -pipe -D_REENTRANT 
-m64 -I/usr/include/p11-kit-1 -g -O2 -m64 -g -o security_file_certgen 
certificate_db.o security_file_certgen.o 
../../../../src/ssl/.libs/libsslutil.a 
../../../../src/sbuf/.libs/libsbuf.a 
../../../../src/debug/.libs/libdebug.a 
../../../../src/error/.libs/liberror.a 
../../../../src/comm/.libs/libminimal.a 
../../../../src/mem/.libs/libminimal.a 
../../../../src/base/.libs/libbase.a 
../../../../src/time/.libs/libtime.a -lssl -lcrypto -lgnutls 
../../../../compat/.libs/libcompatsquid.a
/usr/bin/ld: ../../../../src/ssl/.libs/libsslutil.a(crtd_message.o): in 
function `Ssl::CrtdMessage::composeRequest(Ssl::CertificateProperties 
const&)':
/root/squid-6.10/src/ssl/crtd_message.cc:248: undefined reference to 
`EVP_MD_type'


How i can fix it ?


It looks like your Squid is being built with both GnuTLS (bad default) 
and OpenSSL (explicitly requested) support enabled at the same time. I 
doubt Squid code handles that combination well.


As a workaround, try adding --without-gnutls to ./configure options.
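
For example (where "..." stands for your other existing ./configure flags):

   ./configure ... --with-openssl --enable-ssl-crtd --without-gnutls
   make && make install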


HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 6.10 on Fedora 40 cannot intercept and bump SSL Traffic

2024-08-20 Thread Alex Rousskov
12.174, 216.58.206.46, 172.217.18.110, 
172.217.23.110

*   Trying [2a00:1450:4001:80e::200e]:443...
*   Trying 142.250.185.78:443...
* Connected to www.youtube.com <http://www.youtube.com> (142.250.185.78) 
port 443

* schannel: disabled automatic use of client certificate
* ALPN: curl offers http/1.1
* ALPN: server accepted http/1.1
* using HTTP/1.x
 > GET / HTTP/1.1
 > Host: www.youtube.com <http://www.youtube.com>
 > User-Agent: curl/8.8.0
 > Accept: */*
 >
* Request completely sent off
* schannel: remote party requests renegotiation
* schannel: renegotiating SSL/TLS connection
* schannel: SSL/TLS connection renegotiated
* schannel: failed to decrypt data, need more data
< HTTP/1.1 200 OK
< Content-Type: text/html; charset=utf-8
< X-Content-Type-Options: nosniff
< Cache-Control: no-cache, no-store, max-age=0, must-revalidate
< Pragma: no-cache
< Expires: Mon, 01 Jan 1990 00:00:00 GMT
< Date: Mon, 19 Aug 2024 23:28:00 GMT
< X-Frame-Options: SAMEORIGIN
< Strict-Transport-Security: max-age=31536000
< Cross-Origin-Opener-Policy: same-origin-allow-popups; 
report-to="youtube_main"

< Content-Security-Policy: require-trusted-types-for 'script'
< Report-To: 
{"group":"youtube_main","max_age":2592000,"endpoints":[{"url":"https://csp.withgoogle.com/csp/report-to/youtube_main <https://csp.withgoogle.com/csp/report-to/youtube_main>"}]}
< Origin-Trial: 
AmhMBR6zCLzDDxpW+HfpP67BqwIknWnyMOXOQGfzYswFmJe+fgaI6XZgAzcxOrzNtP7hEDsOo1jdjFnVr2IdxQ4AAAB4eyJvcmlnaW4iOiJodHRwczovL3lvdXR1YmUuY29tOjQ0MyIsImZlYXR1cmUiOiJXZWJWaWV3WFJlcXVlc3RlZFdpdGhEZXByZWNhdGlvbiIsImV4cGlyeSI6MTc1ODA2NzE5OSwiaXNTdWJkb21haW4iOnRydWV9
< Permissions-Policy: ch-ua-arch=*, ch-ua-bitness=*, 
ch-ua-full-version=*, ch-ua-full-version-list=*, ch-ua-model=*, 
ch-ua-wow64=*, ch-ua-form-factors=*, ch-ua-platform=*, 
ch-ua-platform-version=*
< P3P: CP="This is not a P3P policy! See 
http://support.google.com/accounts/answer/151657?hl=en 
<http://support.google.com/accounts/answer/151657?hl=en> for more info."

< Server: ESF
< X-XSS-Protection: 0
< Set-Cookie: GPS=1; Domain=.youtube.com <http://youtube.com>; 
Expires=Mon, 19-Aug-2024 23:58:00 GMT; Path=/; Secure; HttpOnly
< Set-Cookie: YSC=zO-uFscIOFo; Domain=.youtube.com <http://youtube.com>; 
Path=/; Secure; HttpOnly; SameSite=none
< Set-Cookie: VISITOR_INFO1_LIVE=ewGx6w06928; Domain=.youtube.com 
<http://youtube.com>; Expires=Sat, 15-Feb-2025 23:28:00 GMT; Path=/; 
Secure; HttpOnly; SameSite=none
< Set-Cookie: VISITOR_PRIVACY_METADATA=CgJJTBIEGgAgFA%3D%3D; 
Domain=.youtube.com <http://youtube.com>; Expires=Sat, 15-Feb-2025 
23:28:00 GMT; Path=/; Secure; HttpOnly; SameSite=none

< Alt-Svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
< Accept-Ranges: none
< Vary: Accept-Encoding
< Transfer-Encoding: chunked
<
{ [3674 bytes data]
100 16366    0 16366    0     0  23561      0 --:--:-- --:--:-- --:--:-- 
23684* schannel: failed to decrypt data, need more data

{ [1082 bytes data]
* schannel: failed to decrypt data, need more data
{ [2756 bytes data]
* schannel: failed to decrypt data, need more data
{ [2756 bytes data]
* schannel: failed to decrypt data, need more data
{ [2756 bytes data]
* schannel: failed to decrypt data, need more data
{ [2756 bytes data]
* schannel: failed to decrypt data, need more data
{ [2756 bytes data]
* schannel: failed to decrypt data, need more data
{ [4134 bytes data]
* schannel: failed to decrypt data, need more data
{ [2436 bytes data]
* schannel: failed to decrypt data, need more data
{ [2756 bytes data]
* schannel: failed to decrypt data, need more data
{ [2756 bytes data]
* schannel: failed to decrypt data, need more data
{ [20670 bytes data]
* schannel: failed to decrypt data, need more data
{ [25896 bytes data]
* schannel: failed to decrypt data, need more data
{ [20670 bytes data]
* schannel: failed to decrypt data, need more data
{ [2756 bytes data]
* schannel: failed to decrypt data, need more data
{ [6580 bytes data]
* schannel: failed to decrypt data, need more data
{ [11024 bytes data]
* schannel: failed to decrypt data, need more data
{ [5193 bytes data]
* schannel: failed to decrypt data, need more data
{ [5512 bytes data]
* schannel: failed to decrypt data, need more data
{ [11024 bytes data]
* schannel: failed to decrypt data, need more data
{ [2756 bytes data]
* schannel: failed to decrypt data, need more data
{ [9646 bytes data]
* schannel: failed to decrypt data, need more data
{ [2756 bytes data]
* schannel: failed to decrypt data, need more data
{ [9336 bytes data]
* schannel: failed to decrypt data, need more data
{ [2756 bytes data]
* schannel: failed to decrypt data, need more data
{ [13803 bytes data]
* schannel: failed to decrypt data, need more data
{ [6890 bytes data]
* schannel: failed to decrypt data, need more data
{ [21746 bytes data]
* schannel: failed to decrypt data, need more data
{ [56204 bytes dat

Re: [squid-users] Squid 6.10 on Fedora 40 cannot intercept and bump SSL Traffic

2024-08-20 Thread Alex Rousskov

On 2024-08-20 06:35, ngtech1...@gmail.com wrote:


Attached a link for the pcap file that might shed some light on the
issue from a technical perspective


That link does not work for me: Nothing is shown but a page with generic 
background and a "get your own free account" signature at the bottom; no 
download starts. Same result after creating an account with nextcloud.


Please feel free to send another link off list if needed.

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] How to add extra header in response in squid

2024-08-20 Thread Alex Rousskov

On 2024-08-20 06:27, nikhil deshpande wrote:


We have two open ports in squid.

Port 1 is transparent proxy with configuration like below
http_port 127.0.0.1:12121


I assume you are using one of the interception modes (i.e. either 
"intercept" or "tproxy") in the above port configuration.



Port 2 is a cache proxy which caches all GET calls, with configuration like 
below


http_port 127.0.0.1:12122 ssl-bump ...



I want to add a response header to the responses we serve from Squid. 
We are using the below directive for the same:


reply_header_add header_name header_value

It seems the below configuration is working fine with port #2 (cache 
port), however it does not work for port #1 (tunnel proxy). Can someone 
tell me how I can add a custom response header for all my open ports?



Squid should apply reply_header_add to all[1] final HTTP responses and 
HTTP 1xx control messages (that Squid sees/sends), regardless of the 
port the corresponding HTTP requests were received on.


[1] The only known exception is CONNECT responses. Quality pull requests 
adding reply_header_add support for Squid-generated CONNECT responses 
are welcome. An _interception_ http_port usually does not get CONNECT 
requests, so I assume you are not dealing with this exception in your tests.

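In squid.conf terms, the directive looks like this (a minimal sketch; the 
header name and value here are made-up placeholders):

  reply_header_add X-Proxy-Info "served-by-squid" all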

If your Squid responds to an HTTP GET request without honoring 
reply_header_add, it is a Squid bug. If you are using Squid v6 or later, 
_and_ can demonstrate that Squid actually handles the GET transaction in 
question, then please report that bug to Squid Bugzilla.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid.conf Issues

2024-08-19 Thread Alex Rousskov

On 2024-08-19 13:48, Piana, Josh wrote:


I added "http_access allow kerb-auth" as part of the generic
authentication settings.

We still want authenticated users to access but we want the rules and
ACL's prior to that to catch them first.


If you want http_access rule X to be checked before http_access rule Y, 
you have to list X above Y in squid.conf: Squid checks all http_access 
directives one by one, top to bottom. At the first match, Squid applies 
the matched action ("allow" or "deny") and does not check any 
other/lower http_access rules.




I apologize, I'm quite new with Linux. Should I move that parameter
to near the end of the config file or remove it altogether?


FWIW, these access controls are not Linux-specific. I cannot tell you 
what http_access order is correct because I do not know what "We want 
authenticated users to access but we want to catch them first" means to 
you in terms of actual access rules. There are many ways to interpret 
that phrase...


In general, rules that do not depend on whether a user is authenticated 
should go above the rules that do depend (or require) authentication. 
This principle avoids needless authentication of potentially malicious 
requests. FWIW, squid.conf.default has an http_access ordering template that 
works for most use cases; you may want to start with that template rather 
than starting from scratch.


For example, if an "allow authless_dst" rule is meant to apply to both 
already-authenticated and not-yet-authenticated requests, then it 
probably should go above the "allow kerb-auth" rule (which triggers 
authentication), but there are many other ways these two rules may 
interact. I cannot tell which order matches your access policies/desires.

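For example, using ACL names from your own config, one possible ordering 
sketch (adjust it to your actual policy; this is not a recommendation) is:

  http_access allow authless_dst
  http_access allow kerb-auth
  http_access deny all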


HTH,

Alex.



-----Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Monday, August 19, 2024 12:12 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid.conf Issues



On 2024-08-19 11:16, Piana, Josh wrote:


After setting up the backend using realmD, sssd, with Kerberos
authentication, I tested with a Windows “squidaduser” account. I can
verify the user accounts connection to the proxy, and it is passing
traffic. The issue is, it’s not being blocked by ANY of the acl’s we
have in place. I was hoping to reach out to help me identify the issue
with the squid.conf file. This is my assumption to be the issue but I
am pretty new at using Linux and completely unfamiliar with setting up
a web proxy.


In most cases, when Squid does not block, it allows. Squid allows when an "http_access 
allow" rule matches. Now look at _all_ of your http_access rules and ask yourself: Which 
"http_access allow" rule matches in my test case?

I do not know enough about your test logic, so I can only speculate that the answer to 
that question is "It is the very first http_access rule!":

  http_access allow kerb-auth

In other words, your configuration allows all authenticated clients. In other 
words, it does not block any authenticated clients. Is that what you want?


HTH,

Alex.




Environment:

Squid Cache: Version 5.5

RHEL 9.4 on a HyperV VM

Linux Client Proxy in a Windows AD environment

Below I will post the config and attempt to edit out any relevant
company/personal information:

##


# General

##


max_filedesc 4096

cache_mgr arcitad...@hexcel.com

cache_effective_user squid

cache_effective_group squid

shutdown_lifetime 5 seconds

##


# Logging

##


# this makes the logs readable to humans

logformat custom %tl.%03tu %>a %Ss/%03>Hs %

auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth -k /etc/squid/HTTP.keytab -s HTTP/.ad..com@AD..COM

auth_param negotiate children 10

auth_param negotiate keep_alive on

acl kerb-auth proxy_auth REQUIRED

http_access allow kerb-auth

##


# Access control - shared/common ACL definitions

##


# acl all src all

acl src_self src 127.0.0.0/8

acl src_self src 10.46.11.69

acl dst_self dst 127.0.0.0/8

acl dst_self dst 10.46.11.69

acl from_arc src 10.46.0.0/15

acl local_dst_addr dst 10.0.0.0/8

acl local_dst_addr dst bldg3..com

acl local_dst_addr dst bldg5..com

acl local_dst_dom dstdomain 

acl proto_FTP proto FTP

acl proto_HTTP proto HTTP

acl localnet src 10.46.49.0/24

acl localnet src 10.47.49.0/24

acl http_ports port 80

acl http_ports port 81

acl http_ports port 8001

acl http_ports port 

Re: [squid-users] Squid 6.10 on Fedora 40 cannot intercept and bump SSL Traffic

2024-08-19 Thread Alex Rousskov

On 2024-08-19 15:27, ngtech1...@gmail.com wrote:


I see that there is an SNI, so I am not sure how to look at the issue.


FWIW, as the next step, I still recommend answering the remaining open 
questions. Everything else makes a fascinating read but is less likely to 
help us make progress (and may obscure/hide actual answers and test 
results). I will restate those remaining questions for your convenience:


* Do all those 12 access.log records correspond to a single curl 
request? If not, please only share access.log record(s) that do correspond.


2. Does everything work for non-intercept ports? Use the same curl test 
you have shared below, but specify proxy address for curl to use.


4. Does everything work when you remove "ssl-bump" and related options 
from intercepting http_port 33128 (and use that intercepted port in the 
same curl test)?


5. Does everything work when you use "ssl_bump splice all" instead of 
your current ssl_bump rule? Same curl parameters as in Q4.


6. Does everything work when you use "ssl_bump peek all" instead of your 
current ssl_bump rule? Same curl parameters as in Q4.



While going through the above list, top to bottom, if you find a test 
that does _not_ work, pause: There is no need to proceed with other, 
more complicated tests if an earlier simpler/basic test fails.

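For the curl test in Q2 and the ssl_bump variations in Q5/Q6, a quick sketch 
(the proxy address is a placeholder; 3128 is the plain forward-proxy port from 
your config):

  # Q2: same request, but sent through the forward-proxy port
  curl -x http://PROXY-IP:3128 https://www.youtube.com/ -k -v -o 1.txt

  # Q5/Q6: temporarily replace your current ssl_bump rule with one of
  ssl_bump splice all
  ssl_bump peek all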


HTH,

Alex.



I was thinking that maybe it's something with the OpenSSL version (3.x.x) on 
Fedora but then I installed both 5.9 and 6.10 on Almalinux 8 and the result is 
the same.

I will describe my setup which might give some background.
I have a very big lab...
In front of the Internet connection there are a couple of NGFW devices and 
RouterOS.
Mikrotik RouterOS is the edge and all the others are used with PBR accordingly.
The proxy sits in a different segment of the network, and I have tried a couple of 
methods to intercept the traffic with Squid.
The only one which works with Squid and the existing equipment, and does not cause 
some weird loop, is an ethernet-level tunnel, i.e. not:
* GRE
* IPIP
And a couple of others.

The only ones which work fine are:
* EoIP (Mikrotik, which is based on GRE)
* VxLAN

There are two methods to intercept the traffic:
* PBR+DNAT on the squid box
* PBR+TPROXY on the squid box

Since the intercept method terminates the connection and creates a new one with 
the IP of the proxy, it's very simple to even use GRE and IPIP.
But with tproxy, to allow the traffic to be identified as a packet that is not yet 
in the routing stack, the Linux OS needs to tag it somehow.
To do that, the default "salt" for the packet hash in the routing stack is the 
source and destination MAC address.
Due to this, the only methods which allow using tproxy are the above-mentioned 
tunnels. (Maybe I will post a video on it later on with a demo.)

The Mikrotik RouterOS device re-routes the traffic from the LAN interface into 
the VxLAN interface directly to the proxy machine which has a
static or dynamic route to the LAN subnet via the other side of the VxLAN 
tunnel which is the edge RouterOS device.
I want to gather a set of configurations and tests for this configuration to 
verify what might cause this issue and, if possible, to resolve it.
For me it seems that if my FortiGate and CheckPoint devices are able to intercept the 
traffic and "bump" it, there is no reason why Squid should not
be able to do the same.

I will later on send you a private link to the pcaps in a zip file so you would 
be able to inspect this issue at the network level and to see if there are 
some details which can help us understand what causes this specific issue.

I want to say that bumping works fine on non-intercepted connections and that I 
have tested the interception with the two available methods ie:
* DNAT Redirect
* Tproxy

Thanks,
Eliezer Croitoru

-Original Message-
From: Alex Rousskov 
Sent: Monday, August 19, 2024 7:18 PM
To: NgTech LTD 
Subject: Re: [squid-users] Squid 6.10 on Fedora 40 cannot intercept and bump 
SSL Traffic

Eliezer, please move this thread back to squid-users mailing list
instead of emailing me personally. When you do so, please clarify
whether all 12 access.log records correspond to this single curl request
(if not, please only share access.log record(s) that do correspond). --Alex.

On 2024-08-19 12:03, NgTech LTD wrote:

This is the output of curl on windows 11 desktop:
C:\Users\USER>curl https://www.youtube.com/ -k -v -o 1.txt
% Total% Received % Xferd  Average Speed   TimeTime Time
   Current
   Dload  Upload   Total   SpentLeft
   Speed
0 00 00 0  0  0 --:--:-- --:--:--
--:--:-- 0* Host www.youtube.com:443 <http://www.youtube.com:443>
was resolved.
* IPv6: 2a00:1450:4001:800::200e, 2a00:1450:4001:80e::200e,
2a00:1450:4001:81c::200e, 2a00:1450:4001:809::200e
* IPv4: 142.250.185.78, 142.250.185.110, 142.250.185.14

Re: [squid-users] Squid.conf Issues

2024-08-19 Thread Alex Rousskov

On 2024-08-19 11:16, Piana, Josh wrote:

After setting up the backend using realmD, sssd, with Kerberos 
authentication, I tested with a Windows “squidaduser” account. I can 
verify the user accounts connection to the proxy, and it is passing 
traffic. The issue is, it’s not being blocked by ANY of the acl’s we 
have in place. I was hoping to reach out to help me identify the issue 
with the squid.conf file. This is my assumption to be the issue but I am 
pretty new at using Linux and completely unfamiliar with setting up a 
web proxy.


In most cases, when Squid does not block, it allows. Squid allows when 
an "http_access allow" rule matches. Now look at _all_ of your 
http_access rules and ask yourself: Which "http_access allow" rule 
matches in my test case?


I do not know enough about your test logic, so I can only speculate that 
the answer to that question is "It is the very first http_access rule!":


http_access allow kerb-auth

In other words, your configuration allows all authenticated clients. In 
other words, it does not block any authenticated clients. Is that what 
you want?



HTH,

Alex.




Environment:

Squid Cache: Version 5.5

RHEL 9.4 on a HyperV VM

Linux Client Proxy in a Windows AD environment

Below I will post the config and attempt to edit out any relevant 
company/personal information:


##

# General

##

max_filedesc 4096

cache_mgr arcitad...@hexcel.com

cache_effective_user squid

cache_effective_group squid

shutdown_lifetime 5 seconds

##

# Logging

##

# this makes the logs readable to humans

logformat custom %tl.%03tu %>a %Ss/%03>Hs %

auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth -k /etc/squid/HTTP.keytab -s HTTP/.ad..com@AD..COM


auth_param negotiate children 10

auth_param negotiate keep_alive on

acl kerb-auth proxy_auth REQUIRED

http_access allow kerb-auth

##

# Access control - shared/common ACL definitions

##

# acl all src all

acl src_self src 127.0.0.0/8

acl src_self src 10.46.11.69

acl dst_self dst 127.0.0.0/8

acl dst_self dst 10.46.11.69

acl from_arc src 10.46.0.0/15

acl local_dst_addr dst 10.0.0.0/8

acl local_dst_addr dst bldg3..com

acl local_dst_addr dst bldg5..com

acl local_dst_dom dstdomain 

acl proto_FTP proto FTP

acl proto_HTTP proto HTTP

acl localnet src 10.46.49.0/24

acl localnet src 10.47.49.0/24

acl http_ports port 80

acl http_ports port 81

acl http_ports port 8001

acl http_ports port 8080

acl Ssl_ports port 443

acl Ssl_ports port 9571

acl SSL_ports port 443

acl Safe_ports port 80

acl Safe_ports port 21

acl Safe_ports port 443

acl ssh_ports port 22

acl ftp_ports port 21

http_access deny !Safe_ports

acl method_CONNECT method CONNECT

acl methods_std method GET HEAD POST PUT DELETE

acl methods_std method TRACE OPTIONS

##

# Access control - maintenance

##

acl purge method PURGE

http_access allow purge src_self

http_access deny purge

acl cache_manager proto cache_object

cachemgr_passwd disabled shutdown offline_toggle

cachemgr_passwd none all

http_access allow cache_manager src_self

http_access deny cache_manager

#

# Access control - general proxy

##

http_access deny dst_self

http_access deny src_self

http_access deny !from_arc

http_access   allow local_dst_dom

http_reply_access   allow local_dst_dom

http_access   allow local_dst_addr

http_reply_access   allow local_dst_addr

acl authless_src src "/etc/squid/authless_src"

http_access   allow authless_src

http_reply_access   allow authless_src

acl authless_dst dstdomain "/etc/squid/authless_dst"

http_access   allow authless_dst

http_reply_access   allow authless_dst

acl bad_domains_preauth dstdomain "/etc/squid/bad_domains_preauth"

http_access deny bad_domains_preauth

acl block_user proxy_auth_regex -i "/etc/squid/block_user"

http_access deny block_user

acl bad_exception_urls url_regex -i "/etc/squid/bad_exception_urls"

acl exec_files url_regex -i "/etc/squid/exec_files"

acl exec_users proxy_auth_regex -i "/etc/squid/exec_users"

http_access deny !bad_exception_urls !exec_users exec_files

deny_info ERR_BLOCK_TYPE exec_files

acl mmedia_users proxy_auth_regex -i "/etc/squid/mmedia_users"

acl mmedia_sites dstdomain 

Re: [squid-users] SQUID 6.10 vulnerabilities

2024-08-19 Thread Alex Rousskov

On 2024-08-19 07:37, Guy Tzudkevitz wrote:


I'm running Squid on Ubuntu 22.04

I ran a vulnerability scan on this server and got a result from the 
vendor that this version is vulnerable. See. Is there any way to fix it?


There is, but we cannot fix that scanner. Please contact the vendor that 
provided you with that scanner. As far as Squid is concerned:


* Squid v6.10 is not vulnerable to some of the vulnerabilities listed 
below. For example, Squid v6.10 is not vulnerable to "X-Forwarded-For 
Stack Overflow" and "Chunked Encoding Stack Overflow". I only checked a 
few, so I cannot give you an exact count of misleading "insight" entries 
in the dump of vulnerability names you have shared.


* No reasonable Squid build/configuration is vulnerable to most of the 
vulnerabilities listed below. For example, reasonable Squid builds 
should not enable (or, in older Squid versions, should explicitly 
disable) ESI support at ./configure time; reasonable Squid 
configurations should not enable pipeline_prefetch. Just these two 
(default in Squid v6.10!) precautions would address 15+ vulnerabilities.

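For illustration, those two precautions look roughly like this (a sketch; 
verify against your own build and configuration before relying on it):

  ./configure --disable-esi ...   # build-time precaution
  pipeline_prefetch 0             # squid.conf; 0 is already the default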

* Certain Squid builds/configurations are still vulnerable to a few of 
those reported vulnerabilities because nobody volunteered Squid changes 
to address them. In most cases (e.g., ESI and pipeline_prefetch), nobody 
who can develop (or pay for) a quality fix is affected by those 
vulnerabilities. I do not know whether those vulnerabilities affect 
_your_ Squid installations. If they do, please see

https://wiki.squid-cache.org/SquidFaq/AboutSquid#how-to-add-a-new-squid-feature-enhance-of-fix-something

* IMO, Squid Project has screwed up its official response to the 
surprise publication of those vulnerabilities in 2023: AFAIK, there is 
still no concise summary of vulnerabilities remaining in the latest 
supported Squid release and their corresponding workarounds (if any). 
There is some useful info at the URL below, but it is incomplete and 
converting that info to such a summary requires significant effort:

https://github.com/squid-cache/squid/security/advisories/


HTH,

Alex.



Vulnerability Details
Name
Squid Multiple 0-Day Vulnerabilities (Oct 2023)
Found On
X.X.X.X
Insight


The following flaws have been reported in 2021 to the vendor and seems 
to be not fixed yet: - Use-After-Free in TRACE Requests - 
X-Forwarded-For Stack Overflow - Chunked Encoding Stack Overflow - 
Use-After-Free in Cache Manager Errors - Memory Leak in HTTP Response 
Parsing - Memory Leak in ESI Error Processing - 1-Byte Buffer OverRead 
in RFC 1123 date/time Handling GHSA-8w9r-p88v-mmx9 - One-Byte Buffer 
OverRead in HTTP Request Header Parsing - strlen(NULL) Crash Using 
Digest Authentication GHSA-254c-93q9-cp53 - Assertion in ESI Header 
Handling - Gopher Assertion Crash - Whois Assertion Crash - RFC 2141 / 
2169 (URN) Assertion Crash - Assertion in Negotiate/NTLM Authentication 
Using Pipeline Prefetching - Assertion on IPv6 Host Requests with 
--disable-ipv6 - Assertion Crash on Unexpected 'HTTP/1.1 100 Continue' 
Response Header - Pipeline Prefetch Assertion With Double 
'Expect:100-continue' Request Headers - Pipeline Prefetch Assertion With 
Invalid Headers - Assertion Crash in Deferred Requests - Assertion in 
Digest Authentication - FTP Authentication Crash - Assertion Crash In 
HTTP Response Headers Handling - Implicit Assertion in Stream Handling - 
Use-After-Free in ESI 'Try' (and 'Choose') Processing - Use-After-Free 
in ESI Expression Evaluation - Buffer Underflow in ESI 
GHSA-wgvf-q977-9xjg - Assertion in Squid 'Helper' Process Creator 
GHSA-xggx-9329-3c27 - Assertion Due to 0 ESI 'when' Checking 
GHSA-4g88-277m-q89r - Assertion Using ESI's When Directive 
GHSA-4g88-277m-q89r - Assertion in ESI Variable Assignment (String) - 
Assertion in ESI Variable Assignment - Null Pointer Dereference In ESI's 
esi:include and esi:when Note: Various GHSA advisories have been 
provided by the security researcher but are not published / available yet.



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 6.10 on Fedora 40 cannot intercept and bump SSL Traffic

2024-08-19 Thread Alex Rousskov

On 2024-08-19 03:47, NgTech LTD wrote:

I am testing Squid 6.10 on Fedora 40 (their package).
And it seems that Squid is unable to bump clients (ESNI/ECH)?

I had a couple of iterations of peek, stare, and bump, and I am not sure what is 
the reason for that:


What do you use as a client? Judging by the number of 
error:invalid-request entries in your access.log, that client may not be 
speaking HTTP/1 inside those CONNECT tunnels.


Does everything work for non-intercept ports?

Does everything work in a basic curl or wget test?

Does everything work when you remove "ssl-bump" and related options from 
intercepting http_port 33128?


Does everything work when you use "ssl_bump splice all" instead of your 
current ssl_bump rule?


Does everything work when you use "ssl_bump peek all" instead of your 
current ssl_bump rule?


Alex.



shutdown_lifetime 3 seconds
external_acl_type whitelist-lookup-helper ipv4 ttl=10 children-max=10 
children-startup=2 \
         children-idle=2 concurrency=10 %URI %SRC 
/usr/local/bin/squid-conf-url-lookup.rb

acl whitelist-lookup external  whitelist-lookup-helper
acl ytmethods method POST GET
acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8              # RFC 1918 
local private network (LAN)
acl localnet src 100.64.0.0/10           # RFC 
6598 shared address space (CGN)
acl localnet src 169.254.0.0/16          # RFC 
3927 link-local (directly plugged) machines
acl localnet src 172.16.0.0/12           # RFC 
1918 local private network (LAN)
acl localnet src 192.168.0.0/16          # RFC 
1918 local private network (LAN)
acl localnet src fc00::/7               # RFC 4193 local private network 
range
acl localnet src fe80::/10              # RFC 4291 link-local (directly 
plugged) machines

acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager
http_access allow localhost
http_access deny to_localhost
http_access deny to_linklocal
acl tubedoms dstdomain .ytimg.com  .youtube.com 
 .youtu.be 

http_access allow ytmethods localnet tubedoms whitelist-lookup
http_access allow localnet
http_access deny all
http_port 3128
http_port 13128 ssl-bump tls-cert=/etc/squid/ssl/cert.pem 
tls-key=/etc/squid/ssl/key.pem \

         generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
http_port 23128 tproxy ssl-bump tls-cert=/etc/squid/ssl/cert.pem 
tls-key=/etc/squid/ssl/key.pem \

         generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
http_port 33128 intercept ssl-bump tls-cert=/etc/squid/ssl/cert.pem 
tls-key=/etc/squid/ssl/key.pem \

         generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
sslcrtd_program /usr/lib64/squid/security_file_certgen -s 
/var/spool/squid/ssl_db -M 4MB

sslcrtd_children 5
acl foreignProtocol squid_error ERR_PROTOCOL_UNKNOWN ERR_TOO_BIG
acl serverTalksFirstProtocol squid_error ERR_REQUEST_START_TIMEOUT
on_unsupported_protocol tunnel foreignProtocol
on_unsupported_protocol tunnel serverTalksFirstProtocol
on_unsupported_protocol respond all
acl monitoredSites ssl::server_name .youtube.com  
.ytimg.com 

acl monitoredSitesRegex ssl::server_name_regex \.youtube\.com \.ytimg\.com
acl serverIsBank ssl::server_name .visa.com 
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
ssl_bump bump all
strip_query_terms off
coredump_dir /var/spool/squid
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
logformat ssl_custom_format %ts.%03tu %6tr %>a %Ss/%03>Hs %%[un %Sh/%sni

access_log daemon:/var/log/squid/access.log ssl_custom_format
##EOF

access.log from before:
1724028804.797    486 192.168.78.15 TCP_TUNNEL/200 17764 CONNECT 
40.126.31.73:443  - ORIGINAL_DST/40.126.31.73 
 - -
1724028805.413      0 192.168.78.15 NONE_NONE/000 0 - 
error:invalid-request - HIER_NONE/- - -
1724028806.028      0 192.168.78.15 NONE_NONE/000 0 - 
error:invalid-request - HIER_NONE/- - -
1724028806.028      0 192.168.78.15 NONE_NONE/000 0 - 
error:invalid-request - HIER_NONE/- - -
1724028806.029      0 192.168.78.15 NONE_NONE/000 0 - 
error:invalid-request - HIER_NONE/- - -
1724028806.030      0 192.168.78.15 N

Re: [squid-users] squid 5.7 FTP upload fails

2024-08-13 Thread Alex Rousskov

On 2024-08-13 09:53, Matus UHLAR - fantomas wrote:

On 2024-08-09 09:03, Matus UHLAR - fantomas wrote:
FTR, I sent the info privately, but we can continue 
discussing/solving it here.


maybeMakeSpaceAvailable: request buffer full: 
client_request_buffer_max_size=524288

ReadNow: ... size 0, retval 0, errno 0
terminateAll: 1/1 after ERR_CLIENT_GONE/WITH_CLIENT



On 09.08.24 09:51, Alex Rousskov wrote:
AFAICT, you are suffering from a Squid bug that goes back to 2015 
commit 4d1376d7 (that was fixing Squid bug 4206) or even earlier 
code. Red flags were raised back in 2015, but nobody followed 
through. We need to refactor 
Server::maybeMakeSpaceAvailable()-related code to fix this bug.


FWIW, we did similar work for Squid-to-peer communication in commit 
cc8b26f9 and commit 50c5af88. Similar principles should apply to 
client-Squid communication. Unfortunately, I do not have enough free 
time to volunteer to fix this client-Squid bug now.


At the expense of using more memory for some upload cases, you may be 
able to work around the bug in some (but not all!) cases by making 
client_request_buffer_max_size sufficiently large. For example, 
setting client_request_buffer_max_size to 16 megabytes may allow you 
to pass the test case you have privately shared.

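For example, in squid.conf (16 MB is just the value used in the suggestion 
above, not a tuned recommendation):

  client_request_buffer_max_size 16 MB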

On 12.08.24 11:27, Matus UHLAR - fantomas wrote:

I have submitted bug 5405 so it's documented:

https://bugs.squid-cache.org/show_bug.cgi?id=5405


Does this bug apply for POST requests as well?


I speculate that the answer is "yes": IIRC, the underlying bug may 
affect any request with a large-enough body in _combination_ with other 
events/factors that affect buffered data consumption. I have not 
checked, but one of those factors could be, for example, the receiving 
port protocol (i.e. ftp_port and FTP in your test case) and/or REQMOD 
adaptation. I speculate that those "other" events rarely happen in a 
typical Squid installation (explaining why most large requests do not 
get stuck), but can be reproduced under controlled conditions with 
http_port POSTs.



HTH,

Alex.

Because the biggest PUT is about 4MB so far, but I've seen POST requests 
nearly 54MB big.


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 5.7 FTP upload fails

2024-08-09 Thread Alex Rousskov

On 2024-08-09 09:03, Matus UHLAR - fantomas wrote:

FTR, I sent the info privately, but we can continue discussing/solving 
it here.



maybeMakeSpaceAvailable: request buffer full: 
client_request_buffer_max_size=524288
ReadNow: ... size 0, retval 0, errno 0
terminateAll: 1/1 after ERR_CLIENT_GONE/WITH_CLIENT


AFAICT, you are suffering from a Squid bug that goes back to 2015 commit 
4d1376d7 (that was fixing Squid bug 4206) or even earlier code. Red 
flags were raised back in 2015, but nobody followed through. We need to 
refactor Server::maybeMakeSpaceAvailable()-related code to fix this bug.


FWIW, we did similar work for Squid-to-peer communication in commit 
cc8b26f9 and commit 50c5af88. Similar principles should apply to 
client-Squid communication. Unfortunately, I do not have enough free 
time to volunteer to fix this client-Squid bug now.


At the expense of using more memory for some upload cases, you may be 
able to work around the bug in some (but not all!) cases by making 
client_request_buffer_max_size sufficiently large. For example, setting 
client_request_buffer_max_size to 16 megabytes may allow you to pass the 
test case you have privately shared.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 5.7 FTP upload fails

2024-08-08 Thread Alex Rousskov

On 2024-08-08 10:53, Matus UHLAR - fantomas wrote:

On 2024-08-08 06:19, Matus UHLAR - fantomas wrote:

Perhaps configuring proper debug_options, but which?


On 08.08.24 08:15, Alex Rousskov wrote:
Yes, we should escalate triage to debugging log analysis. I am willing 
to study your ALL,9 cache.log collected from Squid v6 while 
reproducing the problem using a single transaction (or as few 
transactions as possible). Please share a pointer to compressed 
cache.log file (privately if needed). You probably do not need them, 
but there are a few relevant hints at 
https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction



The bad news is that "debug_options ALL,9" caused test upload to succeed
(multiple attempts). After commenting it out it fails again.


You are probably dealing with a race condition of some kind, but I would 
prefer not to speculate about the nature of this bug at this time.


Please try "ALL,3 33,5 9,5". Slowly decrease "5"s until you can 
reproduce. Share the log from that test.

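In squid.conf terms that suggestion is (a sketch; lower the 5s step by step as 
described above):

  debug_options ALL,3 33,5 9,5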

Please note that we do not need to have a reliable test at this point. 
We only need a single test run that reproduces the problem (with enough 
debugging enabled). Once we know more about the bug, we may be able to 
either fix it or adjust debugging instructions to ease reproduction...




however the "* We are completely uploaded and fine" message is new here
(but doesn't appear all the time)


Ideally, we want debugging cache.log from a failed test without that 
message, but we can start with a test that does have one if that is the 
only one you can produce right now.


Cheers,

Alex.



I also tried 9,9 without success.


Could errors like this be caused by something like lacking fflush() on 
socket?




# curl -v --no-progress-meter -x 192.168.A.B:3128 -T cruft.out.gz -u 
"anonymous:du...@example.net" ftp://example.net/incoming/

*   Trying 192.168.A.B:3128...
* Connected to 192.168.A.B (192.168.A.B) port 3128 (#0)
* Server auth using Basic with user 'anonymous'
PUT 
ftp://anonymous:dummy%40example@example.net/incoming/cruft.out.gz 
HTTP/1.1

Host: example.net:21
Authorization: Basic YW5vbnltb3VzOmR1bW15QGV4YW1wbGUubmV0
User-Agent: curl/7.88.1
Accept: */*
Proxy-Connection: Keep-Alive
Content-Length: 10438601
Expect: 100-continue


* Done waiting for 100-continue
} [65536 bytes data]
* We are completely uploaded and fine
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer






___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Questions about Squid configuration

2024-08-08 Thread Alex Rousskov

On 2024-08-06 20:59, にば wrote:


When using Squid transparently, is it possible to control the
whitelist of the domain to connect to and inspect the Host field in
the request header together?


Short answer: Yes.



According to the verification results, the Host field can be inspected
by "host_verify_strict on" in squid-transparent.conf, but it seems
that the whitelist is not controlled.


AFAICT, the configuration you have shared allows all banned[1] traffic 
to/through https_port. For the problematic test case #5:


All these http_access rules do _not_ match:


http_access allow localnet whitelist
http_access deny localnet whitelist_https !https_port
http_access deny localnet whitelist_transparent_https !https_port



And then this next rule matches and allows traffic through:


http_access allow https_port



This last http_access rule is not reached:


http_access deny all



N.B. The above analysis assumes that your https_port ACL is explicitly 
defined in your squid.conf to match all traffic received at https_port. 
If you do not have such an ACL defined, then you need to fix that 
problem as well. I recommend naming ACLs differently from directive 
names (e.g., "toHttpsPort" rather than "https_port").

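One hypothetical way to define such an ACL (a sketch only; the name= option, 
the myportname ACL, and port 3130 are assumptions based on the config you 
shared):

  https_port 3130 intercept ssl-bump ... name=httpsInterceptPort
  acl toHttpsPort myportname httpsInterceptPort
  http_access deny localnet whitelist_https !toHttpsPort
  http_access deny localnet whitelist_transparent_https !toHttpsPort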


Please note that Squid v4 is not supported by the Squid Project and is 
very buggy. I recommend using Squid v6 or later.



HTH,

Alex.
[1] Here, "banned" means "_not_ matching whitelist ACL".



■Configuration Details
〇squid-transparent.conf(Excerpts)
#Whitelist
acl whitelist dstdomain "/etc/squid/whitelist"
acl whitelist dstdomain "/etc/squid/whitelist_transparent"
acl whitelist_https dstdomain "/etc/squid/whitelist_https"
acl whitelist_transparent_https dstdomain
"/etc/squid/whitelist_transparent_https"

proxy_protocol_access allow localnet
proxy_protocol_access deny all
http_access allow localnet whitelist
http_access deny localnet whitelist_https !https_port
http_access deny localnet whitelist_transparent_https !https_port

# Handling HTTP requests
http_port 3129 intercept
# Handling HTTPS requests
https_port 3130 intercept tcpkeepalive=60,30,3 ssl-bump
generate-host-certificates=on dynamic_cert_mem_cache_size=20MB
tls-cert=/etc/squid/ssl/squid.crt tls-key=/etc/squid/ssl/squid.key
cipher=HIGH:MEDIUM:!LOW:!RC4:!SEED:!IDEA:!3DES:!MD5:!EXP:!PSK:!DSS
options=NO_TLSv1,NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE
tls-dh=prime256v1:/etc/squid/ssl/bump_dhparam.pem
# Start up for squid process
http_port 3131
http_access allow https_port
acl allowed_https_sites ssl::server_name "/etc/squid/whitelist"
acl allowed_https_sites ssl::server_name "/etc/squid/whitelist_transparent"
acl allowed_https_sites ssl::server_name "/etc/squid/whitelist_https"
acl allowed_https_sites ssl::server_name
"/etc/squid/whitelist_transparent_https"

http_access deny all

# strict setting
host_verify_strict on

# SSL_BUMP
sslcrtd_program /usr/lib64/squid/security_file_certgen -s
/var/lib/squid/ssl_db -M 20MB
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3

ssl_bump bump all




■Verification of Settings
I ran the curl command from each of the client environments that use Squid.
1. if SNI, Destination IP, and HeaderHost are correct, the user should
be able to connect to pypi.org
Command:
date;curl https://pypi.org/ -v --cacert squid_2.crt -k
Result: OK

2. rejection of communication to pypi.org if SNI is correct but
destination IP and HeaderHost are incorrect
Command:
date;curl https://pypi.org/ --resolve pypi.org:443:182.22.24.252 -H
"Host: www.yahoo.co.jp"  -v --cacert squid_2.crt -k
Result: OK (409 Conflict is returned)

3. rejection of communication to pypi.org if SNI and destination IP
are correct and HeaderHost is incorrect
Command:
date;curl https://pypi.org/ -H "Host: www.yahoo.co.jp" -v --cacert
squid_2.crt -k
Result: OK (409 Conflict is returned)

4. rejection of communication to pypi.org if SNI and HeaderHost are
correct but destination IP is incorrect
Command:
date;curl https://pypi.org/ --resolve pypi.org:443:182.22.24.252 -v
--cacert squid_2.crt -k
Result: OK (409 Conflict is returned)

5. if SNI, destination IP, and HeaderHost are all invalid (yahoo.co.jp
not registered in whitelist), communication will be rejected
Command:
date;curl https://yahoo.co.jp/ -v --cacert squid_2.crt -k
Result: NG (301 Moved Permanently is returned, but it appears that the
communication is reaching yahoo.co.jp)





___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 5.7 FTP upload fails

2024-08-08 Thread Alex Rousskov

On 2024-08-08 06:19, Matus UHLAR - fantomas wrote:

On 2024-08-07 13:05, Matus UHLAR - fantomas wrote:

after we upgraded squid 4.13 to squid 5.7 (debian 11 to 12)
our user reported that attempting to upload bigger files fails.


On 07.08.24 14:31, Alex Rousskov wrote:
Thank you for sharing access.log records. Any relevant messages in 
cache.log?


No. 


Thank you for answering all of my questions.



Perhaps configuring proper debug_options, but which?


Yes, we should escalate triage to debugging log analysis. I am willing 
to study your ALL,9 cache.log collected from Squid v6 while reproducing 
the problem using a single transaction (or as few transactions as 
possible). Please share a pointer to compressed cache.log file 
(privately if needed). You probably do not need them, but there are a 
few relevant hints at 
https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction



Thank you,

Alex.


There were a few bug fixes related to FTP and uploads, but I do not 
see some of them in (unsupported) v5. Does your test work with Squid v6+?


I tried with squid 6.10 (manually backported from debian unstable).
The same behaviour.

# time curl -v --no-progress-meter -x 10.X.Y.Z:port -T 
gnome-cards-data_3.22.23-1_all.deb -u "anonymous:pro...@example.com" 
ftp://example.net/incoming/

*   Trying 10.X.Y.Z:port...
* Connected to 10.X.Y.Z (10.X.Y.Z) port port (#0)
* Server auth using Basic with user 'anonymous'
PUT 
ftp://anonymous:proxy1%40example@example.net/incoming/gnome-cards-data_3.22.23-1_all.deb HTTP/1.1

Host: example.net:21
Authorization: Basic YW5vbnltb3VzOnByb3h5MUB2c3pwLnNr
User-Agent: curl/7.74.0
Accept: */*
Proxy-Connection: Keep-Alive
Content-Length: 10526800
Expect: 100-continue


* Done waiting for 100-continue
} [65536 bytes data]
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer

real    0m1.043s
user    0m0.004s
sys 0m0.012s




___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 5.7 FTP upload fails

2024-08-07 Thread Alex Rousskov

On 2024-08-07 13:05, Matus UHLAR - fantomas wrote:


after we upgraded squid 4.13 to squid 5.7 (debian 11 to 12)
our user reported that attempting to upload bigger files fails.


Thank you for sharing access.log records. Any relevant messages in 
cache.log?


There were a few bug fixes related to FTP and uploads, but I do not see 
some of them in (unsupported) v5. Does your test work with Squid v6+?


Alex.



I was able to reproduce this behaviour:

# time curl --no-progress-meter -x 10.X.Y.Z:port -T 
gnome-cards-data_3.22.23-1_all.deb -u "anonymous:pro...@example.com" 
ftp://example.net/incoming/

curl: (56) Recv failure: Connection reset by peer

real    0m1.041s
user    0m0.008s
sys 0m0.008s

-rw-r- 1 ftpd ftpd 1468896 Aug  7 18:21 
gnome-cards-data_3.22.23-1_all.deb


# time curl --no-progress-meter -x 10.X.Y.Z:port -T 
gnome-cards-data_3.22.23-1_all.deb -u "anonymous:pro...@example.com" 
ftp://example.net/incoming/

curl: (56) Recv failure: Connection reset by peer

real    0m1.044s
user    0m0.008s
sys 0m0.008s

-rw-r- 1 ftpd ftpd 1768488 Aug  7 18:21 
gnome-cards-data_3.22.23-1_all.deb


Using squid 4.13 (same network, debian 11 server) works.

1723047707.274   1031 10.X.Y.W TCP_MISS_ABORTED/000 0 PUT 
ftp://anonymous:proxy1%40example@example.net/incoming/gnome-cards-data_3.22.23-1_all.deb - HIER_DIRECT/192.0.2.1 - "curl/7.74.0"



I tried verbose curl output did not help:

# time curl -v --no-progress-meter -x 10.X.Y.Z:port -T 
gnome-cards-data_3.22.23-1_all.deb -u "anonymous:pro...@example.com" 
ftp://example.net/incoming/

*   Trying 10.X.Y.Z:port...
* Connected to 10.X.Y.Z (10.X.Y.Z) port port (#0)
* Server auth using Basic with user 'anonymous'
PUT 
ftp://anonymous:proxy1%40example@example.net/incoming/gnome-cards-data_3.22.23-1_all.deb HTTP/1.1

Host: example.net:21
Authorization: Basic YW5vbnltb3VzOnByb3h5MUB2c3pwLnNr
User-Agent: curl/7.74.0
Accept: */*
Proxy-Connection: Keep-Alive
Content-Length: 10526800
Expect: 100-continue


* Done waiting for 100-continue
} [65536 bytes data]
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer

real    0m1.043s
user    0m0.004s
sys 0m0.012s



I have tried to extend the 100-continue timeout for curl, no change.
(approximately same file sizes, same error only after 10 seconds)

configuring "force_request_body_continuation allow all" did not help,
the error occurred immediately:

# time curl -v --no-progress-meter -x 10.X.Y.Z:port -T 
gnome-cards-data_3.22.23-1_all.deb -u "anonymous:pro...@example.com" 
ftp://example.net/incoming/

*   Trying 10.X.Y.Z:port...
* Connected to 10.X.Y.Z (10.X.Y.Z) port port (#0)
* Server auth using Basic with user 'anonymous'
PUT 
ftp://anonymous:proxy1%40example@example.net/incoming/gnome-cards-data_3.22.23-1_all.deb HTTP/1.1

Host: example.net:21
Authorization: Basic YW5vbnltb3VzOnByb3h5MUB2c3pwLnNr
User-Agent: curl/7.74.0
Accept: */*
Proxy-Connection: Keep-Alive
Content-Length: 10526800
Expect: 100-continue


* Mark bundle as not supporting multiuse
< HTTP/1.1 100 Continue
< Connection: keep-alive
} [65536 bytes data]
* Send failure: Broken pipe
* Closing connection 0
curl: (55) Send failure: Broken pipe

real    0m0.152s
user    0m0.005s
sys 0m0.009s

1723050086.883    139 10.X.Y.W TCP_MISS_ABORTED/100 0 PUT 
ftp://anonymous:proxy1%40example@example.net/incoming/gnome-cards-data_3.22.23-1_all.deb - HIER_DIRECT/192.0.2.1 - "curl/7.74.0"


Any idea what can be wrong?
CURL 8.8.0 from debian backports did not help as well.



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID - WINDBIND - very slow internet speed

2024-07-26 Thread Alex Rousskov

On 2024-07-26 03:31, Francesco Chemolli wrote:

Have you considered
https://wiki.squid-cache.org/Features/HelperMultiplexer 


Just in case you do not know how to find the actual helper program 
described on the above page, it is installed as libexec/helper-mux. That 
helper has a manual page.

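For illustration only, the general pattern is to put the multiplexer in front 
of the real helper (a sketch; the paths and helper arguments below are 
assumptions, so check the manual page and whether your helper works behind a 
multiplexer before using this):

  man helper-mux
  auth_param ntlm program /usr/lib64/squid/helper-mux /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp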


HTH,

Alex.


On Fri, 26 Jul 2024 at 8:23 AM, Andrey K wrote:

Hello, Andre,


 > How to know if the helper supports concurrent requests?
You are using /usr/bin/ntlm_auth, and, as far as I know, it does not
support concurrency. But I do not know other ntlm-authentication
helpers.

 > winbindd: Exceeding 500 client connections, no idle connection found
 > I will increase this value to check if it helps to settle the issue
I think it will only hide the problem.
In my opinion, it is better to follow Alex's advice and reduce the
number of ntlm-helpers. It should prevent exceeding the maximum
winbind client connections error messages.
The actual number of required ntlm-helpers can be obtained during
the working day.
ps -ef | grep ntlm_auth | grep -v wrapper | grep -v basic | wc -l
You can divide this number by the number of workers and add some
spare ones.

When the problem appears again, you can follow the advice of Francesco:
> In order to bisect the problem, could you try using `wbinfo -a` on one
> of the affected machines to authenticate against Active Directory and
> see if the performance is on the winbindd <-> AD side of the equation
> or on the squid <-> ntlm_auth side?
sudo wbinfo -t
sudo wbinfo -a "DOMAIN\username%password"
Kind regards,
Ankor.




Thu, 25 Jul 2024 at 17:43, Andre Bolinhas
mailto:andre.bolin...@articatech.com>>:

__

Hi
We have 5 squid workers, we need to handle around 8k concurrent
users.

Based on this, what's the auth_param values that you recommend
for children, idle and startup?
How to know if the helper supports concurrent requests?


winbindd: Exceeding 500 client connections, no idle connection
found 

I will increase this value to check if it helps to settle the issue


On 25/07/2024 14:28, Alex Rousskov wrote:

On 2024-07-23 19:20, Andre Bolinhas wrote:

winbindd: Exceeding 500 client connections, no idle
connection found



auth_param ntlm children 500 ...


I know virtually nothing about WINDBIND and the authentication
helper you are using, but configuring Squid to have 500 helper
processes is usually a mistake, even with a single Squid
worker. YMMV, but I would try to use a lot fewer helpers
(e.g., 10) and increase that number only if such an increase
actually improves things.

If possible, use a helper that supports concurrent requests.

If your Squid is not competing for resources with other
applications on the server, then I also recommend keeping a
_constant_ number of helper processes (instead of asking Squid
to start many new helper processes at the worst possible time
-- when the load on Squid increases). To do that, make startup
and idle parameters the same as the maximum number of children.


HTH,

Alex.
P.S. The credit for highlighting the correlation between
winbindd errors and "auth_param ntlm children 500" goes to
Andrey K.

___
squid-users mailing list
squid-users@lists.squid-cache.org
<mailto:squid-users@lists.squid-cache.org>
https://lists.squid-cache.org/listinfo/squid-users
<https://lists.squid-cache.org/listinfo/squid-users>
 



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID - WINDBIND - very slow internet speed

2024-07-26 Thread Alex Rousskov

On 2024-07-26, Andre wrote:


How to know if the helper supports concurrent requests?


Good question! You need to consult helper documentation. If that does 
not exist or does not document concurrency, one can analyze helper 
source code and/or test concurrency support, but those two activities 
require specialized skills. Testing is especially difficult because a 
helper may not violently/visibly reject "concurrent" protocol messages: 
Many helpers were written under the false assumption that they will 
never receive invalid traffic.


Asking here (and then improving helper documentation!) may be the best 
option.



HTH,

Alex.


Thu, 25 Jul 2024 at 17:43, Andre Bolinhas 
mailto:andre.bolin...@articatech.com>>:


__

Hi
We have 5 squid workers, we need to handle around 8k concurrent users.

Based on this, what's the auth_param values that you recommend for
children, idle and startup?
How to know if the helper supports concurrent requests?

winbindd: Exceeding 500 client connections, no idle connection found 

I will increase this value to check if it helps to settle the issue


On 25/07/2024 14:28, Alex Rousskov wrote:

On 2024-07-23 19:20, Andre Bolinhas wrote:

winbindd: Exceeding 500 client connections, no idle connection found



auth_param ntlm children 500 ...


I know virtually nothing about WINDBIND and the authentication
helper you are using, but configuring Squid to have 500 helper
processes is usually a mistake, even with a single Squid worker.
YMMV, but I would try to use a lot fewer helpers (e.g., 10) and
increase that number only if such an increase actually improves
things.

If possible, use a helper that supports concurrent requests.

If your Squid is not competing for resources with other
applications on the server, then I also recommend keeping a
_constant_ number of helper processes (instead of asking Squid to
start many new helper processes at the worst possible time -- when
the load on Squid increases). To do that, make startup and idle
parameters the same as the maximum number of children.


HTH,

Alex.
P.S. The credit for highlighting the correlation between winbindd
errors and "auth_param ntlm children 500" goes to Andrey K.

___
squid-users mailing list
squid-users@lists.squid-cache.org
<mailto:squid-users@lists.squid-cache.org>
https://lists.squid-cache.org/listinfo/squid-users
<https://lists.squid-cache.org/listinfo/squid-users>
 



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] SQUID - WINDBIND - very slow internet speed

2024-07-25 Thread Alex Rousskov

On 2024-07-23 19:20, Andre Bolinhas wrote:

winbindd: Exceeding 500 client connections, no idle connection found



auth_param ntlm children 500 ...


I know virtually nothing about WINDBIND and the authentication helper 
you are using, but configuring Squid to have 500 helper processes is 
usually a mistake, even with a single Squid worker. YMMV, but I would 
try to use a lot fewer helpers (e.g., 10) and increase that number only 
if such an increase actually improves things.


If possible, use a helper that supports concurrent requests.

If your Squid is not competing for resources with other applications on 
the server, then I also recommend keeping a _constant_ number of helper 
processes (instead of asking Squid to start many new helper processes at 
the worst possible time -- when the load on Squid increases). To do 
that, make startup and idle parameters the same as the maximum number of 
children.

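For example, a constant-size pool could look like this (the numbers are 
placeholders, not a recommendation):

  auth_param ntlm children 10 startup=10 idle=10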


HTH,

Alex.
P.S. The credit for highlighting the correlation between winbindd errors 
and "auth_param ntlm children 500" goes to Andrey K.


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid on Freebsd

2024-07-23 Thread Alex Rousskov

On 2024-07-23 13:34, Anton Kornexl wrote:


Squid starts, shows a segmentation fault and continues working normally.
Squid forks a worker child and probably this child works, but the parent 
process dies with segmentation fault. There is no sign of this 
segmentation fault in the cache log.


You may catch parent Squid in the act of crashing if you start Squid 
from gdb, but telling gdb which process to watch may not be trivial, and 
I would leave that experiment for later and focus on an easier target first:




The segmentation fault occurs even when calling squid -k parse.

Should i try to produce a coredump of a squid -k parse run?


Yes, please: `gdb --args squid -k parse ...` or a similar command may 
work to tell gdb how to start Squid. After that, inside gdb, a "run" 
command should start Squid and produce a stack trace we need to triage 
this problem further.

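A sketch of such a session (pass whatever extra options you normally give 
Squid after -k parse):

  gdb --args squid -k parse
  (gdb) run
  ... wait for the crash ...
  (gdb) backtrace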

Alternatively, enable coredump file generation in your OS (if needed) 
and examine the dump file using gdb as detailed on the same page that 
Francesco has referenced earlier:

https://wiki.squid-cache.org/SquidFaq/BugReporting#crashes-and-core-dumps


HTH,

Alex.



On 23.07.2024 09:03, Francesco Chemolli  wrote:

Hi Anton,
   no, segmentation fault shouldn't happen at any time.
Could you try to follow the instructions at
https://wiki.squid-cache.org/SquidFaq/BugReporting#crashes-and-core-dumps
?
What are the last lines in the cache.log when the segmentation fault
happens?
Thanks

On Tue, Jul 23, 2024 at 3:12 AM Anton Kornexl
 wrote:
 >
 > Hello,
 >
 > I have tested the two installations further
 >
 > Opnsense 23.x with squid 6.6 on freebsd 13.2-Release-p9 produces the
 > same segmentation fault, but it does not popup as red window in the
 > dashboard.
 >
 > I have set "debug_options ALL,5" in squid.conf:
 >
 > I have found the following lines in cache.log (grep _suid cache.log)
 >
 > 2024/07/22 17:26:52.186 kid1| 21,3| tools.cc(625) enter_suid: enter_suid: PID 29145 taking root privileges
 > 2024/07/22 17:26:52.186 kid1| 21,3| tools.cc(629) enter_suid: enter_suid: setresuid failed: (1) Operation not permitted
 > 2024/07/22 17:26:52.186 kid1| 21,3| tools.cc(561) leave_suid: leave_suid: PID 29145 called
 > 2024/07/22 17:26:52.187 kid1| 21,3| tools.cc(625) enter_suid: enter_suid: PID 29145 taking root privileges
 > 2024/07/22 17:26:52.187 kid1| 21,3| tools.cc(629) enter_suid: enter_suid: setresuid failed: (1) Operation not permitted
 > 2024/07/22 17:26:52.187 kid1| 21,3| tools.cc(561) leave_suid: leave_suid: PID 29145 called
 > 2024/07/22 17:26:52.187 kid1| 21,3| tools.cc(625) enter_suid: enter_suid: PID 29145 taking root privileges
 > 2024/07/22 17:26:52.187 kid1| 21,3| tools.cc(629) enter_suid: enter_suid: setresuid failed: (1) Operation not permitted
 > 2024/07/22 17:26:52.187 kid1| 21,3| tools.cc(561) leave_suid: leave_suid: PID 29145 called
 > 2024/07/22 17:26:52.187 kid1| 21,3| tools.cc(561) leave_suid: leave_suid: PID 29648 called
 > 2024/07/22 17:26:52.187 kid1| 21,3| tools.cc(651) no_suid: no_suid: PID 29648 giving up root privileges forever
 >
 > maybe this is the cause of the "segmentation fault".
 >
 > The difference between the installations 23.x and 24.x is the alerting
 > of this segmentation fault in the dashboard of opnsense.
 >
 > But what is the cause of this "Operation not permitted"?
 >
 > yours
 >
 > Anton Kornexl
 >
 > Am 22.07.2024 um 11:03 schrieb Anton Kornexl:
 > >  Hello
 > >
 > > I try to use squid (6.10) with opnsense 24.x on freebsd
 > > 13.2-Release-p11.
 > >
 > > It produces a "segmentation fault" at start and restart but the
 > > process runs.
 > >
 > > The "segmentation fault" occurs even with squid -k parse.
 > >
 > > A "service squid reload" runs OK, but a "service squid restart"
 > > produces this Segmentation fault.
 > >
 > > The problem did not exist with opnsense 23.x and an older squid.
 > >
 > > How can I debug this error, probably in the parse part?
 > >
 > > yours
 > >
 > > Anton Kornexl



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid "make check" error

2024-07-19 Thread Alex Rousskov

On 2024-07-19 09:20, Rafał Stanilewicz wrote:

Thank you. It worked.


Glad to hear that!


I incorrectly assumed all dependencies would be captured by aptitude 
build-dep squid and ./configure.


Your assumption is not wrong for dependencies that are necessary to 
build and install Squid. Testing Squid, including testing with "make 
check", is (evidently) considered a separate activity that "squid" 
package build dependencies (installed by "apt build-dep") do not have to 
accommodate.


Should "squid" package build dependencies accommodate "make check"?

FWIW, packaging is not my area of expertise, and I could not find a 
definitive answer in Debian packaging policy documents[1] or on the web. 
I speculate that the default answer is "yes". If that answer is correct, 
then "squid" package configuration should be adjusted to include 
libcppunit-dev as a build dependency. Otherwise, one should not expect 
"make check" to succeed after "apt build-dep squid".


[1] https://www.debian.org/doc/debian-policy/


Cheers,

Alex.


Fri, 19 Jul 2024 at 13:59, Alex Rousskov <rouss...@measurement-factory.com> wrote:


On 2024-07-19 05:04, Rafał Stanilewicz wrote:

 > Next step was make check, and it failed with this error:
 > ../include/unitTestMain.h:16:10: fatal error:
 > cppunit/BriefTestProgressListener.h: No such file or directory


 > I found out that I need to do
 > apt install libcppunit-dev
 > So i did it.
 >
 > I re-ran "make check" , but then things went south completely:
 >
/root/squid-7.0.0-VCS/lib/../include/unitTestMain.h:61:(.text+0x42b):
 > undefined reference to
 >

`CppUnit::TestResult::TestResult(CppUnit::SynchronizedObject::SynchronizationObject*)'


If you have not run ./configure after installing libcppunit-dev,
then go
back to that step _before_ running "make" and "make check" again:

      ./configure ... && make && make check

BTW, if you are building Squid v7+ on a system with N CPU cores, run
"make -jN" instead of just "make" to speed things up.


HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
<mailto:squid-users@lists.squid-cache.org>
https://lists.squid-cache.org/listinfo/squid-users
<https://lists.squid-cache.org/listinfo/squid-users>



--
Before you print, think about the environment.


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid "make check" error

2024-07-19 Thread Alex Rousskov

On 2024-07-19 05:04, Rafał Stanilewicz wrote:


Next step was make check, and it failed with this error:
../include/unitTestMain.h:16:10: fatal error: 
cppunit/BriefTestProgressListener.h: No such file or directory




I found out that I need to do
apt install libcppunit-dev
So i did it.

I re-ran "make check" , but then things went south completely:
/root/squid-7.0.0-VCS/lib/../include/unitTestMain.h:61:(.text+0x42b): 
undefined reference to 
`CppUnit::TestResult::TestResult(CppUnit::SynchronizedObject::SynchronizationObject*)'



If you have not run ./configure after installing libcppunit-dev, then go 
back to that step _before_ running "make" and "make check" again:


./configure ... && make && make check

BTW, if you are building Squid v7+ on a system with N CPU cores, run 
"make -jN" instead of just "make" to speed things up.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squidclient -h 127.0.0.1 -p 3128 mgr:info shows access denined

2024-07-18 Thread Alex Rousskov

On 2024-07-18 00:55, Jonathan Lee wrote:

curl http://localhost:3128/squid-internal-mgr/info 


Where would I place the password?


See "man curl" or online manual pages for curl. They will point you to 
two relevant options: --user and --proxy-user. AFAICT, your particular 
cache manager requests are sent _to_ the proxy (as if it were an origin 
server) rather than _through_ the proxy. Thus, you should use --user.
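
For example, something like this (a sketch; the host name, port, and 
credentials are placeholders, and the caveats below still apply):

    curl --user manager:secret http://myproxy.example.com:3128/squid-internal-mgr/info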


As I keep saying on this thread, due to Squid complications related to 
Bug 5283, specifying seemingly correct client parameters may not be 
enough to convince Squid to accept the cache manager request. I 
recommend the following procedure:


1. List the corresponding http_port directive first, before any other 
http_port, https_port, and ftp_port directives. Do not use interception 
of any kind for this cache manager port.


2. Use curl with absolute squid-internal-mgr URLs with http scheme (like 
you show above). Do _not_ use "curl --proxy" or similar. Do not use 
https scheme.


3. In that absolute mgr URL, use the host name that matches 
visible_hostname in squid.conf. If you do not have visible_hostname in 
squid.conf, add it. This is not required, but, due to Squid bugs, it is 
often much easier to get this to work with visible_hostname than without it.


4. Make (passwordless) mgr:info use case working first, before trying to 
get password-protected pages working.


5. When you do specify a username and a password, remember that you are 
sending this request to an (equivalent of) a service running on an 
origin server, _not_ a proxy (hence --user rather than --proxy-user).



If you cannot figure it out despite carefully going through the above 
steps, share (privately if needed) a pointer to compressed ALL,9 
cache.log while reproducing the problem with throw-away credentials on 
an idle Squid with a single curl request. Mention which step you got 
stuck on.



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 6.6 cache_dir rock questions

2024-07-18 Thread Alex Rousskov

On 2024-07-18 00:33, Jonathan Lee wrote:


What would be the correct way to convert cache_dir disks to rock?


One cannot convert a cache_dir of another type to rock cache_dir. You 
will need to start from scratch, using a rock-dedicated cache_dir path 
(initialized by running "squid -z" after updating squid.conf). If you 
want to reuse the same path, then move or remove the old/diskd contents 
first.




cache_dir diskd /var/squid/cache 64000 256 256

Would it be as simple as..

cache_dir rock /var/squid/cache 64000 256 256?


See cache_dir directive in squid.conf.documented; diskd and rock 
cache_dirs have _different_ configuration syntax because rock cache_dirs 
do not have/need L1 and L2 parameters required for diskd cache_dirs:


cache_dir diskd Directory-Name Mbytes L1 L2 [options]
cache_dir rock Directory-Name Mbytes [options]
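
For example, reusing the numbers from your diskd line (a sketch; move 
or remove the old cache contents first and run "squid -z" afterwards):

    cache_dir rock /var/squid/cache 64000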

HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 6.6 shows configuration failure: requires TPROXY feature to be enabled by ./configure

2024-07-18 Thread Alex Rousskov

On 2024-07-18 00:25, Jonathan Lee wrote:
How do we enable tproxy in Squid 



2024/07/17 21:22:41| Processing: http_port 127.0.0.1:3128 tproxy ...
...
2024/07/17 21:22:41| ERROR: configuration failure: requires TPROXY feature to 
be enabled by ./configure


As strongly implied by the error message, TPROXY support has to be 
enabled by using Squid ./configure parameters (among other things). 
The ./configure --help output does not, unfortunately, contain the word 
"TPROXY", but searching it for "proxy" reveals the following relevant 
./configure options:




  --enable-ipfw-transparent
  Enable Transparent Proxy support for systems using
  FreeBSD IPFW-style firewalling.
  --enable-ipf-transparent
  Enable Transparent Proxy support using
  IPFilter-style firewalling
  --enable-pf-transparent Enable Transparent Proxy support for systems using
  PF network address redirection.
  --enable-linux-netfilter
  Enable Transparent Proxy support for Linux
  (Netfilter)


Pick the one matching your environment and check ./configure output for 
relevant lines, while keeping in mind that Squid still has a lot of text 
inconsistencies (e.g., "TPROXY" vs. "tproxy" vs. "Transparent Proxy" vs. 
"transparent proxying") that require relaxed searching rules. For example:



FreeBSD IPFW-based transparent proxying enabled: no
IPF-based transparent proxying requested: no
PF-based transparent proxying requested: no
IPF-based transparent proxying enabled: no
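
For example, on a system using PF, the rebuild would include something 
like this (a sketch; keep whatever other ./configure options you 
already use):

    ./configure --enable-pf-transparent ...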



Searching squid.conf.documented for similar terms may be useful as well.


HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Prefer or force ipv6 usage on dual stack interface

2024-07-17 Thread Alex Rousskov

On 2024-07-17 02:22, Rasmus Horndrup wrote:


why it went with the ipv4 conn over ipv6 in the second case.


Squid went with IPv4 because Squid established the corresponding 
TCP/IPv4 connection before it could establish the corresponding TCP/IPv6 
connection. Squid started with an IPv4 connection establishment attempt 
because DNS A query (QID 0xd80b) was answered before DNS  query (QID 
0xacdf) was.


The logs are not detailed enough for me to be sure, but I suspect that 
Squid did not even try to establish a TCP/IPv6 connection in this 
particular case (see happy_eyeballs_connect_timeout).




As I understood, it should prefer ipv6?


Squid does not prefer IPv6. Whether it _should_ is a complicated 
question I would rather not answer until it becomes really necessary.



N.B. Squid still sends DNS A (IPv4) queries just before sending DNS AAAA 
(IPv6) queries. The two queries are sent one after another, without any 
wait or artificial delays between them, but this hard-coded query 
sending order does give IPv4 an advantage over IPv6 (with all other 
factors being equal).



HTH,

Alex.



On 16 Jul 2024, at 20.46, Alex Rousskov  
wrote:

On 2024-07-16 09:31, Rasmus Horndrup wrote:

how can I basically force squid to use IPv6?


One can modify Squid source code to enforce that rule OR

* ban requests targeting raw IPv4 addresses _and_
* ensure your /etc/hosts is not in the way _and_
* use a DNS resolver that never sends IPv4 addresses to Squid.


HTH,

Alex.


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Prefer or force ipv6 usage on dual stack interface

2024-07-16 Thread Alex Rousskov

On 2024-07-16 09:31, Rasmus Horndrup wrote:

how can I basically force squid to use IPv6?


One can modify Squid source code to enforce that rule OR

 * ban requests targeting raw IPv4 addresses (see the sketch below) _and_
* ensure your /etc/hosts is not in the way _and_
* use a DNS resolver that never sends IPv4 addresses to Squid.
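
A minimal sketch of that first item (the regex is only an approximation 
of an IPv4 literal, not a precise matcher):

    acl raw_ipv4_target dstdom_regex ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$
    http_access deny raw_ipv4_target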


HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Rewriting HTTP to HTTPS for generic package proxy

2024-07-15 Thread Alex Rousskov

On 2024-07-15 17:19, Amos Jeffries wrote:

On 12/07/24 10:10, Alex Rousskov wrote:

On 2024-07-11 17:03, Amos Jeffries wrote:

On 11/07/24 00:49, Alex Rousskov wrote:

On 2024-07-09 18:25, Fiehe, Christoph wrote:

I hope that somebody has an idea, what I am doing wrong. 


AFAICT from the debugging log, it is your parent proxy that returns 
an ERR_SECURE_CONNECT_FAIL error page in response to a seemingly 
valid "HEAD https://..."; request. Can you ask their admin to 
investigate? You may also recommend that they upgrade from Squid v4 
that has many known security vulnerabiities.


If parent is uncooperative, you can try to reproduce the problem by 
temporary installing your own parent Squid instance and configuring 
your child Squid to use that instead.


HTH,

Alex.
P.S. Unlike Amos, I do not see serious conceptual problems with 
rewriting request target scheme (as a temporary compatibility 
measure). It may not always work, for various reasons, but it does 
not necessarily make things worse (and may make things better).




To which I refer you to:


None of the weaknesses below are applicable to request target scheme 
rewriting (assuming both proxies in question are 
implemented/configured correctly, of course). Specific 
non-applicability reasons are given below for each weakness URL:



https://cwe.mitre.org/data/definitions/311.html


The above "The product does not encrypt sensitive or critical 
information before storage or transmission" case is not applicable: 
All connections can be encrypted as needed after the scheme rewrite.




Reminder, OP requirement is to cache the responses and send un-encrypted.


The client does not support TLS so what happens between the client and 
Squid is irrelevant to this discussion -- a correctly 
configured/implemented Squid is not going to make things worse there. 
Squid is a part of the "product" in the above definition; client is not. 
The only relevant communication part is between Squid and origin server 
(possibly via a parent). All those network segments can be configured to 
be encrypted "before storage or transmission", avoiding the above weakness.




https://cwe.mitre.org/data/definitions/312.html


The above "The product stores sensitive information in cleartext 
within a resource that might be accessible to another control sphere." 
case is not applicable: Squid does not store information in such an 
accessible resource.



Reminder, Squid does cache both https:// and http:// traffic.


I do not see how that assertion is relevant. Everything Squid caches is 
_not_ stored in an "accessible resource" described in that weakness.




https://cwe.mitre.org/data/definitions/319.html


The above "The product transmits sensitive or security-critical data 
in cleartext in a communication channel that can be sniffed by 
unauthorized actors." case is not applicable: All connections can be 
encrypted as needed after the scheme rewrite.


The relevant sensitive data is in the Responses, which are absolutely 
transmitted un-encrypted per the OP requirements.


See 311.html case above: Responses are encrypted on the relevant network 
segments.


Alex.


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP_MISS_ABORTED/502

2024-07-15 Thread Alex Rousskov

On 2024-07-13 16:02, Ben Toms wrote:


with debug_options ALL,4 set.. the cache.log shows:


2024/07/13 18:55:03.595 kid1| 5,3| Read.cc(93) ReadNow: conn17 
local=squid.cache.ip:37046 remote=origin.server.ip:443 FIRSTUP_PARENT FD 
14 flags=1, size 65536, retval -28, errno 0


2024/07/13 18:55:03.595 kid1| 17,3| FwdState.cc(471) fail: 
ERR_READ_ERROR "Bad Gateway"



Still need to dig in more.. but the true error seems to be: 
ERR_READ_ERROR "Bad Gateway"


AFAICT, the underlying error happens a bit earlier (probably at TLS 
layer), just before the "retval -28" line above. Official high-level 
Squid code that produced the above log lines does not detail those TLS 
errors. I do not know what went wrong between Squid and Apache.


Going forward, I see four options:

A) Examine origin logs. It is likely that Apache logs what is going 
wrong with that TLS session from httpd point of view.


B) (Privately) examine Squid ALL,9 logs. Squid OpenSSL integration code 
might log something relevant to this context.


C) Examine Squid-origin packet capture. If you supply TLS master keys to 
Wireshark or a similar tool, you may be able to see a relevant TLS alert 
in that TLS stream.


D) Find somebody to patch Squid source code to add more debugging info 
if (B) did not produce enough new/usable hints.



HTH,

Alex.


*From: *Ben Toms 
*Date: *Saturday, 13 July 2024 at 13:04
*To: *Alex Rousskov 
*Subject: *Re: [squid-users] TCP_MISS_ABORTED/502

Well.. tried with cache-control headers added to the apache servers 
responses.. and still no luck (header response below).


Date: Sat, 13 Jul 2024 12:00:02 GMT

Server: Apache

Last-Modified: Thu, 20 Jun 2024 13:57:21 GMT

ETag: "152c-61b52b19bbd2a"

Accept-Ranges: bytes

Content-Length: 5420

Cache-Control: max-age=84600, public

Connection: close

I’ve tried a few other sites and the issue seems to be when attempting 
to cache an item which requires authentication. Which is bizarre, as the 
apache server is showing files are being downloaded.. yet squid-cache is 
still erroring with TCP_MISS_ABORTED/502.


Regards,

Ben.

*From: *Alex Rousskov 
*Date: *Friday, 12 July 2024 at 22:54
*To: *Ben Toms 
*Subject: *Re: [squid-users] TCP_MISS_ABORTED/502

On 2024-07-12 14:31, Ben Toms wrote:

So this squid cache is the parent (which might speak to me 
misconfiguring squid).


It’s setup as an accelerator for the public server.


Ah, I see. Sorry I forgot or misinterpreted that part. Too many balls in
the air.

Right now, it sounds like origin sent 200 OK, but Squid could not even
parse that response header, which is rather unusual/rare. However, that
theory is based on your interpretation of ALL,2 logs, so there may be
more to the story here.



When I curl the public server direct, there are no cache control headers.


Understood. I suspect Squid will not cache such authenticated responses
by default (even after Squid starts to receive them), but I have not
checked all the relevant details.


Cheers,

Alex.



On Fri, 12 Jul 2024 at 19:15, Alex Rousskov  wrote:

 On 2024-07-12 13:38, Ben Toms wrote:

  > Where would I find those headers?

 If you have access to the parent Squid proxy, they will be in its
 debugging cache.log. You can also get them by capturing network packets
 between the parent Squid and origin, but for HTTPS traffic that
 requires
 giving Wireshark the associated master keys, which may be possible with
 Squid v6, but not trivial (see tls_key_log in Squid; Apache may have
 better support for this). Finally, one can configure Apache to log them
 (sorry, I do not remember the details).

 Again, the child Squid does not see these headers yet (AFAICT), so they
 are not the reason things do not currently "work" in your tests.


  > Looking at the origin servers apache logs.. it’s sending a 200
 response.

 I am aware. We need the headers that go with that 200 OK response. For
 example, if it has Cache-Control:public, then Squid may be able to
 cache
 it despite authentication.


 HTH,

 Alex.


  > On Fri, 12 Jul 2024 at 18:26, Alex Rousskov wrote:
  >
  >     On 2024-07-12 13:03, Ben Toms wrote:
  >
  >      > So the issue seems to be caching content that requires
 authentication
  >
  >     The client is getting an error response from Squid. That error is
  >     probably not related to caching decisions. I do not recommend
 focusing
  >     on caching at this stage of triage. I recommend addressing that
  >     error first.
  >
  >
  >      > The question here is, can squid cache items that require
  >     authentication
  >      > to access?
  >
  >     Yes, in some cases. To know whether your case qualifies, I
 asked for
  >     the
  >     response headers. That led to 

Re: [squid-users] TCP_MISS_ABORTED/502

2024-07-12 Thread Alex Rousskov

On 2024-07-12 13:38, Ben Toms wrote:


Where would I find those headers?


If you have access to the parent Squid proxy, they will be in its 
debugging cache.log. You can also get them by capturing network packets 
between the parent Squid and origin, but for HTTPS traffic that requires 
giving Wireshark the associated master keys, which may be possible with 
Squid v6, but not trivial (see tls_key_log in Squid; Apache may have 
better support for this). Finally, one can configure Apache to log them 
(sorry, I do not remember the details).


Again, the child Squid does not see these headers yet (AFAICT), so they 
are not the reason things do not currently "work" in your tests.




Looking at the origin servers apache logs.. it’s sending a 200 response.


I am aware. We need the headers that go with that 200 OK response. For 
example, if it has Cache-Control:public, then Squid may be able to cache 
it despite authentication.



HTH,

Alex.



On Fri, 12 Jul 2024 at 18:26, Alex Rousskov wrote:

On 2024-07-12 13:03, Ben Toms wrote:

 > So the issue seems to be caching content that requires authentication

The client is getting an error response from Squid. That error is
probably not related to caching decisions. I do not recommend focusing
on caching at this stage of triage. I recommend addressing that
error first.


 > The question here is, can squid cache items that require
authentication
 > to access?

Yes, in some cases. To know whether your case qualifies, I asked for
the
response headers. That led to the discovery that there are none (from
child Squid point of view). If you really want to investigate the
caching angle in parallel with solving ERR_READ_ERROR/WITH_SERVER, then
try to obtain HTTP response headers that the origin server responds (to
the parent cache) with.


HTH,

Alex.


 > *From: *Ben Toms <b...@macmule.com>
 > *Date: *Friday, 12 July 2024 at 17:56
 > *To: *Alex Rousskov <rouss...@measurement-factory.com>,
 > squid-users@lists.squid-cache.org
 > *Subject: *Re: [squid-users] TCP_MISS_ABORTED/502
 >
 > So, with the below config:
 >
 > https_port 443 accel protocol=HTTPS
tls-cert=/usr/local/squid/client.pem
 > tls-key=/usr/local/squid/client.key
 >
 > cache_peer public.server.fqdn parent 443 0 no-query originserver
 > no-digest no-netdb-exchange tls login=PASSTHRU name=myAccel
 > forceddomain=public.server.fqdn
 >
 > acl our_sites dstdomain local.server.fqdn
 >
 > http_access allow our_sites
 >
 > cache_peer_access myAccel allow our_sites
 >
 > cache_peer_access myAccel deny all
 >
 > cache_dir ufs /usr/local/squid/var/cache 10 16 256
 >
 > cache_mem 500 MB
 >
 > maximum_object_size_in_memory 5 KB
 >
 > refresh_pattern .   0   20% 4320
 >
 > debug_options 11,2
 >
 > I can see the below in /var/log/squid/cache.log
 >
 > --
 >
 > 2024/07/12 16:49:57.056 kid1| 11,2| http.cc(1263) readReply: conn12
 > local=client.ip:56670 remote=public.ip.of.public.server:443
 > FIRSTUP_PARENT FD 14 flags=1: read failure: (0) No error.
 >
 > 2024/07/12 16:49:57.056 kid1| 11,2| Stream.cc(273)
sendStartOfMessage:
 > HTTP Client conn9 local=client.ip:443
remote=local.server.ip:59158 FD 13
 > flags=1
 >
 > 2024/07/12 16:49:57.056 kid1| 11,2| Stream.cc(274)
sendStartOfMessage:
 > HTTP Client REPLY:
 >
 > -
 >
 > HTTP/1.1 502 Bad Gateway
 >
 > Server: squid/6.6
 >
 > Mime-Version: 1.0
 >
 > Date: Fri, 12 Jul 2024 16:49:57 GMT
 >
 > Content-Type: text/html;charset=utf-8
 >
 > Content-Length: 3629
 >
 > X-Squid-Error: ERR_READ_ERROR 0
 >
 > Vary: Accept-Language
 >
 > Content-Language: en
 >
 > Cache-Status: local.server;detail=mismatch
 >
 > Via: 1.1 local.server (squid/6.6)
 >
 > Connection: keep-alive
 >
 > --
 >
 > The apache server still shows a 200 for the request:
 >
 > [12/Jul/2024:17:49:57 +0100] "GET /path/to/file HTTP/1.1" 200
10465 "-"
 > "curl/8.7.1"
 >
 > And this is when testing via:
     >
 > curl -D - https://local.server.fqdn/path/to/file -H "

Re: [squid-users] TCP_MISS_ABORTED/502

2024-07-12 Thread Alex Rousskov

On 2024-07-12 13:03, Ben Toms wrote:


So the issue seems to be caching content that requires authentication


The client is getting an error response from Squid. That error is 
probably not related to caching decisions. I do not recommend focusing 
on caching at this stage of triage. I recommend addressing that error first.



The question here is, can squid cache items that require authentication 
to access?


Yes, in some cases. To know whether your case qualifies, I asked for the 
response headers. That led to the discovery that there are none (from 
child Squid point of view). If you really want to investigate the 
caching angle in parallel with solving ERR_READ_ERROR/WITH_SERVER, then 
try to obtain HTTP response headers that the origin server responds (to 
the parent cache) with.



HTH,

Alex.



*From: *Ben Toms 
*Date: *Friday, 12 July 2024 at 17:56
*To: *Alex Rousskov , 
squid-users@lists.squid-cache.org 

*Subject: *Re: [squid-users] TCP_MISS_ABORTED/502

So, with the below config:

https_port 443 accel protocol=HTTPS tls-cert=/usr/local/squid/client.pem 
tls-key=/usr/local/squid/client.key


cache_peer public.server.fqdn parent 443 0 no-query originserver 
no-digest no-netdb-exchange tls login=PASSTHRU name=myAccel 
forceddomain=public.server.fqdn


acl our_sites dstdomain local.server.fqdn

http_access allow our_sites

cache_peer_access myAccel allow our_sites

cache_peer_access myAccel deny all

cache_dir ufs /usr/local/squid/var/cache 10 16 256

cache_mem 500 MB

maximum_object_size_in_memory 5 KB

refresh_pattern .   0   20% 4320

debug_options 11,2

I can see the below in /var/log/squid/cache.log

--

2024/07/12 16:49:57.056 kid1| 11,2| http.cc(1263) readReply: conn12 
local=client.ip:56670 remote=public.ip.of.public.server:443 
FIRSTUP_PARENT FD 14 flags=1: read failure: (0) No error.


2024/07/12 16:49:57.056 kid1| 11,2| Stream.cc(273) sendStartOfMessage: 
HTTP Client conn9 local=client.ip:443 remote=local.server.ip:59158 FD 13 
flags=1


2024/07/12 16:49:57.056 kid1| 11,2| Stream.cc(274) sendStartOfMessage: 
HTTP Client REPLY:


-

HTTP/1.1 502 Bad Gateway

Server: squid/6.6

Mime-Version: 1.0

Date: Fri, 12 Jul 2024 16:49:57 GMT

Content-Type: text/html;charset=utf-8

Content-Length: 3629

X-Squid-Error: ERR_READ_ERROR 0

Vary: Accept-Language

Content-Language: en

Cache-Status: local.server;detail=mismatch

Via: 1.1 local.server (squid/6.6)

Connection: keep-alive

--

The apache server still shows a 200 for the request:

[12/Jul/2024:17:49:57 +0100] "GET /path/to/file HTTP/1.1" 200 10465 "-" 
"curl/8.7.1"


And this is when testing via:

curl -D - https://local.server.fqdn/path/to/file -H "Authorization: Basic base64auth" -o /dev/null


Regards,

Ben.

*From: *Alex Rousskov 
*Date: *Friday, 12 July 2024 at 17:36
*To: *Ben Toms , squid-users@lists.squid-cache.org 


*Subject: *Re: [squid-users] TCP_MISS_ABORTED/502

On 2024-07-12 12:14, Ben Toms wrote:


Which log should those be found?


cache.log (if they are present)



Can’t see “HTTP Server RESPONSE” in the access.log or cache.log.


Sigh. This is one of the reasons I avoid asking folks to study logs
themselves, even ALL,2 logs...

If that line is not in cache.log, then child Squid probably did not
receive a response from parent Squid, or could not parse that response.
A full debugging log should give us more information.

Alex.


*From: *squid-users  on 
behalf of Alex Rousskov 

*Date: *Friday, 12 July 2024 at 17:11
*To: *squid-users@lists.squid-cache.org 
*Subject: *Re: [squid-users] TCP_MISS_ABORTED/502

On 2024-07-12 11:38, Ben Toms wrote:

Think I made the changes Alex requested:

12/Jul/2024:15:36:31 +.640 local.server.ip TCP_MISS_ABORTED/502 3974 
GET https://local.server.fqdn/path/to/file - 
FIRSTUP_PARENT/public.ip.of.public.server text/html 
ERR_READ_ERROR/WITH_SERVER


Thank you for using Squid v6 for this test.

Unfortunately, due to Squid logging bugs, ERR_READ_ERROR/WITH_SERVER
does not always mean what it says. For example, parent Squid could have
closed the child-parent connection prematurely, but there could be other
reasons. A full debugging log should give us more information.


2024/07/12 14:57:08.678 kid1| 11,2| Stream.cc(274) sendStartOfMessage: 
HTTP Client REPLY:


This is a child proxy response to the client. We need parent response to
the child proxy. Look for "HTTP Server RESPONSE" lines instead.


HTH,

Alex.




-

HTTP/1.1 502 Bad Gateway

Server: squid/6.6

Mime-Version: 1.0

Date: Fri, 12 Jul 2024 14:57:08 GMT

Content-Type: text/html;charset=utf-8

Content-Length: 3629

X-Squid-Error: ERR_READ_ERROR 0

Vary: Accept-Language

Content-Language: en

Cache-Status: squid.host;detail=mismatch

Vi

Re: [squid-users] TCP_MISS_ABORTED/502

2024-07-12 Thread Alex Rousskov

On 2024-07-12 12:14, Ben Toms wrote:


Which log should those be found?


cache.log (if they are present)



Can’t see “HTTP Server RESPONSE” in the access.log or cache.log.


Sigh. This is one of the reasons I avoid asking folks to study logs 
themselves, even ALL,2 logs...


If that line is not in cache.log, then child Squid probably did not 
receive a response from parent Squid, or could not parse that response. 
A full debugging log should give us more information.


Alex.


*From: *squid-users  on 
behalf of Alex Rousskov 

*Date: *Friday, 12 July 2024 at 17:11
*To: *squid-users@lists.squid-cache.org 
*Subject: *Re: [squid-users] TCP_MISS_ABORTED/502

On 2024-07-12 11:38, Ben Toms wrote:

Think I made the changes Alex requested:

12/Jul/2024:15:36:31 +.640 local.server.ip TCP_MISS_ABORTED/502 3974 
GET https://local.server.fqdn/path/to/file 

<https://local.server.fqdn/path/to/file> -
FIRSTUP_PARENT/public.ip.of.public.server text/html 
ERR_READ_ERROR/WITH_SERVER


Thank you for using Squid v6 for this test.

Unfortunately, due to Squid logging bugs, ERR_READ_ERROR/WITH_SERVER
does not always mean what it says. For example, parent Squid could have
closed the child-parent connection prematurely, but there could be other
reasons. A full debugging log should give us more information.


2024/07/12 14:57:08.678 kid1| 11,2| Stream.cc(274) sendStartOfMessage: 
HTTP Client REPLY:


This is a child proxy response to the client. We need parent response to
the child proxy. Look for "HTTP Server RESPONSE" lines instead.


HTH,

Alex.




-

HTTP/1.1 502 Bad Gateway

Server: squid/6.6

Mime-Version: 1.0

Date: Fri, 12 Jul 2024 14:57:08 GMT

Content-Type: text/html;charset=utf-8

Content-Length: 3629

X-Squid-Error: ERR_READ_ERROR 0

Vary: Accept-Language

Content-Language: en

Cache-Status: squid.host;detail=mismatch

Via: 1.1 squid.host (squid/6.6)

Connection: keep-alive

--

Regards,

Ben.

*From: *squid-users  on 
behalf of Amos Jeffries 

*Date: *Friday, 12 July 2024 at 15:22
*To: *squid-users@lists.squid-cache.org 
*Subject: *Re: [squid-users] TCP_MISS_ABORTED/502


On 13/07/24 01:52, Alex Rousskov wrote:

On 2024-07-12 08:06, Ben Toms wrote:
Seems that my issue is similar to - 
https://serverfault.com/questions/1104330/squid-cache-items-behind-basic-authentication


You are facing up to two problems:

1. Some authenticated responses are not cachable by Squid. Please share 
HTTP headers of the response in question.




FYI, those can be obtained by configuring squid.conf with

     debug_options 11,2


Cheers
Amos


2. TCP_MISS_ABORTED/502 errors may delete a being-cached response. These 
can be bogus errors (essentially Squid logging bugs) or real ones (e.g., 
due to communication bugs, misconfiguration, or compatibility problems). 
I recommend adding %err_code/%err_detail to your logformat and sharing 
the corresponding access.log lines (obfuscated as needed).


Sharing (privately if needed) a pointer to compressed ALL,9 cache.log 
while reproducing the issue using a single transaction may help us 
resolve all the unknowns:


https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction


HTH,

Alex.





___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users




___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] cachemgr.cgi isn't mgr:info ?

2024-07-12 Thread Alex Rousskov

On 2024-07-12 11:18, Brian Cook wrote:

Picking up squid again and trying to look at what's going on inside..

Squid on OpenWRT.. wanted to look at mgr:info for file desc, etc..

trying to access the cachemgr.cgi.. as this looks like the new squidclient


FWIW, I do not recommend using cachemgr.cgi and squidclient. For various 
reasons, both were recently removed from Squid master/v7. Squidclient 
can be replaced with curl or wget. The best cachemgr.cgi replacement 
depends on many factors; a static HTML file may be the best solution in 
some cases!


Without squidclient, you will need to use absolute URLs like this one:

http://correct-host-name-or-ip:port/squid-internal-mgr/info
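
For example (a sketch; substitute your proxy's host name or IP address 
and cache manager port):

    curl http://192.0.2.1:3128/squid-internal-mgr/info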

See recent discussions on this mailing list or Bug 5283 for discussions 
about what correct-host-name-or-ip:port to use for these URLs. And if 
you got it working with squidclient, you can see what your squidclient 
was sending, of course.

https://bugs.squid-cache.org/show_bug.cgi?id=5283



Wasn't working etc..



Q: So I added 3128 to the Safe_ports.. and then it works..


That solution is probably wrong, even if it works. If you share your 
http_access rules, others may be able to suggest a better solution.



Q: no password set for cachemgr_passwd.. cachemgr.cgi just open to the 
world? unsecured?


I cannot help you with cachemgr.cgi (see above). Openness of cache 
manager interface itself depends on your http_access rules (and 
cachemgr_passwd).




and is Process Filedescriptor Allocation the closest thing?


If you want to know more about a subset of file and socket descriptors 
used by Squid, with per-descriptor information, then, yes, 
mgr:filedescriptors is the right cache manager report.



I (think) I remember something like max, in use, and something else.. 
being in mgr:info


Yes, it is still there:


File descriptor usage for squid:
Maximum number of file descriptors:   16000
Largest file desc currently in use: 41
Number of file desc currently in use:   24
Files queued for open:   0
Available number of file descriptors: 15976
Reserved number of file descriptors:   100
Store Disk files open:   0



HTH,

Alex.




fwiw openwrt starts squid with like 4096 max files..

needed something like this:

..
         procd_set_param file $CONFIGFILE
         procd_set_param limits nofile="262140 262140"
         procd_set_param respawn
..

to set the hard and soft limits..

any better practice than adding 3128 to the 'Safe_ports'? (can't keep 
that in place..)


and setting a cachemgr_passwd would be the only thing to secure the cgi?

(am I missing something else?)

Thank you in advance.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP_MISS_ABORTED/502

2024-07-12 Thread Alex Rousskov

On 2024-07-12 11:38, Ben Toms wrote:

Think I made the changes Alex requested:

12/Jul/2024:15:36:31 +.640 local.server.ip TCP_MISS_ABORTED/502 3974 
GET https://local.server.fqdn/path/to/file - 
FIRSTUP_PARENT/public.ip.of.public.server text/html 
ERR_READ_ERROR/WITH_SERVER


Thank you for using Squid v6 for this test.

Unfortunately, due to Squid logging bugs, ERR_READ_ERROR/WITH_SERVER 
does not always mean what it says. For example, parent Squid could have 
closed the child-parent connection prematurely, but there could be other 
reasons. A full debugging log should give us more information.



2024/07/12 14:57:08.678 kid1| 11,2| Stream.cc(274) sendStartOfMessage: 
HTTP Client REPLY:


This is a child proxy response to the client. We need parent response to 
the child proxy. Look for "HTTP Server RESPONSE" lines instead.



HTH,

Alex.




-

HTTP/1.1 502 Bad Gateway

Server: squid/6.6

Mime-Version: 1.0

Date: Fri, 12 Jul 2024 14:57:08 GMT

Content-Type: text/html;charset=utf-8

Content-Length: 3629

X-Squid-Error: ERR_READ_ERROR 0

Vary: Accept-Language

Content-Language: en

Cache-Status: squid.host;detail=mismatch

Via: 1.1 squid.host (squid/6.6)

Connection: keep-alive

--

Regards,

Ben.

*From: *squid-users  on 
behalf of Amos Jeffries 

*Date: *Friday, 12 July 2024 at 15:22
*To: *squid-users@lists.squid-cache.org 
*Subject: *Re: [squid-users] TCP_MISS_ABORTED/502


On 13/07/24 01:52, Alex Rousskov wrote:

On 2024-07-12 08:06, Ben Toms wrote:
Seems that my issue is similar to - 
https://serverfault.com/questions/1104330/squid-cache-items-behind-basic-authentication


You are facing up to two problems:

1. Some authenticated responses are not cachable by Squid. Please share 
HTTP headers of the response in question.




FYI, those can be obtained by configuring squid.conf with

    debug_options 11,2


Cheers
Amos


2. TCP_MISS_ABORTED/502 errors may delete a being-cached response. These 
can be bogus errors (essentially Squid logging bugs) or real ones (e.g., 
due to communication bugs, misconfiguration, or compatibility problems). 
I recommend adding %err_code/%err_detail to your logformat and sharing 
the corresponding access.log lines (obfuscated as needed).


Sharing (privately if needed) a pointer to compressed ALL,9 cache.log 
while reproducing the issue using a single transaction may help us 
resolve all the unknowns:


https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction


HTH,

Alex.





___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP_MISS_ABORTED/502

2024-07-12 Thread Alex Rousskov

On 2024-07-12 08:06, Ben Toms wrote:
Seems that my issue is similar to - 
https://serverfault.com/questions/1104330/squid-cache-items-behind-basic-authentication 


You are facing up to two problems:

1. Some authenticated responses are not cachable by Squid. Please share 
HTTP headers of the response in question.


2. TCP_MISS_ABORTED/502 errors may delete a being-cached response. These 
can be bogus errors (essentially Squid logging bugs) or real ones (e.g., 
due to communication bugs, misconfiguration, or compatibility problems). 
I recommend adding %err_code/%err_detail to your logformat and sharing 
the corresponding access.log lines (obfuscated as needed).
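
For example, one could extend the default "squid" logformat along these 
lines (a sketch; adjust the format name, fields, and access_log path to 
your configuration):

    logformat squid_detailed %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt %err_code/%err_detail
    access_log /var/log/squid/access.log squid_detailed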


Sharing (privately if needed) a pointer to compressed ALL,9 cache.log 
while reproducing the issue using a single transaction may help us 
resolve all the unknowns:


https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction


HTH,

Alex.



*From: *Ben Toms 
*Date: *Friday, 12 July 2024 at 12:07
*To: *squid-users@lists.squid-cache.org 
*Subject: *Re: TCP_MISS_ABORTED/502

To test, I changed the parent url to my blog.. and was able to download 
an item there via squid-cache.. so the issue seems to be when 
downloading from a parent which requires authentication.


Regards,

Ben.

*From: *Ben Toms 
*Date: *Friday, 12 July 2024 at 10:29
*To: *squid-users@lists.squid-cache.org 
*Subject: *TCP_MISS_ABORTED/502

Hi Amos,

I made the changes suggested, but still getting TCP_MISS_ABORTED/502.

The test I’m performing is via a simple curl:

curl https://local.server.fqdn/some/file/path 
 -H "Authorization: Basic 
base64_auth" -o ~/Downloads/test


The Apache logs for the parent (public.server.fqdn), show:

[12/Jul/2024:10:16:09 +0100] "GET /some/file/path HTTP/1.1" 200 10465 
"-" "curl/8.7.1"


So, Apache on the parent is responding with a 200.. and if I mess around 
with the curl command's base64_auth I get 401s as expected in the 
parent's Apache logs.


However, squids access.log still shows:

1720775769.417 49 192.168.0.156 TCP_MISS_ABORTED/502 3974 GET 
https://local.server.fqdn/some/file/path 
 - 
FIRSTUP_PARENT/public.ip.of.public.server text/html


Squid.conf is now:

https_port 443 accel protocol=HTTPS tls-cert=/usr/local/squid/client.pem 
tls-key=/usr/local/squid/client.key


cache_peer public.server.fqdn parent 443 0 no-query originserver 
no-digest no-netdb-exchange tls login=PASSTHRU name=myAccel 
forceddomain=uk-dist-a.datajar.mobi


acl our_sites dstdomain local.server.fqdn

http_access allow our_sites

cache_peer_access myAccel allow our_sites

cache_peer_access myAccel deny all

refresh_pattern -i public.server.fqdn/* 3600    80% 14400

cache_dir ufs /usr/local/squid/var/cache 10 16 256

The file I’m attempting to cache with the above curl command is 6.5kb 
only.. have tried others to no avail.


It seems like squid doesn’t want to cache, and it’s not advising the 
client to wait as it caches.



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Socket handle leak?

2024-07-12 Thread Alex Rousskov

On 2024-07-12 06:58, paolo.pr...@gmail.com wrote:

We are having some stability issues with our squid farms after a recent 
upgrade from Centos/Squid 3.5.x to Ubuntu/Squid 5.7/6.9.


In short, after running for a certain period the servers run out of file 
descriptors. We see a slowly growing number of TCP or TCPv6 socket 
handles


Assuming that your Squids are not under an ever-increasing load, what 
you describe sounds like a Squid bug. I do not see any obviously related 
fixes in the latest official code, so it is possible that this bug is 
unknown to Squid developers and is still present in v6+. I recommend the 
following steps:


1. Forget about Squid v5. Aim to upgrade to Squid v6.

2. Collect a few mgr:filedescriptors cache manager snapshots from a 
problematic Squid in hope to discover a common theme among leaked 
descriptors metadata (see the sketch after this list). Share your 
findings (and/or a pointer to compressed snapshots).


3. Check cache.log for frequent (or at least persistent) ERROR and 
WARNING messages and report your findings.


4. Does your Squid grow its resident memory usage as well? Descriptor 
leaks are often (but not always!) accompanied by memory leaks. The 
latter are sometimes easier to pinpoint. If (and only if) your Squid is 
leaking a lot of memory, then collect a few dozen mgr:mem snapshots 
(e.g., one every busy hour) and share a pointer to a compressed snapshot 
 archive for analysis by Squid developers. There is at least one v6 
memory leak fixed in master/v7 (Bug 5322), but, hopefully, you are not 
suffering from that memory leak (otherwise the noise from that leak may 
obscure what we are looking for).
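
For step 2, snapshot collection can be as simple as this (a sketch; 
the URL host, port, and schedule are assumptions):

    # run periodically (e.g., from cron) while descriptor usage keeps growing
    curl -s http://127.0.0.1:3128/squid-internal-mgr/filedescriptors > fd-snapshot-$(date +%Y%m%d-%H%M).txt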



You may continue this triage on this mailing list or file a bug report 
at https://bugs.squid-cache.org/enter_bug.cgi?product=Squid



Thank you,

Alex.



It is somewhat similar to what reported under 
https://access.redhat.com/solutions/3362211 
 . They state that


  * If an application fails to close() its socket descriptors and
continues to allocate new sockets then it can use up all the system
memory on TCP(v6) slab objects.
  * Note some of these sockets will not show up in
/proc/net/sockstat(6). Sockets that still have a file descriptor
but are in the TCP_CLOSE state will consume a slab object. But
will not be accounted for in /proc/net/sockstat(6) or "ss" or
"netstat".
  * It can be determined whether this is an application sockets leak, by
stopping the application processes that are consuming sockets. If
the slab objects in /proc/slabinfo are freed then the application
is responsible. As that means that destructor routines have found
open file descriptors to sockets in the process.

"This is most likely to be a case of the application not handling
error conditions correctly and not calling close() to free the FD and
socket."


For example, on a server with squid 5.7, unmodified package:

list of open files;

lsof |wc -l
56963


of which 35K in TCPv6:

lsof |grep proxy |grep TCPv6 |wc -l

 35301

under /proc I see less objects
     cat  /proc/net/tcp6 |wc -l
 3095

but the number of objects in the slabs is high
 cat /proc/slabinfo |grep TCPv6
 MPTCPv6                0      0   2048   16    8 : tunables    0
0    0 : slabdata      0      0      0
 tw_sock_TCPv6       1155   1155    248   33    2 : tunables    0
0    0 : slabdata     35     35      0
 request_sock_TCPv6      0      0    304   26    2 : tunables    0  
   0    0 : slabdata      0      0      0
 TCPv6 *38519  38519*   2432   13    8 : tunables    0    0    0 : 
slabdata   2963   2963      0


I have 35K of lines like this
 lsof |grep proxy |grep TCPv6 |more
 squid        1049              proxy   13u     sock
0,8        0t0    5428173 protocol: TCPv6
 squid        1049              proxy   14u     sock
0,8        0t0   27941608 protocol: TCPv6
 squid        1049              proxy   24u     sock
0,8        0t0   45124047 protocol: TCPv6
 squid        1049              proxy   25u     sock
0,8        0t0   50689821 protocol: TCPv6

...


We thought maybe this is a weird IPv6 thing, as we only route IPv4, so 
we compiled a more recent version of squid with no v6 support. The thing 
just moved to TCP4..


lsof |wc -l
120313

cat /proc/slabinfo |grep TCP
MPTCPv6                0      0   2048   16    8 : tunables    0    0
0 : slabdata      0      0      0
tw_sock_TCPv6          0      0    248   33    2 : tunables    0    0
0 : slabdata      0      0      0
request_sock_TCPv6      0      0    304   26    2 : tunables    0    0  
   0 : slabdata      0      0      0
TCPv6                208    208   2432   13    8 : tunables    0    0
0 : slabdata     16     16      0
MPTCP                  0      0   1856   17    8 : tunables    0    0
0 : slabdata      0      0      0

Re: [squid-users] Rewriting HTTP to HTTPS for generic package proxy

2024-07-11 Thread Alex Rousskov

On 2024-07-11 17:03, Amos Jeffries wrote:

On 11/07/24 00:49, Alex Rousskov wrote:

On 2024-07-09 18:25, Fiehe, Christoph wrote:

I hope that somebody has an idea, what I am doing wrong. 


AFAICT from the debugging log, it is your parent proxy that returns an 
ERR_SECURE_CONNECT_FAIL error page in response to a seemingly valid 
"HEAD https://..."; request. Can you ask their admin to investigate? 
You may also recommend that they upgrade from Squid v4 that has many 
known security vulnerabiities.


If parent is uncooperative, you can try to reproduce the problem by 
temporary installing your own parent Squid instance and configuring 
your child Squid to use that instead.


HTH,

Alex.
P.S. Unlike Amos, I do not see serious conceptual problems with 
rewriting request target scheme (as a temporary compatibility 
measure). It may not always work, for various reasons, but it does not 
necessarily make things worse (and may make things better).




To which I refer you to:


None of the weaknesses below are applicable to request target scheme 
rewriting (assuming both proxies in question are implemented/configured 
correctly, of course). Specific non-applicability reasons are given 
below for each weakness URL:



https://cwe.mitre.org/data/definitions/311.html


The above "The product does not encrypt sensitive or critical 
information before storage or transmission" case is not applicable: All 
connections can be encrypted as needed after the scheme rewrite.




https://cwe.mitre.org/data/definitions/312.html


The above "The product stores sensitive information in cleartext within 
a resource that might be accessible to another control sphere." case is 
not applicable: Squid does not store information in such an accessible 
resource.




https://cwe.mitre.org/data/definitions/319.html


The above "The product transmits sensitive or security-critical data in 
cleartext in a communication channel that can be sniffed by unauthorized 
actors." case is not applicable: All connections can be encrypted as 
needed after the scheme rewrite.


Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Rewriting HTTP to HTTPS for generic package proxy

2024-07-11 Thread Alex Rousskov

On 2024-07-11 13:37, Fiehe, Christoph wrote:

My proxy (the child proxy) already uses the OpenSSL library:


Good.



The parent proxy was compiled ... '--with-gnutls'


The GnuTLS exception is thrown at my parent proxy. 


Thank you for reminding me of that fact; I did not notice or have 
forgotten about it. I assume you cannot rebuild your parent proxy to use 
OpenSSL.


I see the following choice:

A) Continue with the current no-CONNECT setup: Find somebody who can 
help you get Squid+GnuTLS code path working on the parent proxy. It 
might be impossible to get this working without making build or 
configuration changes at the parent proxy. Moreover, please note that 
your current no-CONNECT setup lacks encryption on the child-parent 
segment. If that was not intentional, then fixing that will increase 
TLS-related work for the parent, potentially triggering more problems there.


B) Switch to a CONNECT-based setup: Find somebody who can enhance Squid 
code to establish a CONNECT tunnel through parent proxy when dealing 
with a GET-https request. Today, Squid will not do that AFAICT[^1].


https://wiki.squid-cache.org/SquidFaq/AboutSquid#how-to-add-a-new-squid-feature-enhance-of-fix-something


[^1]: AFAICT: Today, there are two primary Squid code conditions for 
establishing a CONNECT tunnel on a caching code path: Request method is 
CONNECT or SslBump is in use. Neither matches your GET-https request 
scenario. Squid current behavior is not "wrong" (as detailed in my 
earlier email about CONNECT and no-CONNECT scenarios), so, to make these 
changes official, the author will need to add a configuration option to 
let admins enable this behavior. The corresponding code changes feel 
straightforward to me, but I have not studied any details.



HTH,

Alex.




Unfortunately, I cannot make any changes here. So yes, I trust my parent proxy, 
but not using a tunnel between child and parent does not seem to work and 
results in the TLS exception on the parent proxy.

I have not found a way to tell my child proxy to always set up a tunnel through 
the parent proxy when the target server talks HTTPS. Do you know how to 
achieve that? It would be a promising approach.

Thank you very much for your help and your patience, Alex.

Regards,
Christoph



-Original Message-
From: squid-users  On Behalf Of 
Alex Rousskov
Sent: Thursday, 11 July 2024 18:15
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Rewriting HTTP to HTTPS for generic package proxy

On 2024-07-10 16:57, Fiehe, Christoph wrote:


I am just trying to find something that helps to narrow down the
problem. What I want to achieve is that a client can use HTTP in the
LAN, so that Squid can cache distribution packages without making use
of SSL intercepting when repos are only accessible via HTTPS.


OK.



In that case the secure connection must start at the proxy and end on
the target server with or without any upstream proxies in between.


It depends on whether you trust the parent proxy:

If you trust the parent proxy, then you can use two secure connections:

1.1. child - parent (TLS; no CONNECT)
1.2. parent - origin (TLS; no CONNECT)

If you do not trust the parent proxy, then, yes, you will need a tunnel:

2.1. child - parent (CONNECT)
2.2. child - origin (TLS inside the CONNECT tunnel)

N.B. CONNECT request in 2.1 may be plain text (common) or encrypted
(rare); I am ignoring the difference between those two subcases for now.



We have the following setup:

client -> downstream proxy -> upstream proxy -> https://download.docker.com

Now let us assume the client wants to retrieve the following resource
http://download.docker.com/linux/ubuntu/dists/jammy/InRelease from the upstream
proxy.

The client initiates a HTTP GET request and sends it to the downstream proxy.
Now, the URL gets rewritten.

OK.



It indicates to use a HTTPS connection instead in order to talk to the target
server, in our case the result is
https://download.docker.com/linux/ubuntu/dists/jammy/InRelease.

Yes, but HTTPS scheme does not imply that the child Squid has to use
CONNECT. There are two possible scenarios detailed above. I do not know
which of them applies to your use case.



Now comes the critical point: From my understanding – it may be
wrong, of course - the downstream server now has to send a CONNECT
request to the upstream server


Yes, provided the child (downstream) proxy does not trust that parent
(upstream) proxy. That is scenario 2. Scenario 1 is different.



to advise him to establish a secure connection to the target server.


No, the CONNECT tunnel itself is just a pair of TCP connections. The
parent proxy "secures" nothing but basic TCP connectivity. It is the
child proxy that negotiates TLS (over/inside that tunnel) with the
origin server.



After creation, the downstream proxy can retrieve the resource and
send it back to the client via plain HTTP.


Yes.




I suppose, that

Re: [squid-users] Rewriting HTTP to HTTPS for generic package proxy

2024-07-11 Thread Alex Rousskov

On 2024-07-10 16:57, Fiehe, Christoph wrote:


I am just trying to find something that helps to narrow down the
problem. What I want to achieve is that a client can use HTTP in the
LAN, so that Squid can cache distribution packages without making use
of SSL intercepting when repos are only accessible via HTTPS.


OK.



In that case the secure connection must start at the proxy and end on
the target server with or without any upstream proxies in between.


It depends on whether you trust the parent proxy:

If you trust the parent proxy, then you can use two secure connections:

1.1. child - parent (TLS; no CONNECT)
1.2. parent - origin (TLS; no CONNECT)

If you do not trust the parent proxy, then, yes, you will need a tunnel:

2.1. child - parent (CONNECT)
2.2. child - origin (TLS inside the CONNECT tunnel)

N.B. CONNECT request in 2.1 may be plain text (common) or encrypted 
(rare); I am ignoring the difference between those two subcases for now.
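
In scenario 1, the child-parent leg would normally be a TLS cache_peer, 
roughly like this (a sketch; the host name and port are made up):

    cache_peer parent.example.com parent 3129 0 no-query tls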




We have the following setup:

client -> downstream proxy -> upstream proxy -> https://download.docker.com

Now let us assume the client wants to retrieve the following resource 
http://download.docker.com/linux/ubuntu/dists/jammy/InRelease from the upstream 
proxy.

The client initiates a HTTP GET request and sends it to the downstream proxy. 
Now, the URL gets rewritten.


OK.



It indicates to use a HTTPS connection instead in order to talk to the target 
server, in our case the result is 
https://download.docker.com/linux/ubuntu/dists/jammy/InRelease.


Yes, but HTTPS scheme does not imply that the child Squid has to use 
CONNECT. There are two possible scenarios detailed above. I do not know 
which of them applies to your use case.



Now comes the critical point: From my understanding – it may be 
wrong, of course - the downstream server now has to send a CONNECT 
request to the upstream server


Yes, provided the child (downstream) proxy does not trust that parent 
(upstream) proxy. That is scenario 2. Scenario 1 is different.




to advise him to establish a secure connection to the target server.


No, the CONNECT tunnel itself is just a pair of TCP connections. The 
parent proxy "secures" nothing but basic TCP connectivity. It is the 
child proxy that negotiates TLS (over/inside that tunnel) with the 
origin server.
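
Sketched as a message flow (the CONNECT line mirrors the working-case capture 
quoted later in this thread):

child -> parent:   CONNECT download.docker.com:443 HTTP/1.1
parent -> child:   HTTP/1.1 200 Connection established
child <-> origin:  TLS handshake and encrypted HTTP, relayed byte-for-byte by the parent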




After creation, the downstream proxy can retrieve the resource and
send it back to the client via plain HTTP.


Yes.




I suppose, that the GnuTLS error occurs because of a missing SSL handshake
between downstream proxy and download.docker.com.


At this time, I can only say that a TLS negotiation error occurs (while 
child Squid is using the encryption library it probably should not be 
using for this). It is not yet clear to me whether child Squid is 
negotiating with the wrong hop or something goes wrong during 
negotiation with the right hop.


As the next steps, I recommend switching to OpenSSL and, if that alone 
does not help, sharing new errors and determining whether you want to 
use scenario 1 (no CONNECT), scenario 2 (CONNECT), or either (whichever 
works): Do you trust the parent Squid?
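
If you build the child Squid from source, switching TLS libraries is essentially 
a configure-time choice; a rough sketch (distribution packages may instead ship a 
separate OpenSSL-enabled build):

./configure --with-openssl   # instead of --with-gnutls
make && make install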



HTH,

Alex.



-Original Message-
From: Alex Rousskov 
Sent: Wednesday, July 10, 2024 22:15
To: squid-users@lists.squid-cache.org
Cc: Fiehe, Christoph 
Subject: RE: [squid-users] Rewriting HTTP to HTTPS for generic package proxy

On 2024-07-10 15:31, Fiehe, Christoph wrote:

The problem is that the proxy just forwards the client GET request to the 
upstream proxy


Why does sending a GET request to the upstream proxy represent a problem
in your use case? I cannot find anything in your prior messages on this
thread that would preclude sending a GET request to the upstream proxy.



but in that case a CONNECT is required.


Why?

Please do not interpret my response as implying that this "must send
CONNECT" requirement is wrong (or correct). At this point, I am just
trying to understand what problem(s) you are trying to solve beyond the
one you have originally described.


Thank you,

Alex.


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] After upgrade from 5.7 to 5.9 the whitelists were not listed , we had to readd them.

2024-07-11 Thread Alex Rousskov

On 2024-07-11 11:24, Alan Long wrote:


We actually go old school and use webmin to manage the squid server
and it showed an upgrade.


It sounds like this is a webmin issue rather than a Squid issue. I do 
not know much about webmin. I hope somebody else here can help you with 
webmin integration, but please consider contacting webmin folks for support.




I am thinking the squid.conf got overwritten, which caused our issue.


If it was overwritten, it was not overwritten by Squid (i.e. by anything 
shipped by the Squid Project).



HTH,

Alex.



-Original Message-
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Thursday, July 11, 2024 9:52 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] After upgrade from 5.7 to 5.9 the whitelists were 
not listed , we had to readd them.



On 2024-07-11 10:23, Alan Long wrote:

We did an upgrade from 5.7 to 5.9 and after the upgrade the whitelists
we had were gone. We had to recreate them and set them up under the
access control section.

Anyone seen this? I have another one in queue for upgrade, and will
get more info once we run the upgrade, but wanted to ask if this is a
known issue.

Also our delay pool had to be recreated as well.


What do you use to upgrade/install Squid? Do you build Squid from sources and then run 
"make install"? Or do you use some packaging software provided by a third party?

Do you put your access rules (a.k.a. whitelists) into squid.conf? Was your 
squid.conf overwritten? With some default configuration??


FWIW, native Squid "make install" does not install or update the squid.conf file 
AFAICT. It installs squid.conf.documented and squid.conf.default.


HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users
Notice: This e-mail message and any attachments are the property of Vesta, are 
confidential, and may contain Vesta proprietary information. If you have 
received this message in error, please notify the sender and delete this 
message immediately. Any other use of this message is strictly prohibited.


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] After upgrade from 5.7 to 5.9 the whitelists were not listed , we had to readd them.

2024-07-11 Thread Alex Rousskov

On 2024-07-11 10:23, Alan Long wrote:
We did an upgrade from 5.7 to 5.9 and after the upgrade the whitelists 
we had were gone. We had to recreate them and set them up under the 
access control section.


Anyone seen this? I have another one in queue for upgrade, and will get 
more info once we run the upgrade, but wanted to ask if this is a known 
issue.


Also our delay pool had to be recreated as well.


What do you use to upgrade/install Squid? Do you build Squid from 
sources and then run "make install"? Or do you use some packaging 
software provided by a third party?


Do you put your access rules (a.k.a. whitelists) into squid.conf? Was 
your squid.conf overwritten? With some default configuration??



FWIW, native Squid "make install" does not install or update the squid.conf 
file AFAICT. It installs squid.conf.documented and squid.conf.default.
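
One rough way to check whether the active configuration was replaced during the 
upgrade (paths are assumptions; adjust them for your installation):

ls -l /etc/squid/squid.conf*                                   # modification times hint at what the upgrade touched
diff -u /etc/squid/squid.conf.default /etc/squid/squid.conf    # little or no output suggests a default config is in use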



HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Rewriting HTTP to HTTPS for generic package proxy

2024-07-10 Thread Alex Rousskov

On 2024-07-10 15:31, Fiehe, Christoph wrote:

The problem is that the proxy just forwards the client GET request to the 
upstream proxy


Why does sending a GET request to the upstream proxy represent a problem 
in your use case? I cannot find anything in your prior messages on this 
thread that would preclude sending a GET request to the upstream proxy.




but in that case a CONNECT is required.


Why?

Please do not interpret my response as implying that this "must send 
CONNECT" requirement is wrong (or correct). At this point, I am just 
trying to understand what problem(s) you are trying to solve beyond the 
one you have originally described.



Thank you,

Alex.



Working case: Upstream proxy receives a CONNECT from the downstream proxy

2024/07/10 21:06:05.355 kid1| 5,2| TcpAcceptor.cc(214) doAccept: New connection 
on FD 12
2024/07/10 21:06:05.355 kid1| 5,2| TcpAcceptor.cc(316) acceptNext: connection 
on conn482169 local=[::]:3128 remote=[::] FD 12 flags=9
2024/07/10 21:06:05.355 kid1| 51,3| fd.cc(168) fd_open: fd_open() FD 16 HTTP 
Request
2024/07/10 21:06:05.355 kid1| 28,3| Eui48.cc(511) lookup: id=0x5651b3e6d558 
10.2.59.181 NOT found
2024/07/10 21:06:05.355 kid1| 17,2| QosConfig.cc(162) getNfConnmark: QOS: 
Failed to retrieve connection mark: (-1) (2) No such file or directory 
(Destination X.X.X.X:3128, source 10.2.59.181:40122)
2024/07/10 21:06:05.355 kid1| 5,3| comm.cc(599) commSetConnTimeout: conn482750 
local=X.X.X.X:3128 remote=10.2.59.181:40122 FD 16 flags=1 timeout 300
2024/07/10 21:06:05.355 kid1| 5,3| IoCallback.cc(112) finish: called for 
conn482750 local=X.X.X.X:3128 remote=10.2.59.181:40122 FD 16 flags=1 (0, 0)
2024/07/10 21:06:05.355 kid1| 5,3| Read.cc(93) ReadNow: conn482750 
local=X.X.X.X:3128 remote=10.2.59.181:40122 FD 16 flags=1, size 4096, retval 
213, errno 0
2024/07/10 21:06:05.355 kid1| 5,3| comm.cc(599) commSetConnTimeout: conn482750 
local=X.X.X.X:3128 remote=10.2.59.181:40122 FD 16 flags=1 timeout 300
2024/07/10 21:06:05.355 kid1| 33,3| Pipeline.cc(43) back: Pipeline 
0x5651b328cb80 empty
2024/07/10 21:06:05.355 kid1| 11,2| client_side.cc(1332) parseHttpRequest: HTTP 
Client conn482750 local=X.X.X.X:3128 remote=10.2.59.181:40122 FD 16 flags=1
2024/07/10 21:06:05.355 kid1| 11,2| client_side.cc(1333) parseHttpRequest: HTTP 
Client REQUEST:
-
CONNECT download.docker.com:443 HTTP/1.1
Host: download.docker.com:443
User-Agent: curl/7.81.0
Via: 1.1 pkg-proxy (squid/6.10)
X-Forwarded-For: 10.2.59.102
Cache-Control: max-age=259200
Connection: close

Not working after schema rewrite: Upstream proxy receives a GET from the proxy

2024/07/10 18:24:44.031 kid1| 5,2| TcpAcceptor.cc(214) doAccept: New connection 
on FD 12
2024/07/10 18:24:44.031 kid1| 5,2| TcpAcceptor.cc(316) acceptNext: connection 
on conn482169 local=[::]:3128 remote=[::] FD 12 flags=9
2024/07/10 18:24:44.031 kid1| 51,3| fd.cc(168) fd_open: fd_open() FD 16 HTTP 
Request
2024/07/10 18:24:44.031 kid1| 28,3| Eui48.cc(511) lookup: id=0x5651b3e6d558 
10.2.59.181 NOT found
2024/07/10 18:24:44.031 kid1| 17,2| QosConfig.cc(162) getNfConnmark: QOS: 
Failed to retrieve connection mark: (-1) (2) No such file or directory 
(Destination X.X.X.X:3128, source 10.2.59.181:59100)
2024/07/10 18:24:44.031 kid1| 5,3| comm.cc(599) commSetConnTimeout: conn482175 
local=X.X.X.X:3128 remote=10.2.59.181:59100 FD 16 flags=1 timeout 300
2024/07/10 18:24:44.031 kid1| 5,3| IoCallback.cc(112) finish: called for 
conn482175 local=X.X.X.X:3128 remote=10.2.59.181:59100 FD 16 flags=1 (0, 0)
2024/07/10 18:24:44.031 kid1| 5,3| Read.cc(93) ReadNow: conn482175 
local=X.X.X.X:3128 remote=10.2.59.181:59100 FD 16 flags=1, size 4096, retval 
293, errno 0
2024/07/10 18:24:44.031 kid1| 5,3| comm.cc(599) commSetConnTimeout: conn482175 
local=X.X.X.X:3128 remote=10.2.59.181:59100 FD 16 flags=1 timeout 300
2024/07/10 18:24:44.031 kid1| 33,3| Pipeline.cc(43) back: Pipeline 
0x5651b328cb80 empty
2024/07/10 18:24:44.031 kid1| 11,2| client_side.cc(1332) parseHttpRequest: HTTP 
Client conn482175 local=X.X.X.X:3128 remote=10.2.59.181:59100 FD 16 flags=1
2024/07/10 18:24:44.031 kid1| 11,2| client_side.cc(1333) parseHttpRequest: HTTP 
Client REQUEST:
-
GET https://download.docker.com/linux/ubuntu/dists/jammy/InRelease HTTP/1.1
Host: download.docker.com
Accept: text/*
User-Agent: Debian APT-HTTP/1.3 (2.4.12) non-interactive
Via: 1.1 pkg-proxy (squid/6.10)
X-Forwarded-For: 10.2.59.102
Cache-Control: max-age=0
Connection: keep-alive




-Original Message-
From: Alex Rousskov 
Sent: Wednesday, July 10, 2024 18:56
To: squid-users@lists.squid-cache.org
Cc: Fiehe, Christoph 
Subject: Re: [squid-users] Rewriting HTTP to HTTPS for generic package proxy

On 2024-07-10 12:42, Fiehe, Christoph wrote:


In the next test case, I used a more modern upstream proxy server based on 
Squid 6.8 and enabled debugging.


The log shows the error SQUID_TLS_ERR_CONNECT+GNUTLS_E_FATAL_ALERT_RECEIVED. I 
am not sure, what I can do to preve

Re: [squid-users] squidclient -h 127.0.0.1 -p 3128 mgr:info shows access denined

2024-07-10 Thread Alex Rousskov

On 2024-07-10 12:55, Jonathan Lee wrote:


Embedding a password in a cache manager command requires providing a
username with -U



squidclient -w /squid-internal-mgr/info -u admin
squidclient -w /squid-internal-mgr/info@redacted -u admin
squidclient -w http://192.168.1.1:3128/squid-internal-mgr/info@redacted 
-u admin
squidclient -w http://127.0.0.1:3128/squid-internal-mgr/info@redacted -u 
admin

squidclient -w http://127.0.0.1:3128/squid-internal-mgr/info
squidclient http://127.0.0.1:3128/squid-internal-mgr/info
squidclient -h 127.0.0.1:3128/squid-internal-mgr/info
squidclient -h 127.0.0.1 /squid-internal-mgr/info
squidclient -h 127.0.0.1 /squid-internal-mgr/info@redcated
squidclient -w 127.0.0.1 /squid-internal-mgr/info@redacted
squidclient -w 127.0.0.1 /squid-internal-mgr/info@redcated -u admin
squidclient -h 192.168.1.1:3128  /squid-internal-mgr/info@redacted
squidclient -h 192.168.1.1  /squid-internal-mgr/info@redacted
squidclient -h 192.168.1.1  /squid-internal-mgr/info

with -w -u -h http spaces I can’t get it to show me stats

Squid 6.6


I do not know whether this mistake is relevant, but squidclient 
documentation and error message imply that you should be using "-U" 
(capital letter U) while you are using "-u" (small letter u).


FWIW, I recommend using curl instead of deprecated squidclient. If 
nothing else, you would be dealing with a well-known tool with extensive 
documentation and wide-spread knowledge that will continue to work when 
you upgrade to Squid v7. Using curl is not going to solve Squid Bug 5283 
and similar Squid-specific problems[1], but it may reduce the number of 
"external" problems you have to deal with (e.g., guessing what 
squidclient command line options actually do).


[1]: https://bugs.squid-cache.org/show_bug.cgi?id=5283
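
As a starting point, a cache manager request with curl can be as simple as this 
sketch (add credentials only if cachemgr_passwd is configured; host and port are 
assumptions):

curl http://127.0.0.1:3128/squid-internal-mgr/info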


HTH,

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Rewriting HTTP to HTTPS for generic package proxy

2024-07-10 Thread Alex Rousskov
2024/07/10 18:24:44.050 kid1| 44,2| peer_select.cc(1180) handlePath:   
always_direct = ALLOWED
2024/07/10 18:24:44.050 kid1| 44,2| peer_select.cc(1181) handlePath:
never_direct = DUNNO
2024/07/10 18:24:44.050 kid1| 44,2| peer_select.cc(1182) handlePath:
timedout = 0
2024/07/10 18:24:44.050 kid1| 17,3| FwdState.cc(610) noteDestination: 
conn482186 local=[::] remote=[2600:9000:2490:b600:3:db06:4200:93a1]:443 
HIER_DIRECT flags=1
2024/07/10 18:24:44.050 kid1| 44,2| peer_select.cc(1174) handlePath: 
PeerSelector64364 found conn482187 local=[::] 
remote=[2600:9000:2490:aa00:3:db06:4200:93a1]:443 HIER_DIRECT flags=1, 
destination #12 for 
https://download.docker.com/linux/ubuntu/dists/jammy/InRelease
2024/07/10 18:24:44.050 kid1| 44,2| peer_select.cc(1180) handlePath:   
always_direct = ALLOWED
2024/07/10 18:24:44.050 kid1| 44,2| peer_select.cc(1181) handlePath:
never_direct = DUNNO
2024/07/10 18:24:44.050 kid1| 44,2| peer_select.cc(1182) handlePath:
timedout = 0
2024/07/10 18:24:44.050 kid1| 17,3| FwdState.cc(610) noteDestination: 
conn482187 local=[::] remote=[2600:9000:2490:aa00:3:db06:4200:93a1]:443 
HIER_DIRECT flags=1
2024/07/10 18:24:44.050 kid1| 44,2| peer_select.cc(479) resolveSelected: 
PeerSelector64364 found all 12 destinations for 
https://download.docker.com/linux/ubuntu/dists/jammy/InRelease
2024/07/10 18:24:44.050 kid1| 44,2| peer_select.cc(480) resolveSelected:   
always_direct = ALLOWED
2024/07/10 18:24:44.050 kid1| 44,2| peer_select.cc(481) resolveSelected:
never_direct = DUNNO
2024/07/10 18:24:44.050 kid1| 44,2| peer_select.cc(482) resolveSelected:
timedout = 0
2024/07/10 18:24:44.050 kid1| 44,3| peer_select.cc(241) ~PeerSelector: 
https://download.docker.com/linux/ubuntu/dists/jammy/InRelease
2024/07/10 18:24:44.050 kid1| 20,3| store.cc(458) unlock: peerSelect unlocking 
key 8349D1070100 e:=p2IV/0x5651b365e210*4
2024/07/10 18:24:44.050 kid1| 48,3| pconn.cc(474) popStored: lookup for key 
{108.138.7.18:443/download.docker.com} failed.
2024/07/10 18:24:44.050 kid1| 28,3| Checklist.cc(69) preCheck: 0x7ffef6c0a460 
checking fast ACLs
2024/07/10 18:24:44.050 kid1| 28,3| DomainData.cc(110) match: 
aclMatchDomainList: checking 'download.docker.com'
2024/07/10 18:24:44.050 kid1| 28,3| DomainData.cc(115) match: 
aclMatchDomainList: 'download.docker.com' NOT found
2024/07/10 18:24:44.050 kid1| 28,3| Acl.cc(175) matches: checked: 
SKIP_PALO_DOMAINS_FAST = 0
2024/07/10 18:24:44.050 kid1| 28,3| Acl.cc(175) matches: checked: 
!SKIP_PALO_DOMAINS_FAST = 1
2024/07/10 18:24:44.051 kid1| 28,3| Acl.cc(175) matches: checked: 
(tcp_outgoing_mark 0x14 line) = 1
2024/07/10 18:24:44.051 kid1| 28,3| Acl.cc(175) matches: checked: 
tcp_outgoing_mark 0x14 = 1
2024/07/10 18:24:44.051 kid1| 28,3| Checklist.cc(62) markFinished: 
0x7ffef6c0a460 answer ALLOWED for match
2024/07/10 18:24:44.051 kid1| 17,3| FwdState.cc(1568) GetMarkingsToServer: from 
0.0.0.0 tos 0 netfilter mark 20
2024/07/10 18:24:44.051 kid1| 5,3| ConnOpener.cc(42) ConnOpener: will connect 
to conn482189 local=0.0.0.0 remote=108.138.7.18:443 HIER_DIRECT flags=1 with 60 
timeout
2024/07/10 18:24:44.051 kid1| 50,3| comm.cc(378) comm_openex: comm_openex: 
Attempt open socket for: 0.0.0.0
2024/07/10 18:24:44.051 kid1| 50,3| comm.cc(420) comm_openex: comm_openex: 
Opened socket conn482190 local=0.0.0.0 remote=[::] FD 19 flags=1 : family=2, 
type=1, protocol=6
2024/07/10 18:24:44.051 kid1| 51,3| fd.cc(168) fd_open: fd_open() FD 19 
download.docker.com
2024/07/10 18:24:44.051 kid1| 50,3| QosConfig.cc(581) setSockNfmark: for FD 19 
to 20
2024/07/10 18:24:44.051 kid1| 5,3| ConnOpener.cc(312) createFd: conn482189 
local=0.0.0.0 remote=108.138.7.18:443 HIER_DIRECT flags=1 will timeout in 60
2024/07/10 18:24:44.058 kid1| 83,2| Io.cc(161) Handshake: handshake IN: Unknown 
Handshake packet
2024/07/10 18:24:44.058 kid1| 83,2| Io.cc(163) Handshake: handshake OUT: CLIENT 
HELLO
2024/07/10 18:24:44.058 kid1| 5,3| comm.cc(599) commSetConnTimeout: conn482189 
local=X.X.X.X:36718 remote=108.138.7.18:443 HIER_DIRECT FD 19 flags=1 timeout 60
2024/07/10 18:24:44.064 kid1| 83,2| Io.cc(161) Handshake: handshake IN: Unknown 
Handshake packet
2024/07/10 18:24:44.064 kid1| 83,2| Io.cc(163) Handshake: handshake OUT: CLIENT 
HELLO
2024/07/10 18:24:44.064 kid1| 83,2| PeerConnector.cc(279) 
handleNegotiationResult: ERROR: Cannot establish a TLS connection to conn482189 
local=X.X.X.X:36718 remote=108.138.7.18:443 HIER_DIRECT FD 19 flags=1:
 problem: failure
 detail: SQUID_TLS_ERR_CONNECT+GNUTLS_E_FATAL_ALERT_RECEIVED
2024/07/10 18:24:44.064 kid1| 5,3| comm.cc(625) commUnsetConnTimeout: Remove 
timeout for conn482189 local=X.X.X.X:36718 remote=108.138.7.18:443 HIER_DIRECT 
FD 19 flags=1
2024/07/10 18:24:44.064 kid1| 5,3| comm.cc(599) commSetConnTimeout: conn482189 
local=X.X.X.X:36718 remote=108.138.7.18:443 HIER_DIRECT FD 19 flags=1 timeout -1
2024/07/10 18:24:44.064 kid1| 5,3| comm.cc(850) _comm_close: start closing

Re: [squid-users] Rewriting HTTP to HTTPS for generic package proxy

2024-07-10 Thread Alex Rousskov

On 2024-07-09 18:25, Fiehe, Christoph wrote:

I hope that somebody has an idea, what I am doing wrong. 


AFAICT from the debugging log, it is your parent proxy that returns an 
ERR_SECURE_CONNECT_FAIL error page in response to a seemingly valid 
"HEAD https://..."; request. Can you ask their admin to investigate? You 
may also recommend that they upgrade from Squid v4 that has many known 
security vulnerabiities.


If parent is uncooperative, you can try to reproduce the problem by 
temporarily installing your own parent Squid instance and configuring your 
child Squid to use that instead.


HTH,

Alex.
P.S. Unlike Amos, I do not see serious conceptual problems with 
rewriting request target scheme (as a temporary compatibility measure). 
It may not always work, for various reasons, but it does not necessarily 
make things worse (and may make things better).





I try to build a generic package proxy with Squid and need the feature 
to rewrite (not redirect) a HTTP request to a package repository 
transparently to a HTTPS-based package source. I was able to get Jesred 
working and defined the following rewrite rule:


regex ^http:\/\/download\.docker\.com(.*)$ https://download.docker.com\1

I had to use a parent upstream proxy. In my test case the rule gets applied 
successfully:

1720558404.106 10.2.59.102/molecule-ubuntu-jammy.lx.mycompany.de 
http://download.docker.com/linux/ubuntu/dists/jammy/InRelease
 https://download.docker.com/linux/ubuntu/dists/jammy/InRelease 2

I have validated that the returned URL is correct and that the resource is 
accessible via my upstream proxy.

But at the very end, the client receives a 503 error code. I have set "debug_options 
ALL,3" and this gives the log:

[...]
2024/07/09 23:35:40.115 kid1| 11,2| client_side.cc(1333) parseHttpRequest: HTTP 
Client REQUEST:
-
HEAD http://download.docker.com/linux/ubuntu/dists/jammy/InRelease HTTP/1.1
Host: download.docker.com
User-Agent: curl/7.81.0
Accept: */*
Proxy-Connection: Keep-Alive


--
2024/07/09 23:35:40.115 kid1| 33,3| client_side.cc(1364) parseHttpRequest: 
complete request received. prefix_sz = 174, request-line-size=77, 
mime-header-size=97, mime header block:
Host: download.docker.com
User-Agent: curl/7.81.0
Accept: */*
Proxy-Connection: Keep-Alive


--
2024/07/09 23:35:40.115 kid1| 87,3| clientStream.cc(139) 
clientStreamInsertHead: clientStreamInsertHead: Inserted node 0x5c3ba4154308 
with data 0x5c3ba4152950 after head
2024/07/09 23:35:40.115 kid1| 5,3| comm.cc(599) commSetConnTimeout: conn9 
local=10.2.59.103:8000 remote=10.2.59.102:56466 FD 15 flags=1 timeout 86400
2024/07/09 23:35:40.115 kid1| 33,3| client_side.cc(1767) add: 0x5c3ba41518e0*3 
to 0/0
2024/07/09 23:35:40.115 kid1| 33,3| Pipeline.cc(24) add: Pipeline 
0x5c3ba41501f0 add request 1 0x5c3ba41518e0*4
2024/07/09 23:35:40.115 kid1| 23,3| Uri.cc(446) parse: Split URL 
'http://download.docker.com/linux/ubuntu/dists/jammy/InRelease'
 into proto='http', host='download.docker.com', port='80', 
path='/linux/ubuntu/dists/jammy/InRelease'
2024/07/09 23:35:40.115 kid1| 14,3| Address.cc(389) lookupHostIP: Given Non-IP 
'download.docker.com': Name or service not known
2024/07/09 23:35:40.115 kid1| 33,3| client_side.cc(702) clientSetKeepaliveFlag: 
http_ver = HTTP/1.1
2024/07/09 23:35:40.115 kid1| 33,3| client_side.cc(703) clientSetKeepaliveFlag: 
method = HEAD
2024/07/09 23:35:40.115 kid1| 85,3| client_side_request.cc(122) 
ClientRequestContext: ClientRequestContext constructed, this=0x5c3ba4154e78
2024/07/09 23:35:40.115 kid1| 83,3| client_side_request.cc(1708) doCallouts: Doing 
calloutContext->hostHeaderVerify()
2024/07/09 23:35:40.115 kid1| 85,3| client_side_request.cc(606) 
hostHeaderVerify: validate host=download.docker.com, port=0, portStr=NULL
2024/07/09 23:35:40.115 kid1| 85,3| client_side_request.cc(620) 
hostHeaderVerify: validate skipped.
2024/07/09 23:35:40.115 kid1| 83,3| client_side_request.cc(1715) doCallouts: Doing 
calloutContext->clientAccessCheck()
2024/07/09 23:35:40.115 kid1| 28,3| Checklist.cc(69) preCheck: 0x5c3ba41552d8 
checking slow rules
2024/07/09 23:35:40.115 kid1| 28,3| Ip.cc(538) match: aclIpMatchIp: 
'10.2.59.102:56466' found
2024/07/09 23:35:40.115 kid1| 28,3| Acl.cc(175) matches: checked: all = 1
2024/07/09 23:35:40.115 kid1| 28,3| Acl.cc(175) matches: checked: http_access#1 
= 1
2024/07/09 23:35:40.115 kid1| 28,3| Acl.cc(175) matches: checked: http_access = 
1
2024/07/09 23:35:40.115 kid1| 28,3| Checklist.cc(62) markFinished: 
0x5c3ba41552d8 answer ALLOWED for match
2024/07/09 23:35:40.115 kid1| 28,3| Checklist.cc(162) checkCallback: 
ACLChecklist::checkCallback: 0x5c3ba41552d8 answer=ALLOWED
2024/07/09 23:35:40.115 kid1| 85,2| client_side_request.cc(714) 
clientAccessCheckDone: The request HEAD 
http://download.dock

Re: [squid-users] Squid 6.6 kick abandoning connections

2024-07-08 Thread Alex Rousskov

On 2024-07-08 12:31, Jonathan Lee wrote:


I can confirm I have no ipv6 our isp is ipv4 only and I have IPv6
disabled on the firewall and with layer 2 and 3 traffic


This problem is not specific to any IP family/version.

Alex.



On Jul 8, 2024, at 09:15, Alex Rousskov  
wrote:

On 2024-07-05 21:07, Jonathan Lee wrote:


I am using Bump with certificates installed on devices does anyone know what 
this error is...
kick abandoning conn43723 local=192.168.1.1:3128 remote=192.168.1.5:52129 FD 
178 flags=1



This "kick abandoning" message marks a Squid problem or bug: Squid enters a 
seemingly impossible state. In some (but probably not all) cases, the client connection 
might become stuck (hopefully until some timeout closes it). In some (and possibly all) 
cases, Squid might immediately close the connection and nobody gets hurt. Code reporting 
this problem does not know how we got here and what will happen next.

There were several incomplete/unfinished attempts to fix this problem, 
including two different patches posted at Bug 3715. I do not know whether 
either of them is safe and applies to Squid v6. Neither is a comprehensive 
solution.
https://bugs.squid-cache.org/show_bug.cgi?id=3715



Does anyone know how to fix my last weird error I have with Squid 6.6


I do not know of a good configuration-based workaround. Squid code 
modifications are required to properly address this problem. Other errors may 
trigger this bug, so addressing those other errors may hide (and reduce the 
pressure to fix) this bug. Besides fixing those other errors (if any -- I am 
aware that you have said that there are no other errors left, but perhaps you 
found other problems since then), these basic options apply:

https://wiki.squid-cache.org/SquidFaq/AboutSquid#how-to-add-a-new-squid-feature-enhance-of-fix-something

Alex.



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 6.6 kick abandoning connections

2024-07-08 Thread Alex Rousskov

On 2024-07-05 21:07, Jonathan Lee wrote:

I am using Bump with certificates installed on devices does anyone know 
what this error is...


kick abandoning conn43723 local=192.168.1.1:3128 
remote=192.168.1.5:52129 FD 178 flags=1



This "kick abandoning" message marks a Squid problem or bug: Squid 
enters a seemingly impossible state. In some (but probably not all) 
cases, the client connection might become stuck (hopefully until some 
timeout closes it). In some (and possibly all) cases, Squid might 
immediately close the connection and nobody gets hurt. Code reporting 
this problem does not know how we got here and what will happen next.


There were several incomplete/unfinished attempts to fix this problem, 
including two different patches posted at Bug 3715. I do not know 
whether either of them is safe and applies to Squid v6. Neither is a 
comprehensive solution.

https://bugs.squid-cache.org/show_bug.cgi?id=3715



Does anyone know how to fix my last weird error I have with Squid 6.6


I do not know of a good configuration-based workaround. Squid code 
modifications are required to properly address this problem. Other 
errors may trigger this bug, so addressing those other errors may hide 
(and reduce the pressure to fix) this bug. Besides fixing those other 
errors (if any -- I am aware that you have said that there are no other 
errors left, but perhaps you found other problems since then), these 
basic options apply:


https://wiki.squid-cache.org/SquidFaq/AboutSquid#how-to-add-a-new-squid-feature-enhance-of-fix-something

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-08 Thread Alex Rousskov

On 2024-07-05 20:55, Jonathan Lee wrote:

FIXED

I think it wanted a new certificate generated; mine became too weak. I 
needed one that is ECDSA with prime256v1 sha256 and not RSA anymore; that 
solved my errors


Glad you found a solution! And if these errors resurface, you now have a 
blueprint for their initial triage.


Alex.



The error is gone when this cert is used :)


On Jul 5, 2024, at 14:33, Jonathan Lee  wrote:

However even with it marked as no

05.07.2024 14:30:46	ERROR: failure while accepting a TLS connection on 
conn4633 local=192.168.1.1:3128 remote=192.168.1.5:49721 FD 30 
flags=1: SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR=1




continues

I am going to take a break please if anyone know how to resolve this 
or wants me to try something else let me know. I was originally 
looking for the certificate when this error occurs however the error 
comes from the TLS_v1.3 as seen in the pcap files below.



Thanks again everyone


On Jul 4, 2024, at 16:02, Jonathan Lee  wrote:

I do not recommend changing your configuration at this time. I 
recommend rereading my earlier recommendation and following that 
instead: "As the next step in triage, I recommend determining what 
that CA is in these cases (e.g., by capturing raw TLS packets and 
matching them with connection information from A000417 error 
messages in cache.log or %err_detail in access.log)."


Ok I went back to 5.8 and ran the following command after I removed 
the changes I used does this help this is ran on the firewall side 
itself.


 openssl s_client -connect foxnews.com:443

depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global 
Root CA
verify return:1
depth=1 C = US, O = DigiCert Inc, CN = DigiCert TLS RSA SHA256 2020 CA1
verify return:1
depth=0 C = US, ST = New York, L = New York, O = "Fox News Network, LLC", CN = 
wildcard.foxnews.com
verify return:1
CONNECTED(0004)
---
Certificate chain
  0 s:C = US, ST = New York, L = New York, O = "Fox News Network, LLC", CN = 
wildcard.foxnews.com
i:C = US, O = DigiCert Inc, CN = DigiCert TLS RSA SHA256 2020 CA1
  1 s:C = US, O = DigiCert Inc, CN = DigiCert TLS RSA SHA256 2020 CA1
i:C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global 
Root CA

-END CERTIFICATE-
subject=C = US, ST = New York, L = New York, O = "Fox News Network, LLC", CN = 
wildcard.foxnews.com

issuer=C = US, O = DigiCert Inc, CN = DigiCert TLS RSA SHA256 2020 CA1

---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: ECDSA
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 4198 bytes and written 393 bytes
Verification: OK
---
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Server public key is 256 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
DONE

Does that help I am not going to pretend I understand TLS options I 
do understand how the SSL ciphers work and certificates but all the 
different options and kinds are what is confusing me. I did not seem 
to have this error before.



Should I regenerate a new certificate for the new version of Squid 
and redeploy them all to hosts again? I used this method in the past 
and it worked for a long time after I imported it. I am wondering if 
this is outdated now


*openssl req -x509 -new -nodes -key myProxykey.key -sha256 -days 365 
-out myProxyca.pem*




On Jul 4, 2024, at 15:13, Jonathan Lee  wrote:

Sorry

tls_outgoing_options 
cipher=HIGH:MEDIUM:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
tls_outgoing_options options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE

Would I add this here?

On Jul 4, 2024, at 15:12, Jonathan Lee  
wrote:


I know before I could use

tls_outgoing_options 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS

However with the update I am seeing

ERROR: Unsupported TLS option SINGLE_ECDH_USE

I found researching on lists.squid-cache.org that someone solved this by 
appending TLS13-AES-256-GCM-SHA384 to the ciphers.


I am thinking this is my issue also.

I see that error over and over when I run "squid -k parse”

Do I append this to the options cipher list?

Jonathan Lee

On Jul 4, 2024, at 14:45, Alex Rousskov 
 wrote:


On 2024-07-04 15:37, Jonathan Lee wrote:


in Squid.conf I have nothing with that detective.


Sounds good; sslproxy_cert_sign default should work OK in most 
cases. I mentioned signUntrusted algorithm so that you can 
discover (from the corresponding sslproxy_cert_sign documentation) 
which CA/certificate Squid uses in which SslBump use case. Triage 
is often easier if folks share the same working theory, and my 
current working theory sugge

Re: [squid-users] ICMP and QUIC

2024-07-08 Thread Alex Rousskov

On 2024-07-08 00:06, Jonathan Lee wrote:


When watching facebook reels everything works as expected after about
15 minutes the system starts to attempt to use QUIC and after my iMac
fan goes crazy and the website locks up..


Squid does not proxy UDP traffic (including QUIC). UDP traffic should 
not be forwarded/redirected/intercepted/etc. to Squid primary TCP 
listening ports (i.e. http_port, https_port, and ftp_port).


If you see signs of UDP traffic getting to Squid primary TCP ports, then 
something is misconfigured outside of Squid. In either case, the 
solution probably lies outside of Squid.
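
If the goal is simply to keep clients off QUIC so that they fall back to TCP 
(which your proxy setup already handles), a common approach is to block outbound 
UDP port 443 on the firewall. An iptables-style sketch, assuming a Linux 
forwarding firewall rather than pf:

iptables -A FORWARD -p udp --dport 443 -j REJECT   # browsers retry over TCP/443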



HTH,

Alex.



HTTPS was reserved for 443. QUIC is also using UDP 443 and not following proper 
protocol rules.

I do understand that QUIC is HTTP3 and uses UDP over 443 only. Again from a 
cybersecurity perspective how do you set up this protocol in the proxy ?

Here is the photo of the pcap showing the issue..

Does anyone know what to do to fix this?



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-05 Thread Alex Rousskov

On 2024-07-05 12:02, Jonathan Lee wrote:


> Alex: I recommend determining what that CA is in these cases (e.g., by 
capturing raw TLS packets and matching them with connection information from 
A000417 error messages in cache.log or %err_detail in access.log).




I have Wireshark running do I just look for information with ssl.handshake.type 
== 1



Or is there a wireshark particular filter you would like ran to help with 
isolation?



Please use Wireshark to determine the name of CA that issued the 
certificate that Squid sent to the client in the failing test case. If 
you are not sure, feel free to share issuer and subject fields of all 
certificates that Squid sent to the client in that test case (there may 
be two of each if Squid sent two certificates). Or even share a pointer 
to the entire (compressed) raw test case packet capture in pcap format!


These certificates are a part of standard TLS handshake, and Wireshark 
usually displays their fields when one studies TLS handshake bytes using 
Wireshark UI.


I do not know what filter would work best, but there should be just a 
handful of TLS handshake packets to examine for the test case, so no 
filter should be necessary.
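
If you want a filter anyway, something along these lines narrows the capture to 
handshake traffic on the Squid port (a sketch; field names vary between Wireshark 
versions, and server certificates are only visible in cleartext for TLS 1.2 and 
earlier):

tcp.port == 3128 && tls.handshake   # all handshake messages on the proxy port
tls.handshake.type == 11            # Certificate messages only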



HTH,

Alex.




On Jul 5, 2024, at 08:23, Jonathan Lee  wrote:

Thanks for the email and support with this. I will get wireshark running on the 
client and get the info required. Yes the information prior is from the 
firewall side outside of the proxy testing from the demilitarized zone area. I 
wanted to test this first to rule that out as it’s coming in from that first 
and hits the proxy next
Sent from my iPhone


On Jul 5, 2024, at 06:33, Alex Rousskov  
wrote:

On 2024-07-04 19:12, Jonathan Lee wrote:

You also stated .. " my current working theory suggests that we are looking at 
a (default) signUntrusted use case.”
I noticed for Squid documents that default is now set to off ..


The http_port option you are looking at now is not the directive I was talking 
about earlier.


http_port
tls-default-ca[=off]
   Whether to use the system Trusted CAs. Default is OFF.
Would enabling this resolve the problem in Squid 6.6 for error.



No, the above poorly documented http_port option is for validating _client_ 
certificates. It has been off since Squid v4 AFAICT. Your clients are not 
sending client certificates to Squid.

According to the working theory, the problem we are solving is related to 
server certificates. http_port tls-default-ca option does not affect server 
certificate validation. Server certificate validation should use default CAs by 
default.

Outside of SslBump, server certificate validation is controlled by tls_outgoing_options 
default-ca option. That option defaults to "on". I am not sure whether SslBump 
honors that directive/option though. There are known related bugs in that area. However, 
we are jumping ahead of ourselves. We should confirm the working theory first.


The squid.conf.documented lists it incorrectly


Squid has many directives and a directive may have many options. One should not 
use a directive option name instead of a directive name. One should not use an 
option from one directive with another directive. Squid naming is often 
inconsistent; be careful.

* http_port is a directive. tls-default-ca is an option for that directive. It is used for client 
certificate validation. It defaults to "off" (because client certificates are rarely 
signed by well-known (a.k.a. "default") CAs preinstalled in many deployment environments).

* tls_outgoing_options is a directive. default-ca is an option for that directive. It is used for 
server certificate validation outside of SslBump contexts (at least!). It defaults to 
"on" (because server certificates are usually signed by well-known (a.k.a. 
"default") CAs preinstalled in many deployment environments).

AFAICT, the documentation in question is not wrong (but is insufficient).
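
Purely as an illustration of the naming (not a suggested change), the two 
similarly named knobs belong to different directives:

http_port 3128 ... tls-default-ca=off    # option of http_port; governs client certificate validation
tls_outgoing_options default-ca=on       # option of tls_outgoing_options; governs server certificate validation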

Again, I do not recommend changing any Squid configuration directives/options 
at this triage state.

Alex.



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ERROR: Unsupported TLS option SINGLE_ECDH_USE

2024-07-05 Thread Alex Rousskov

On 2024-07-05 11:35, Jonathan Lee wrote:


tls_outgoing_options options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE

ERROR: Unsupported TLS option SINGLE_ECDH_USE


Your OpenSSL version defines SSL_OP_SINGLE_ECDH_USE name but otherwise 
ignores SSL_OP_SINGLE_ECDH_USE. OpenSSL behavior that was triggered by 
using this option in old OpenSSL releases is now default behavior, so 
using this option is no longer needed to trigger single-DH key use[1].


Adding SINGLE_ECDH_USE to your configuration achieves/changes nothing 
(with modern OpenSSL versions) as far as traffic on the wire is 
concerned. AFAICT, you should not use that option in squid.conf.
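
In other words, the quoted line can simply drop the flagged option; a sketch of 
the same configuration without it:

tls_outgoing_options options=NO_SSLv3,SINGLE_DH_USE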


HTH,

Alex.

[1]: 
https://wiki.openssl.org/index.php/List_of_SSL_OP_Flags#SSL_OP_SINGLE_DH_USE


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid as http to https forward proxy

2024-07-05 Thread Alex Rousskov

On 2024-07-05 10:15, Wagner, Juergen03 wrote:


FWIW, I do not know why URL scheme rewriting does not work in your use case. In 
principle, bugs notwithstanding, I would expect URL scheme rewriting to work. In 
my original response, I was focusing on avoiding rewrites for a case where they 
should not be needed because they should not be needed (in that use case) and 
because they did not work (in your specific tests), but _not_ because they should 
not work in principle or on some fundamental level.



Do you expect that by just changing the URL scheme from http to https Squid is 
doing the encryption and decryption of the data?


Yes, I do expect Squid to honor the new URL scheme. Honoring rewriter 
instructions is natural, but there is more to the story here: Bugs 
notwithstanding, Squid is already capable of receiving a plain text 
"GET https://..." request on http_port and doing the encryption when 
talking to the requested TLS origin server. What your rewriter is doing 
feels very similar to that GET-https use case!
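
That GET-https case can even be exercised by hand against the proxy's plain HTTP 
port, for example with netcat (a rough sketch; host, port, and URL are 
placeholders):

printf 'GET https://example.com/ HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' | nc 127.0.0.1 3128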




My suspicion was or is, that Squid is just forwarding the unencrypted data form 
the http client to the https server.


IMO, that would be a Squid bug in this case.



How can I check this when generating the Squid logs with "debug_options ALL,9"


You should not. Debugging logs are for Squid developers. Me or others 
will check if you share the logs. Share privately if needed and/or use 
fake/temporary keys/passwords/etc. to avoid leaking something sensitive. 
There are a few relevant hints at


https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction


HTH,

Alex.



<<
FWIW, I do not know why URL scheme rewriting does not work in your use case. In 
principle, bugs notwithstanding, I would expect URL scheme rewriting to work. 
In my original response, I was focusing on avoiding rewrites for a case where 
they should not be needed because they should not be needed (in that use case) 
and because they did not work (in your specific tests), but _not_ because they 
should not work in principle or on some fundamental level.




-Original Message-
From: Alex Rousskov 
Sent: Friday, July 5, 2024 15:52
To: Wagner, Juergen03 ; 
squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid as http to https forward proxy



On 2024-07-05 09:16, Wagner, Juergen03 wrote:


Actually we want to be able to connect to any remote server.
So we are not looking for a solution with a "single true origin server".


Thank you for clarifying that.



My current understanding from your response is, that a simple
url-rewrite only, as we tried, is not working to forward http requests
from a client to any https server.


FWIW, I do not know why URL scheme rewriting does not work in your use case. In 
principle, bugs notwithstanding, I would expect URL scheme rewriting to work. 
In my original response, I was focusing on avoiding rewrites for a case where 
they should not be needed because they should not be needed (in that use case) 
and because they did not work (in your specific tests), but _not_ because they 
should not work in principle or on some fundamental level.



Just to be clear, is the usage of Squid as a forward http proxy for a
client while using https for external communication possible without
any Squid code changes?


That particular question covers several different scenarios. Your use case is 
one of them. The answer for your specific use case is unknown (to me) -- I 
would expect URL scheme rewrites to work but they do not in your test. I do not 
know why.



Alex: At some point, depending on the use case, it will be easier to
enhance Squid to encrypt plain HTTP requests


That comment still applies. However, if you would prefer to avoid (or at least reduce) Squid code 
modifications, it may be useful to triage why scheme rewrites do not work in your tests. In other words, why 
Squid generates a "Bad Gateway" error when the rewriter changes request URL scheme from 
"http" to "https". Sharing a pointer to compressed cache.log collected with debug_options 
set to ALL,9 while reproducing the problem with a single test transaction may be the best next step.


HTH,

Alex.



-----Original Message-
From: squid-users  On
Behalf Of Alex Rousskov
Sent: Thursday, July 4, 2024 18:43
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid as http to https forward proxy



On 2024-07-04 12:36, Alex Rousskov wrote:

On 2024-07-04 10:58, Matus UHLAR - fantomas wrote:

On 2024-07-04 09:20, Wagner, Juergen03 wrote:

we 

Re: [squid-users] Squid as http to https forward proxy

2024-07-05 Thread Alex Rousskov

On 2024-07-05 09:16, Wagner, Juergen03 wrote:


Actually we want to be able to connect to any remote server.
So we are not looking for a solution with a "single true origin server".


Thank you for clarifying that.



My current understanding from your response is, that a simple
url-rewrite only, as we tried, is not working to forward http
requests from a client to any https server.


FWIW, I do not know why URL scheme rewriting does not work in your use 
case. In principle, bugs notwithstanding, I would expect URL scheme 
rewriting to work. In my original response, I was focusing on avoiding 
rewrites for a case where they should not be needed because they should 
not be needed (in that use case) and because they did not work (in your 
specific tests), but _not_ because they should not work in principle or 
on some fundamental level.




Just to be clear, is the usage of Squid as a forward http proxy for a
client while using https for external communication possible without
any Squid code changes?


That particular question covers several different scenarios. Your use 
case is one of them. The answer for your specific use case is unknown 
(to me) -- I would expect URL scheme rewrites to work but they do not in 
your test. I do not know why.




Alex: At some point, depending on the use case, it will be easier to
enhance Squid to encrypt plain HTTP requests


That comment still applies. However, if you would prefer to avoid (or at 
least reduce) Squid code modifications, it may be useful to triage why 
scheme rewrites do not work in your tests. In other words, why Squid 
generates a "Bad Gateway" error when the rewriter changes request URL 
scheme from "http" to "https". Sharing a pointer to compressed cache.log 
collected with debug_options set to ALL,9 while reproducing the problem 
with a single test transaction may be the best next step.
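
A rough one-transaction capture recipe (paths and the test URL are assumptions; 
adjust them for your setup):

debug_options ALL,9                      # in squid.conf, then: squid -k reconfigure
curl -x http://127.0.0.1:3128 http://origin.example.com/some/path -o /dev/null
xz -k /var/log/squid/cache.log           # compress the log before sharing a pointer to it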



HTH,

Alex.



-Original Message-----
From: squid-users  On Behalf Of Alex 
Rousskov
Sent: Thursday, July 4, 2024 18:43
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Squid as http to https forward proxy



On 2024-07-04 12:36, Alex Rousskov wrote:

On 2024-07-04 10:58, Matus UHLAR - fantomas wrote:

On 2024-07-04 09:20, Wagner, Juergen03 wrote:

we are evaluating Squid to be used as a http to https forward proxy.

So Squid would need to support the following setup:

 http (client)  --->  Squid  --->  https (server)

Could someone please confirm if the given setup is in principle
possible with Squid?

If yes, which configuration needs to be done?


On 04.07.24 10:36, Alex Rousskov wrote:

Yes, Squid should be able to forward plain text HTTP requests to
a secure server. Use cache_peer directive with "tls" and "originserver"
flags. Here is an untested sketch:

# routing all traffic to one HTTPS origin server
cache_peer 127.0.0.1 parent 443 0 tls originserver \
name=MySecureOrigin \
no-query no-digest
cache_peer_access MySecureOrigin allow all
always_direct deny all
never_direct allow all
nonhierarchical_direct off


Afaik this means that it is not possible with any remote server,
because all servers you want to access this way must be explicitly
set up in squid.conf, correct?


I assumed (possibly incorrectly) that Juergen was asking about a
single "true origin server" (e.g., example.com). The above example was
written with a single "true origin server" in mind. However, exactly
the same Squid configuration may work to forward traffic to a reverse
proxy (running at 127.0.0.1 on port 443) that "represents"
multiple/different "true origin servers".

That reverse proxy will need to shovel TLS bytes received from Squid
to the right "true origin server", but I am guessing that it can do
that based on TLS SNI supplied by Squid. Some Squid code modifications
may be necessary to make this work correctly with persistent
Squid-to-peer connections and such, but nothing major AFAICT (and they
can be turned off using server_persistent_connections if they are in the way).

AFAICT, with either SslBump or some Squid code modifications, that
reverse proxy can be a Squid proxy. With even more Squid enhancements,
that reverse proxy can also become an https_port on the same Squid
proxy instance where the http_port receives plain HTTP requests!


At some point, depending on the use case, it will be easier to enhance Squid to 
encrypt plain HTTP requests without using this TLS cache_peer hack, of course.

Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users
_

Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-05 Thread Alex Rousskov

On 2024-07-04 19:12, Jonathan Lee wrote:
You also stated .. " my current working theory suggests that we are 
looking at a (default) signUntrusted use case.”


I noticed for Squid documents that default is now set to off ..


The http_port option you are looking at now is not the directive I was 
talking about earlier.


> http_port

   tls-default-ca[=off]
Whether to use the system Trusted CAs. Default is OFF.

Would enabling this resolve the problem in Squid 6.6 for error.



No, the above poorly documented http_port option is for validating 
_client_ certificates. It has been off since Squid v4 AFAICT. Your 
clients are not sending client certificates to Squid.


According to the working theory, the problem we are solving is related 
to server certificates. http_port tls-default-ca option does not affect 
server certificate validation. Server certificate validation should use 
default CAs by default.


Outside of SslBump, server certificate validation is controlled by 
tls_outgoing_options default-ca option. That option defaults to "on". I 
am not sure whether SslBump honors that directive/option though. There 
are known related bugs in that area. However, we are jumping ahead of 
ourselves. We should confirm the working theory first.


> The squid.conf.documented lists it incorrectly

Squid has many directives and a directive may have many options. One 
should not use a directive option name instead of a directive name. One 
should not use an option from one directive with another directive. 
Squid naming is often inconsistent; be careful.


* http_port is a directive. tls-default-ca is an option for that 
directive. It is used for client certificate validation. It defaults to 
"off" (because client certificates are rarely signed by well-known 
(a.k.a. "default") CAs preinstalled in many deployment environments).


* tls_outgoing_options is a directive. default-ca is an option for that 
directive. It is used for server certificate validation outside of 
SslBump contexts (at least!). It defaults to "on" (because server 
certificates are usually signed by well-known (a.k.a. "default") CAs 
preinstalled in many deployment environments).


AFAICT, the documentation in question is not wrong (but is insufficient).

Again, I do not recommend changing any Squid configuration 
directives/options at this triage state.


Alex.

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Cache Issues migration from 5.8 to 6.6

2024-07-05 Thread Alex Rousskov

On 2024-07-04 19:02, Jonathan Lee wrote:
I do not recommend changing your configuration at this time. I 
recommend rereading my earlier recommendation and following that 
instead: "As the next step in triage, I recommend determining what 
that CA is in these cases (e.g., by capturing raw TLS packets and 
matching them with connection information from A000417 error 
messages in cache.log or %err_detail in access.log)."


Ok I went back to 5.8 and ran the following command after I removed the 
changes I used does this help this is ran on the firewall side itself.


  openssl s_client -connect foxnews.com:443

depth=2 C = US, O = DigiCert Inc, OU = www.digicert.com, CN = DigiCert Global 
Root CA


Did the above connection go through Squid? Sorry, I do not know whether 
"on the firewall side itself" implies a "yes" or "no" answer in this 
test case.




Does that help


It does not hurt, but it is not the information I have requested for the 
next triage step: I asked about the certificate corresponding to the 
A000417 error message in Squid v6.6. You are sharing the certificate 
corresponding to either a direct connection to the origin server or the 
certificate corresponding to a problem-free connection through Squid v5.8.



Should I regenerate a new certificate for the new version of Squid and 
redeploy them all to hosts again?


IMHO, on this thread, you should follow the recommended triage steps. If 
those recommendations are problematic, please discuss.


Alex.


On Jul 4, 2024, at 14:45, Alex Rousskov 
 wrote:


On 2024-07-04 15:37, Jonathan Lee wrote:


in Squid.conf I have nothing with that detective.


Sounds good; sslproxy_cert_sign default should work OK in most 
cases. I mentioned signUntrusted algorithm so that you can discover 
(from the corresponding sslproxy_cert_sign documentation) which 
CA/certificate Squid uses in which SslBump use case. Triage is often 
easier if folks share the same working theory, and my current 
working theory suggests that we are looking at a (default) 
signUntrusted use case.


The solution here probably does _not_ involve changing 
sslproxy_cert_sign configuration, but, to make progress, I need more 
info to confirm this working theory and describe next steps.




Yes I am using SSL bump with this configuration..


Noted, thank you.



So would I use this directive


I do not recommend changing your configuration at this time. I 
recommend rereading my earlier recommendation and following that 
instead: "As the next step in triage, I recommend determining what 
that CA is in these cases (e.g., by capturing raw TLS packets and 
matching them with connection information from A000417 error 
messages in cache.log or %err_detail in access.log)."



HTH,

Alex.



On Jul 4, 2024, at 09:56, Alex Rousskov wrote:

On 2024-07-04 12:11, Jonathan Lee wrote:
failure while accepting a TLS connection on conn5887 
local=192.168.1.1:3128

SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417


A000417 is an "unknown CA" alert sent by client to Squid while the 
client is trying to establish a TLS connection to/through Squid. 
The client does not trust the Certificate Authority that signed 
the certificate that was used for that TLS connection.


As the next step in triage, I recommend determining what that CA 
is in these cases (e.g., by capturing raw TLS packets and matching 
them with connection information from A000417 error messages in 
cache.log or %err_detail in access.log).


If you use SslBump for port 3128 traffic, then one of the 
possibilities here is that Squid is using an unknown-to-client CA 
to report an origin server that Squid itself does not trust (see 
signUntrusted in squid.conf.documented). In those cases, logging a 
level-1 ERROR is a Squid bug because that expected/desirable 
outcome should be treated as success (and a successful TLS accept 
treated as an error!).



HTH,

Alex.




Is my main concern however I use the squid guard URL blocker
Sent from my iPhone
On Jul 4, 2024, at 07:41, Alex Rousskov 
 wrote:


On 2024-07-03 13:56, Jonathan Lee wrote:

Hello fellow Squid users does anyone know how to fix this issue?


I counted about eight different "issues" in your cache.log 
sample. Most of them are probably independent. I recommend that 
you explicitly pick _one_, search mailing list archives for 
previous discussions about it, and then provide as many details 
about it as you can (e.g., what traffic causes it and/or 
matching access.log records).



HTH,

Alex.



Squid - Cache Logs
Date-Time    Message
31.12.1969 16:00:00
03.07.2024 10:54:34    kick abandoning 
conn7853 local=192.168.1.1:3128 remote=192.168.1.5:49710 FD 89 
flags=1

31.12.1969 16:00:00
03.07.2024 10:54:29    kick abandoning 
conn7844 local=192.168.1.1:3128 remote=192.168.1.5:49702 FD 81 
flags=1
03.07.2024 10:54:09    ERROR: failure while accepting a TLS 
connection on conn7648 local=192.168.1.1:3128 
remote=192.168.1.5:49672
