Re: [squid-users] NTRIP protocol through proxy

2024-10-22 Thread Amos Jeffries

On 22/10/24 21:20, Matus UHLAR - fantomas wrote:

Hello,

We have a customer who communicates using the NTRIP protocol through a proxy.

I have noticed that the client connects using an HTTP GET request, while the
server responds with an ICY response, which indicates the Icecast protocol
that should be supported by Squid since version 3.1:
http://www.squid-cache.org/Versions/v3/3.1/RELEASENOTES.html#ss2.11

Customer reports that this does not work. While it is apparently not a
problem of Squid, I still would like to ask:

- does anyone have experience with using squid for NTRIP communication?



Not NTRIP specifically. The documentation for it specifies that it "uses 
HTTP/1.1", so should work for Squid.


If you can obtain a "debug_options 11,2" trace of the messages going 
through it may help track down what specifically is not working.
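For reference, a minimal squid.conf fragment for capturing such a trace might look like this (debug section 11 covers HTTP message traffic; level 2 is verbose, so revert it once the trace is captured):

```
# keep general logging quiet, raise only HTTP message tracing
debug_options ALL,1 11,2
```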



On the ICY topic, it was definitely working as of v3.4. The client I had 
using ICY no longer does. So I would not be surprised if something got 
broken more recently without being noticed.



Cheers
Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Unable to access a device over port 4434

2024-10-22 Thread Amos Jeffries

On 23/10/24 01:34, Piana, Josh wrote:

Amos,

Thank you for the update in regards to the credentials.

I looked into it a bit more too, and it helped clear up my misunderstanding.

The credentialsttl configuration directive only controls how often these credentials are 
internally verified by Squid. If the browser is closed and then reopened and the browser 
pops up a credential dialog, that has nothing to do with Squid: it means 
that the browser does not know what credentials it should pass to the proxy and therefore 
asks for them. The credentialsttl configuration directive means how often the 
password should be "verified" after the last successful verification.


Correct.
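The TTL behaviour described above can be sketched in a few lines. This is a simplification, not Squid's actual implementation: a successful helper verification is cached for `ttl` seconds, and the external helper is only consulted again after the window expires.

```python
import time

class BasicCredentialCache:
    """Sketch of Squid's credentialsttl behaviour for Basic auth:
    a successful verification is cached for `ttl` seconds, so the
    (possibly slow) external helper is only re-consulted on expiry.
    `verify` stands in for the auth helper."""

    def __init__(self, verify, ttl=7200):  # 2 hours, as configured above
        self.verify = verify
        self.ttl = ttl
        self.cache = {}  # (user, password) -> time of last successful check

    def check(self, user, password, now=None):
        now = time.time() if now is None else now
        key = (user, password)
        last_ok = self.cache.get(key)
        if last_ok is not None and now - last_ok < self.ttl:
            return True            # still within TTL: no helper call
        if self.verify(user, password):
            self.cache[key] = now  # restart the TTL window
            return True
        self.cache.pop(key, None)  # failed verification: forget the entry
        return False
```

Note this is why closing the browser still triggers a new credential prompt: the browser forgot the credentials, while Squid's TTL cache is untouched.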




I reviewed my authentication config and changed it.

Is this correct? We have this set up via realmd, sssd, using Kerberos 
authentication.

auth_param basic program /usr/lib64/squid/basic_pam_auth

auth_param basic children 10

auth_param basic keep_alive on


Per the docs "For Basic and Digest this parameter is ignored."



auth_param basic credentialsttl 2 hours

auth_param basic realm 



If that works for your needs it is good.
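For background, Basic helpers like basic_pam_auth speak a simple line protocol with Squid: one "username password" pair per line on stdin, one "OK" or "ERR" reply per line on stdout. A minimal sketch of that protocol follows; the `credentials_ok` check is a hypothetical stand-in for the real PAM lookup.

```python
import sys
from urllib.parse import unquote

def credentials_ok(user, password):
    # Hypothetical check; a real helper would consult PAM/LDAP/etc.
    return (user, password) == ("josh", "secret")

def handle_line(line):
    """Process one request line ("user password", rfc1738-escaped)
    and return the reply Squid expects: "OK" or "ERR"."""
    try:
        user, password = line.strip().split(" ", 1)
    except ValueError:
        return 'ERR message="bad request"'
    if credentials_ok(unquote(user), unquote(password)):
        return "OK"
    return "ERR"

def main():
    # Squid keeps the helper running and sends one lookup per line.
    for line in sys.stdin:
        print(handle_line(line), flush=True)

# A real helper would end with: main()
```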

HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Unable to access a device over port 4434

2024-10-22 Thread Amos Jeffries

On 19/10/24 08:52, Piana, Josh wrote:


On a separate note, what would cause me to need to authenticate every time I 
open a new browser? My credentials are supposed to last a week.



HTTP requires every request to be authenticated.

I assume you mean a popup appears? That would be a Browser decision.
To persist across Browser restarts, your credentials need to be added 
to its "Password Manager".




Here's my authentication config:

#
auth_param basic program /usr/lib64/squid/basic_pam_auth
auth_param negotiate children 10
auth_param negotiate keep_alive on
auth_param basic credentialsttl 1 week
acl kerb-auth proxy_auth REQUIRED
#



FYI: Configuring "auth_param negotiate" without an "auth_param negotiate 
program ..." line does nothing.
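For clarity, a consistent Basic-only version of the quoted block might look like this sketch (the realm string "proxy" is a placeholder, since the original value was not shown, and keep_alive is omitted because the docs say it is ignored for Basic):

```
auth_param basic program /usr/lib64/squid/basic_pam_auth
auth_param basic children 10
auth_param basic credentialsttl 1 week
auth_param basic realm proxy
acl kerb-auth proxy_auth REQUIRED
```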



Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Unable to access a device over port 4434

2024-10-17 Thread Amos Jeffries

Ah, okay. I see.


So what you need with Squid is a cache_peer, relaying relevant traffic 
to that device.


  # details of how Squid should connect to the device
  cache_peer 172.27.46.253 parent 4434 0 originserver \
 tls-cert=/path/to/server.ca

  # which traffic to relay there
  acl foo dstdomain foo.example.com
  cache_peer_access 172.27.46.253 allow foo
  never_direct allow foo

  # permission for clients to make requests that reach that device
  http_access allow localnet foo


Add more ACL conditions as needed to restrict the http_access line to 
the appropriate clients.



Cheers
Amos


On 18/10/24 09:40, Piana, Josh wrote:

Hey Amos,

Thanks for replying.

To clarify on the test, port 4434 is the port that was assigned to get access 
to that device, one of our firewalls.

I looked at the old Squid config that we have, and it seems this was set up in a 
way that internal networks were not being passed through the proxy. This was 
done by either an ACL or the PAC file, we think. The issue is, 
we don't exactly know how to implement the PAC file on our new Squid box.

With that said, I agree with your statement that it's difficult to troubleshoot 
an issue as opposed to going around it. Unfortunately, that's how it was done 
before and that's the direction our current management is going again. So I 
need to reconfigure the squid.conf file to ignore internal traffic, networks, 
and IPs, and only web-filter and proxy internet connections. We can't just 
copy the old config because it doesn't carry over 1:1, and it's an old version, 
from 2.5.

Is there additional testing I could do or anything I should know to make this 
happen?

-Original Message-----
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Wednesday, October 16, 2024 11:45 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Unable to access a device over port 4434

Caution: This email originated from outside of Hexcel. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.


On 16/10/24 09:39, Piana, Josh wrote:

Amos,

Thank you for getting back to me and clarifying.

I ran this command:
#wget -Y off 172.27.46.253

Response:
--2024-10-15 16:36:15--  http://172.27.46.253/
Connecting to 172.27.46.253:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://172.27.46.253/ [following]
--2024-10-15 16:36:15--  https://172.27.46.253/
Connecting to 172.27.46.253:443... connected.


The TCP here is fully working, on both ports.

The followup hang you mentioned was likely due to a mistake in your followup 
test (that extra '4' typo?).



ERROR: The certificate of '172.27.46.253' is not trusted.
ERROR: The certificate of '172.27.46.253' doesn't have a known issuer.
The certificate's owner does not match hostname '172.27.46.253'



There you go. Two problems to resolve.

First Problem;  unknown "Issuer" (aka root or intermediate CA certificate).

Please use this to find out what details need to be retrieved:
wget -v --no-proxy https://172.27.46.253/


Find the public CA certificate for the missing "Issuer".

Further tests with wget should use:
wget -v --no-proxy \
  --ca-certificate=/path/to/server.ca https://172.27.46.253/

When wget test shows trust of the server certificate working, Squid should be 
configured to use it for checking too:
  tls_outgoing_options ca=/path/to/server.ca
or
  cache_peer 172.27.46.253 443 0 originserver tls-ca=/path/to/server.ca


Second Problem;  mismatch between "172.27.46.253" and the "Owner" (or 
maybe "SubjectAltName") fields.

The wget output when troubleshooting for the first problem should give more 
hints about what this means.




So with the errors given, would that stop us from connecting to it? Typically 
with sites with trust issues or certificatio

Re: [squid-users] Unable to access a device over port 4434

2024-10-16 Thread Amos Jeffries

On 16/10/24 09:39, Piana, Josh wrote:

Amos,

Thank you for getting back to me and clarifying.

I ran this command:
#wget -Y off 172.27.46.253

Response:
--2024-10-15 16:36:15--  http://172.27.46.253/
Connecting to 172.27.46.253:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://172.27.46.253/ [following]
--2024-10-15 16:36:15--  https://172.27.46.253/
Connecting to 172.27.46.253:443... connected.


The TCP here is fully working, on both ports.

The followup hang you mentioned was likely due to a mistake in your 
followup test (that extra '4' typo?).




ERROR: The certificate of '172.27.46.253' is not trusted.
ERROR: The certificate of '172.27.46.253' doesn't have a known issuer.
The certificate's owner does not match hostname '172.27.46.253'



There you go. Two problems to resolve.

First Problem;  unknown "Issuer" (aka root or intermediate CA certificate).

Please use this to find out what details need to be retrieved:
  wget -v --no-proxy https://172.27.46.253/


Find the public CA certificate for the missing "Issuer".

Further tests with wget should use:
  wget -v --no-proxy \
--ca-certificate=/path/to/server.ca https://172.27.46.253/

When wget test shows trust of the server certificate working, Squid 
should be configured to use it for checking too:

   tls_outgoing_options ca=/path/to/server.ca
or
  cache_peer 172.27.46.253 443 0 originserver tls-ca=/path/to/server.ca


Second Problem;  mismatch between "172.27.46.253" and the "Owner" (or 
maybe "SubjectAltName") fields.


The wget output when troubleshooting for the first problem should give 
more hints about what this means.





So with the errors given, would that stop us from connecting to it? Typically 
with sites with trust issues or certification issues, you can still bypass it. 
We'd like to do the same here if applicable.


One could, but your wget command does not.


FYI, it is also a bad idea to bypass unless you really have to. It is 
especially bad to bypass an unknown number of things when trying to 
identify reasons for failure.



Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 6.10 SSL-Bump Woes

2024-10-12 Thread Amos Jeffries

On 12/10/24 12:48, Bryan Seitz wrote:
    I wanted to note that since these are BMCs they require Basic auth 
headers to return their response. I noticed that the ignore-auth option 
was removed a while ago. Is my only option to go back to Squid 3.5?




Squid supports caching of authenticated traffic now.
The authentication is also purely between Squid and the server, so is 
not relevant to the cacheability of these responses.



HTH
Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 6.10 SSL-Bump Woes

2024-10-12 Thread Amos Jeffries



Okay, I am seeing the server response is marked "private" and 7hrs old 
(25200sec).


Replacing the Cache-Control header with "max-age=1800" has no 
noticeable effect because 25200sec is already past the 1800sec limit.


What you need to do there instead is:

 1) remove the config changing Cache-Control header.

 2) modify the "refresh_pattern . " line to be this:

 refresh_pattern . 0 20% 4320 ignore-private reload-into-ims


Your Squid should then cache the server response and treat the "private" 
as if it were a "must-revalidate".



Please do that then provide a new cache.log trace for us to see how the 
server handles a revalidation check.



Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Unable to access a device over port 4434

2024-10-10 Thread Amos Jeffries

On 11/10/24 07:21, Piana, Josh wrote:

Hello Matus,

I apologize, I was unable to read any of the links in the reply because our 
environment appended the "eur02.safelinks.protection.outlook.com..." Outlook 
protection. Did you see that as well on your side? When I did click the links 
to view them, it just stated that they failed.

What I gather from what you said is that it's not likely Squid is the issue; 
when we bypass Squid it does work. FWIW, it's possible that there is some 
other network problem coming into play here on our side, though I did try to 
verify there are no blockages from the firewall, the networks, the traffic, etc.



FTR; the critical detail in what Matus wrote was that the "wget" (or 
curl if you prefer) connection test **must** be performed

 A) on the Squid machine,
 B) using the same low-privileges user account that Squid runs with,
 C) to the same server IP address Squid is trying to contact.

That ensures the TCP connection privileges are as close to identical as 
possible to what Squid is doing.


Running it from another machine and/or user account may encounter 
different firewall or routing behaviour that hides the real issue.


If that test provides a successful TCP connection *and* an HTTP response 
message, the next step is to



Also, FYI; your custom change to the timestamp has somehow lost the 
"duration" value, so I/we cannot tell if this was a probable TCP FIN/RST 
(hint of firewall problem) or a SYN+ACK timeout (hint of routing problem).
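For reference, a custom logformat that retains the elapsed-time field might look like this sketch (it mirrors the built-in "squid" format; %6tr is the transaction response time in milliseconds, which is the "duration" value mentioned above):

```
logformat timed %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt
access_log /var/log/squid/access.log timed
```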



HTH
Amos






I suppose from here I'll try to troubleshoot other things.

Alternatively, do you think I should try to create an ACL which bypasses any 
filters or rules to that network?

-Original Message-
From: squid-users  On Behalf Of 
Matus UHLAR - fantomas
Sent: Thursday, October 10, 2024 3:21 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Unable to access a device over port 4434



On 09.10.24 19:59, Piana, Josh wrote:

I'm running into an issue wherein, when using the Squid proxy, I'm unable to 
reach one of our management devices on port 4434.

I've already verified that this device is not blocking access from the proxy 
directly, and should be allowed to get to the access page.

-  When reviewing the access logs, I can see that we're running into a 
generic 503 error

-  When browsing to this page, it will attempt to load for about 30 
seconds, and then fail

-  The webpage response is a generic "The system returned: (110) Connection 
timed out"

-  When we forgo the proxy, we can access it without an issue

This device is located on a 172.0.0.0/8 internal network.

-  Other devices which do NOT use this port are accessible

-  Changing the access port is not an option (not up to me)

Access Log entry:
09/Oct/2024:15:54:21 -0400.758 10.46.49.190 TCP_MISS/503 4448 GET 
http://172.27.46.253:4434/ jpiana \
HIER_DIRECT/172.27.46.253 text/html ERR_CONNECT_FAIL/WITH_SERVER



I guess the correct URL is: http://172.27.46.253:4434/jpiana

have you tried running following directly from the squid machine?

wget -Y off http://172.27.46.253:4434/jpiana


Because ERR_CONNECT_FAIL/WITH_SERVER and "Connection timed out" both say that 
Squid was unable to open a connection to the server,

which is not a Squid issue but a network connectivity issue.
--
Matus UHLAR - fantomas, uh...@fantomas.sk ; http://www.fantomas.sk/
Warning: I wish NOT to receive e-mail advertising to this address.
Varovanie: na tuto adresu chcem NEDOSTAVAT akukolvek reklamnu postu.
LSD will make your ECS screen display 16.7 million colors 
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 6.10 SSL-Bump Woes

2024-10-10 Thread Amos Jeffries

On 11/10/24 11:08, Bryan Seitz wrote:

I removed the header mods and changed the refresh pattern to:

refresh_pattern .               15      20%     1800    override-expire 
ignore-no-cache ignore-no-store ignore-private


And I always get TCP_MISS.  Any other thoughts?


Ah, I believe it would be best to get a baseline of what Squid default 
behaviour is like in your environment. So we can identify what/how you 
need to improve it.



Firstly, FYI; this is what those controls **actually** do in current 
Squid ..


 * override-expire ... forces Squid to handle all responses as if 
they received "Cache-Control: max-age=900" (15 min) ... store, but 
revalidate 180+ seconds (20% of 15min) later.
  Result: Anything that could cache longer than 15min becomes a 
REFRESH_MISS or MISS, instead of HIT.

  Squid default: **do** cache. Revalidate
   * after the ("Date" + "CC: max-age=N") timestamp, otherwise
   * after the "Expires" timestamp, otherwise
   * after the ("Date" + 1800 minutes) timestamp.

 * ignore-no-cache ... the standardized "CC: no-cache" is badly named; 
it tells Squid what **can** be cached.
  Result: Squid will discard many stored objects and perform a MISS 
instead.
  Squid default: **do** cache "CC: no-cache" responses, revalidate on 
HIT. Log as REFRESH.


 * ignore-no-store ... force everything marked "CC: no-store" to be stored.
  Result: cache fills with non-reusable objects, leaving not much room 
for actual HIT objects.

  Squid default: store only objects which can result in more HITs.

 * ignore-private ... force everything with "CC: private" to be discarded.
  Result: same as "ignore-no-store".
  Squid default: **do** cache "CC:private" responses, revalidate on HIT.

Note that both HIT and REFRESH mean the object **was** cached.
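The ordering described above (explicit max-age, then Expires, then the LM-factor heuristic clamped by the refresh_pattern min/max) can be sketched as follows. This is a simplification under stated assumptions, not Squid's actual freshness code; all times are in seconds.

```python
def is_fresh(age, lm_age, min_s, percent, max_s,
             expires_delta=None, max_age=None):
    """Sketch of a refresh decision for one refresh_pattern rule.
    Explicit server instructions win; otherwise the heuristic uses the
    last-modified factor clamped by the rule's min/max.
    `lm_age` is (Date - Last-Modified), or None if unknown."""
    if max_age is not None:          # Cache-Control: max-age=N
        return age < max_age
    if expires_delta is not None:    # Expires - Date
        return age < expires_delta
    if age <= min_s:                 # refresh_pattern minimum
        return True
    if age > max_s:                  # refresh_pattern maximum
        return False
    if lm_age is not None:           # LM-factor heuristic (percent)
        return age < lm_age * (percent / 100.0)
    return False
```

With the values from the thread's "refresh_pattern . 15 20% 1800" (900s minimum), an object served 800 seconds ago is still fresh, while anything past its explicit max-age triggers revalidation regardless of the rule.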


You said that the access.log now contains MISS. Would that be just 
"MISS" or "REFRESH" + "MISS" (actually a HIT, but a new object was given 
by the server and replaced the pre-stored object).



Can you show a pair of request headers from the client, with matching 
response from the server?  You can use "debug_options 11,2" in recent 
Squid versions to get a cache.log trace of the HTTP transactions.


That might help us spot something more specific. The config change makes 
the earlier given ones obsolete.



HTH
Amos


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Wpad

2024-10-02 Thread Amos Jeffries

On 2/10/24 05:05, Jonathan Lee wrote:

Hello fellow squid users,

Can you please help? I am attempting to run WPAD on the same machine as Squid, 
but ports 80 and 443 are blocked. I have a URL redirect from 
192.168.1.1/wpad.dat to https://192.168.1.1:8080/wpad.dat, done with SquidGuard; 
however, you must disable bypass for 192.168.1.1 in Squid. Squid resides on 
192.168.1.1:3128.

It works on the iMac for auto config proxy I can access the url file within the 
redirect.

My question is how can this be managed directly with squid custom config ?? Is 
there a way to have squid manage a simple wpad?



  acl wpad urlpath_regex ^/wpad\.dat$
  deny_info 200:/etc/squid/wpad.dat wpad
  http_access deny wpad

  reply_header_access Content-Type deny wpad
  reply_header_replace Content-Type application/x-ns-proxy-autoconfig
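As for the wpad.dat content itself, PAC files are JavaScript. A minimal sketch for the setup described above (proxy on 192.168.1.1:3128, local subnet going direct) might be the following; real PAC files run in the browser, which supplies helpers like isInNet(), so here prefix matching is done by hand to keep the sketch self-contained.

```javascript
// Minimal wpad.dat sketch: plain hostnames and the 192.168.1.x
// subnet bypass the proxy, everything else goes via Squid.
function FindProxyForURL(url, host) {
  if (host.indexOf(".") === -1 || host.lastIndexOf("192.168.1.", 0) === 0) {
    return "DIRECT";
  }
  return "PROXY 192.168.1.1:3128";
}
```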



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] [External Sender] Re: Squid service not restarting properly

2024-09-30 Thread Amos Jeffries

To answer your question specifically..

On 27/09/24 23:50, Vivek Saurabh (CONT) wrote:

Hi NgTech,

I can restart the service with user and group being root. However, when 
I try to start it with user apdpr01 and group root, it 
times out without giving any errors. Can you please advise on this?


Squid low-level processes MUST NOT be given root access.

Your old user/group were valid and should still be working the same as 
with any older Squid version.





Also, what the ./configure structure should be to compile the binary?




  ./configure --with-default-user=apdpr01   [1]

With the above build you can remove any "cache_effective_user apdpr01" 
lines from squid.conf.


group does not matter (much), so long as that user (apdpr01) is a member 
of the group according to the OS.


The OS will auto-assign the default group of user (apdpr01) to Squid 
processes, **unless** your squid.conf indicates a specific one such as:

  cache_effective_group apache


What you should expect to see is a single "squid" process running as 
root (this is the master one controlled by "service squid ...").

Its child processes should all have your custom user/group.


Things to check:

 * Any important looking messages from:
  squid -k parse


 * Does running squid manually work properly?
 sudo squid

  - If not, what does cache.log say about why it halted?


 * Do you have a "squid.service" installed properly with systemd ?

  - does it match the one we publish for Squid v6?
 



  - are the file paths listed there correct for where your Squid got 
installed to?




[1] other options may be needed; I do not know which specifically right now.

HTH
Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Unable to access internal resources via hostname

2024-09-16 Thread Amos Jeffries
The first two http_access rules make all subsequent http_access rules 
irrelevant/unused because these two rules match all traffic:


http_access deny !localnet
http_access allow localnet


I did not look further, but the above combination is a sign that you interpret 
http_access rules differently than Squid does. Please make sure you understand 
why the two rules above make all subsequent http_access rules irrelevant/unused 
before adjusting your configuration further. Ask questions as needed.


If the primary problem persists after addressing this configuration problem, then my 
earlier (2024-09-04) recommendation stands: Please restate the primary problem (e.g., 
detail what "don't have browsing"
means in terms of the test transaction outcome) and share debugging log of that 
test transaction again.


HTH,

Alex.



So there's something wrong with either the order of rules in squid.conf, or 
I'm missing some "allow" rule.

Please see below for my current config:

##
# Authentication
##


auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth
-k /etc/squid/HTTP.keytab -s
HTTP/arcgate2.ad.arc-tech@ad.arc-tech.com
auth_param negotiate children 10
auth_param negotiate keep_alive on
acl kerb-auth proxy_auth REQUIRED

##
# Access control - shared/common ACL definitions
##

#------------------------------------------------------------
# networks and hosts (by name or IP address)

acl src_self src 127.0.0.0/8
acl src_self src 10.46.11.69

acl to_localhost dst 10.46.11.69

acl localnet src 10.0.0.0/8
acl localnet src 172.0.0.0/8

acl local_dst_addr dst 10.0.0.0/8
acl local_dst_addr dst 172.0.0.0/8

#------------------------------------------------------------
# protocols (URL schemes)

acl proto_FTP proto FTP
acl proto_HTTP proto HTTP

#------------------------------------------------------------
# TCP port numbers

# TCP ports for ordinary HTTP
acl http_ports port 80 # standard HTTP
acl http_ports port 81 # common alternative
acl http_ports port 8001   # epson.com support sub-site
acl http_ports port 8080   # common alternative
acl http_ports port 88 8000    # ad-hoc services
acl http_ports port 1080   # SOCKS frontend to HTTP service
acl http_ports port 21-22  # http:// frontend to FTP service
acl http_ports port 443# https:// URLs

# TCP ports for HTTP-over-SSL
acl Ssl_ports port 443   # standard HTTPS
acl Ssl_ports port 9571 # lexmark.com
acl Ssl_ports port 22 # SSH

# TCP ports for plain FTP command channel
acl ftp_ports port 21

#------------------------------------------------------------
# HTTP methods (and pseudo-methods)

acl CONNECT method CONNECT

##
# Access control - general proxy
##


# define and allow new ACL for "Safe_ports"
acl Safe_Ports any-of http_ports Ssl_ports ftp_ports

#------------------------------------------------------------
# basic deny rules

# deny anything not from the LAN
http_access deny !localnet

# allow localnet users
http_access allow localnet

# blocks self to self connections
http_access deny to_localhost

# deny unauthorized access and DoS attacks
http_access deny !Safe_Ports
http_access deny CONNECT !Ssl_ports

# allow authenticated clients after all other ACLs
http_access allow kerb-auth

# deny any request we missed in the above
http_access deny all

# if no other ACL applies, allow
http_reply_access allow all

This is it. I commented out all other ACLs for allow/deny we had in place for 
our custom rules. Still unable to browse locally via hostname; IP works fine.

-Original Message-
From: squid-users  On
Behalf Of Amos Jeffries
Sent: Saturday, September 7, 2024 10:51 AM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Unable to access internal resources via
hostname



On 6/09/24 03:56, Piana, Josh wrote:

Hello Amos,

While the comments did say that it was just the 10.46.11.0 range, I don't think there's 
any other ACL forcing that. I tried adding the two internal sites that are being 
blocked by their IP, restarted Squid, and tested. Still being blocked. You are right 
though, both of those web addresses are o

[squid-users] RFC: Removal of ESI Support from Squid

2024-09-07 Thread Amos Jeffries

Hi all,

The ESI (Edge Side Includes) feature of Squid has a growing number of 
unfixed bugs, more than a few are turning into security issues.


Also, the current Squid developers do not have spare brain cycles to 
maintain everything and v7 is seeing a lot more effort to prune away old 
and unused mechanisms in Squid.



As such this is a callout to see how much use there is for this feature.


  Do you need ESI in Squid?  Yes or No.

   Speak now, or face regrets at upgrade time.



Thank You
Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Unable to access internal resources via hostname

2024-09-07 Thread Amos Jeffries

On 6/09/24 03:56, Piana, Josh wrote:

Hello Amos,

While the comments did say that it was just the 10.46.11.0 range, I don't think there's 
any other ACL forcing that. I tried adding the two internal sites that are being 
blocked by their IP, restarted Squid, and tested. Still being blocked. You are right 
though, both of those web addresses are on a different IP scheme. Ideally we want 
anything on 172.0.0.0 to be allowed, and 10.96.0.0. The other question I have is, even if 
we specify those sites' IPs as "allowed", shouldn't we be able to browse to them 
by their hostname as well?



That depends on your configuration policy. The default squid.conf only 
checks that clients are from the LAN and are not doing any nasty protocol 
trick attacks.





Currently, those internal sites ARE reachable. But only if we use IP. While 
this doesn't bother me, personally, the rest of our users would like to keep 
browsing via hostname as that's what they're used to and what many have 
shortcuts for.



Currently I see the ACL "local_dst_dom" is commented out (disabled).

I guess you also have not listed the hostname or domain in the file 
loaded by the "authless_dst" ACL.






In regards to the results of /etc/resolv.conf, see below:
search ad.arc-tech.com
nameserver 10.46.11.67



Okay. Then the "ndots" default of 1 will be applied. So for these HTTP 
messages:


  CONNECT hexp:443 HTTP/1.1
  Host: hexp
  ...


Will be interpreted by Squid as URL:

 https://hexp/


The "dstdomain" ACL will try to match "hexp" exactly.

The "dst" ACL will try to match IPs of "hexp.ad.arc-tech.com"
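The search-list behaviour driving this can be sketched as follows (a simplification of typical resolver behaviour; real resolvers also honour `options ndots:n` and try multiple search domains in order):

```python
def candidate_fqdns(name, search_domains, ndots=1):
    """Sketch of resolv.conf search-list behaviour: a name with fewer
    than `ndots` dots is tried with each search domain appended first;
    otherwise it is tried as-is first."""
    as_is = [name]
    searched = [f"{name}.{d}" for d in search_domains]
    if name.count(".") < ndots:
        return searched + as_is
    return as_is + searched
```

So with "search ad.arc-tech.com", a dotless hostname like "hexp" is looked up as "hexp.ad.arc-tech.com" first, which is why the "dst" ACL can match it while a "dstdomain hexp" entry does not.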




There must be a better way to just allow internal-to-internal traffic without 
needing to authenticate through the web proxy. The old config had it, but that 
was part of the issue. We have no idea how it was working; it didn't make 
sense at all and it was a bit outdated: version 2.5 as opposed to our current 
5.5.

I'm happy to post out config again here, as it's changed a bit and I have 
cleaned it up.

# squid.conf - Squid web cache configuration

##
# General
##

# 2020MAR23 running out with just 1024 as we switch to Hexcel.com OMA
max_filedesc 4096



Unrelated to your problem, but FYI this should probably be much larger 
(i.e. 64K minimum, up to 100x the expected user count) for a production proxy.





##
# Logging
##

# this makes the logs readable to humans
logformat custom %tl.%03tu %>a %Ss/%03>Hs %

This repeat of "custom" will cause issues.

IIRC this was added due to a misunderstanding of Alex's instructions.
What he meant was to **add** "%err_code/%err_detail" to your existing 
"custom" format.


Like this:

logformat custom %tl.%03tu %>a %Ss/%03>Hs %
# Red Hat-ish log names
cache_log /var/log/squid/cache.log
cache_access_log /var/log/squid/access.log


This setting opens a third logger writing to access.log, causing more 
issues.


Remove this "cache_access_log" line.




# store_log is only useful for debugging
cache_store_log none



FYI, off by default on current Squid. You can probably erase this 
setting entirely now.




##
# Network - General/misc
##

# our HTTP proxy port
http_port 10.46.11.69:8080
# loopback management
http_port 127.0.0.1:3128



FWIW, you have denied access to "dst 127.0.0.0/8". So traffic to this 
port will be rejected.





# disable ICP, port is typically 3130
icp_port 0


FYI; disabled by default in modern Squid. You can remove "icp_port".



# if set to "on", Squid will append your client's IP address in the HTTP 
requests it forwards
forwarded_for off


"off" will send the text "unknown".

It is better to use "transparent" (pass-thru unchanged) or "delete" 
(erase if existing).
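In squid.conf terms, the alternatives mentioned here are:

```
# send "X-Forwarded-For: unknown" (what "off" does)
#forwarded_for off

# pass any existing X-Forwarded-For header through unchanged
forwarded_for transparent

# or remove the header entirely
#forwarded_for delete
```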




##
# Authentication
##

auth_param negotiate program /usr/lib64/squid/negotiate_kerberos_auth -k 
/etc/squid/HTTP.keytab -s HTTP/arcgate2.ad.arc-tech@ad.arc-tech.com
auth_param negotiate children 10
auth_param negotiate keep_alive on
acl kerb-auth proxy_auth REQUIRED

##
# Access control - shared/common ACL definitions
##

# 
# networks and hosts (by name or IP address)

# acl all src all

acl src_self src 127.0.0.0/8
acl src_self src 10.46.11.69

acl dst_self dst 127.0.0.0/8
acl dst_self dst 10.46.11.69



Re: [squid-users] Unable to access internal resources via hostname

2024-09-04 Thread Amos Jeffries

Hi Josh,


There are two things I can see in the original message:

1) trusted *clients* (acl authless_src src ...) are documented as being 
limited to the 10.46.11.0/24 range.

 The client doing the testing is outside it, at the 10.46.49.190 IP address.

 ==> Please check your authless_src list is correct.


2) The CONNECT request has zero dots in the "domain" name. Which means 
the /etc/resolv.conf settings other than nameserver apply to the 
hostname during lookup.


 ==> Please supply your /etc/resolv.conf contents.


HTH
Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid 6.10 - Debian 12 undefined reference to `EVP_MD_type' in ssl-crtd

2024-08-21 Thread Amos Jeffries


Might be worth checking if you still need to custom build.
Debian now provides a "squid-openssl" package.

Amos


On 22/08/24 01:37, David Touzeau wrote:

Configure:
./configure --prefix=/usr --build=x86_64-linux-gnu --includedir=/include 
--mandir=/share/man --infodir=/share/info --localstatedir=/var -- 
libexecdir=/lib/squid3 --disable-maintainer-mode --disable-dependency- 
tracking --datadir=/usr/share/squid3 --sysconfdir=/etc/squid3 --enable- 
gnuregex --enable-removal-policy=heap --enable-follow-x-forwarded-for -- 
disable-cache-digests --enable-http-violations --enable-removal- 
policies=lru,heap --enable-arp-acl --enable-truncate --with-large-files 
--with-pthreads --enable-esi --enable-storeio=aufs,diskd,ufs,rock -- 
enable-x-accelerator-vary --with-dl --enable-linux-netfilter --with- 
netfilter-conntrack --enable-wccpv2 --enable-eui --enable-auth --enable- 
auth-basic --enable-snmp --enable-icmp --enable-auth-digest --enable- 
log-daemon-helpers --enable-url-rewrite-helpers --enable-auth-ntlm -- 
with-default-user=squid --enable-icap-client --disable-cache-digests -- 
enable-poll --enable-epoll --enable-async-io=128 --enable-zph-qos -- 
enable-delay-pools --enable-http-violations --enable-url-maps --enable- 
ecap --enable-ssl --with-openssl --enable-ssl-crtd --enable-xmalloc- 
statistics --enable-ident-lookups --with-filedescriptors=163840 --with- 
aufs-threads=128 --disable-arch-native --with-logdir=/var/log/squid -- 
with-pidfile=/var/run/squid/squid.pid --with-swapdir=/var/cache/squid



Hi, after the make install got

/bin/bash ../../../../libtool  --tag=CXX   --mode=link g++ -std=c++17 - 
Wall -Wextra -Wimplicit-fallthrough=5 -Wpointer-arith -Wwrite-strings - 
Wcomments -Wshadow -Wmissing-declarations -Woverloaded-virtual -Werror - 
pipe -D_REENTRANT -m64 -I/usr/include/p11-kit-1    -g -O2  -m64   -g -o 
security_file_certgen certificate_db.o 
security_file_certgen.o ../../../../src/ssl/libsslutil.la ../../../../ 
src/sbuf/libsbuf.la ../../../../src/debug/libdebug.la ../../../../src/ 
error/liberror.la ../../../../src/comm/libminimal.la ../../../../src/ 
mem/libminimal.la ../../../../src/base/libbase.la ../../../../src/time/ 
libtime.la -lssl -lcrypto   -lgnutls   ../../../../compat/libcompatsquid.la
libtool: link: g++ -std=c++17 -Wall -Wextra -Wimplicit-fallthrough=5 - 
Wpointer-arith -Wwrite-strings -Wcomments -Wshadow -Wmissing- 
declarations -Woverloaded-virtual -Werror -pipe -D_REENTRANT -m64 -I/ 
usr/include/p11-kit-1 -g -O2 -m64 -g -o security_file_certgen 
certificate_db.o security_file_certgen.o ../../../../src/ssl/.libs/ 
libsslutil.a ../../../../src/sbuf/.libs/libsbuf.a ../../../../src/ 
debug/.libs/libdebug.a ../../../../src/error/.libs/ 
liberror.a ../../../../src/comm/.libs/libminimal.a ../../../../src/ 
mem/.libs/libminimal.a ../../../../src/base/.libs/libbase.a ../../../../ 
src/time/.libs/libtime.a -lssl -lcrypto -lgnutls ../../../../ 
compat/.libs/libcompatsquid.a
/usr/bin/ld: ../../../../src/ssl/.libs/libsslutil.a(crtd_message.o): in 
function `Ssl::CrtdMessage::composeRequest(Ssl::CertificateProperties 
const&)':
/root/squid-6.10/src/ssl/crtd_message.cc:248: undefined reference to 
`EVP_MD_type'


How can I fix it?

--
David Touzeau - Artica Tech France


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid with PV6 Tunnel Broker

2024-07-31 Thread Amos Jeffries

On 31/07/24 18:05, Jonathan Lee wrote:

The error it shows when I activate IPv6 only mode not dual stack is



There is no "IPv6 only mode" in Squid. What do you mean?



Error: no forward proxy ports configured



In the config you showed earlier all of your IPv6 listening ports use 
the "intercept" flag.


Please try with this much simplified configuration for listening ports:

 # Receive forward-proxy and cache manager traffic
 http_port 3128 ssl-bump \
generate-host-certificates=on dynamic_cert_mem_cache_size=20MB \
tls-cert=/usr/local/etc/squid/serverkey.pem \
tls-dh=prime256v1:/etc/dh-parameters.2048 \
options=NO_SSLv3

 # Receive intercepted port 80 traffic
 http_port 3127 intercept ssl-bump \
generate-host-certificates=on dynamic_cert_mem_cache_size=20MB \
tls-cert=/usr/local/etc/squid/serverkey.pem \
tls-dh=prime256v1:/etc/dh-parameters.2048 \
options=NO_SSLv3

 # Receive intercepted port 443 traffic
 https_port 3129 intercept ssl-bump \
generate-host-certificates=on dynamic_cert_mem_cache_size=20MB \
tls-cert=/usr/local/etc/squid/serverkey.pem \
tls-dh=prime256v1:/etc/dh-parameters.2048 \
options=NO_SSLv3


There are other changes you will need to make the SSL-Bump and access 
controls fully work. But this is all you should need to at least get 
Squid accepting TCP and TLS connections.


The two "intercept" port numbers above are arbitrary. Just make sure 
that your NAT rules are passing port 80 and port 443 to the right one.

 IIRC, your IPv6 NAT rule may need changing.
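
As a sketch (pfSense uses pf; the interface name below is an assumption, and the ports match the example listeners above):

```
# pf rdr rules: redirect intercepted port 80/443 to Squid's intercept ports
rdr on igb1 inet  proto tcp from any to any port 80  -> 127.0.0.1 port 3127
rdr on igb1 inet6 proto tcp from any to any port 80  -> ::1 port 3127
rdr on igb1 inet  proto tcp from any to any port 443 -> 127.0.0.1 port 3129
rdr on igb1 inet6 proto tcp from any to any port 443 -> ::1 port 3129
```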


Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 5.7 - HOWTO Transparent SSL-Bump

2024-07-30 Thread Amos Jeffries



Debian/12 (aka "Bookworm") provides the package "squid-openssl" with the 
SSL-Bump feature enabled. It is a drop-in replacement for the "squid" 
package.



Cheers
Amos



On 31/07/24 03:11, John Mok wrote:

Hi Nishant,

Yes, I did rebuild the package with

--with-openssl
--enable-ssl-crtd

but squid service failed to start with http_port configured with 
intercept and ssl-bump modes at the same time. Any idea ?


On Tue, Jul 30, 2024, 21:12 Nishant Sharma wrote:

Hi John,

On 30/07/24 18:05, John Mok wrote:
 > Hi all,
 >
 > I am using squid 5.7 on Debian Bookworm, and would like to setup a
 > transparent + SSL bump proxy.
 >
 > Anyone can point to the right direction ?

Squid on Debian and Ubuntu do not have following options:

--enable-ssl
--enable-ssl-crtd

You may want to build one from source for yourself.

Regards,
Nishant
___
squid-users mailing list
squid-users@lists.squid-cache.org

https://lists.squid-cache.org/listinfo/squid-users



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid with PV6 Tunnel Broker

2024-07-30 Thread Amos Jeffries

On 30/07/24 08:47, Jonathan Lee wrote:
I did not know that I had the option set to disable Squid ICMP pinger 


The pinger helper is not related.


What I meant was that you need to ensure ICMPv6 protocol is enabled and 
working on your network. That is usually a firewall issue.


If it is blocked, the IPv6 packet fragmentation mechanism (required for 
tunnels) will not work and result in behaviour like you are seeing.

Similarly if MTU is set too large for the tunnel maximum packet size.


I enabled ping helper I show a good socket for my IPV6 interface address 
but every IPV6 only device shows NONE_NONE/409 on the Squid Access Table




409 generated by Squid is a failed security check.





I get the same result. How would I change MTU on Squid isn’t that set to 
auto discover with the HTTP port directive?


Yes, that is done using ICMPv6, and it is the primary reason why Squid 
needs that protocol working.




I also forgot to mention the IPV6 only device works when I have it set 
to not use the proxy.


The list of ports you show below has Squid accepting direct (forward 
proxy) connections with an IPv4-only port 3128.



I really do recommend using the port-only configuration style. At least 
until you get the proxy working properly. Squid sockets are dual-stack 
and accept both protocols by default. That will help you sort out the 
scope of what each port number is doing and avoid copy-paste mistakes 
like this.
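
A port-only sketch of the listeners (the ssl-bump/TLS options are elided with "..." and stay as in your config; only the IP literals are dropped):

```
# dual-stack by default when no IP literal is given
http_port  3128 ssl-bump ...
http_port  3127 intercept ssl-bump ...
https_port 3129 intercept ssl-bump ...
```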





Thanks again for the reply. It does work from IPV4 to IPV6 requests but 
never for IPV6 to IPV6 addresses or pure IPV6. I can disable the proxy 
and the system works for IPV6 to IPV6 only.






Here is my configuration I am testing..

# This file is automatically generated by pfSense
# Do not edit manually !

http_port 192.168.1.1:3128 ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
tls-cafile=/usr/local/share/certs/ca-root-nss.crt 
capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 options=NO_SSLv3

http_port 127.0.0.1:3128 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
tls-cafile=/usr/local/share/certs/ca-root-nss.crt 
capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 options=NO_SSLv3

https_port 127.0.0.1:3129 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
tls-cafile=/usr/local/share/certs/ca-root-nss.crt 
capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 options=NO_SSLv3

http_port [REDACTED:192::]:3128 intercept ssl-bump 
generate-host-certificates=on dynamic_cert_mem_cache_size=20MB 
cert=/usr/local/etc/squid/serverkey.pem 
tls-cafile=/usr/local/share/certs/ca-root-nss.crt 
capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 options=NO_SSLv3

https_port [REDACTED:192::]:3129 intercept ssl-bump 
generate-host-certificates=on dynamic_cert_mem_cache_size=20MB 
cert=/usr/local/etc/squid/serverkey.pem 
tls-cafile=/usr/local/share/certs/ca-root-nss.crt 
capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 options=NO_SSLv3





tls_outgoing_options cafile=/usr/local/share/certs/ca-root-nss.crt
tls_outgoing_options capath=/usr/local/share/certs/
tls_outgoing_options options=NO_SSLv3
tls_outgoing_options 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
sslcrtd_children 10





# Allow local network(s) on interface(s)
acl localnet src  192.168.1.0/27 REDACTED:192::/64




acl block_hours time 00:30-05:00
ssl_bump terminate all block_hours
http_access deny all block_hours
acl getmethod method GET




tls_outgoing_options options

Re: [squid-users] Squid with PV6 Tunnel Broker

2024-07-29 Thread Amos Jeffries

On 27/07/24 10:10, Jonathan Lee wrote:

Hello fellow squid users can you please help me??

I know I have good IPV6 internet if I use the IPV4 proxy address, and 
the IPv6 test sites pass 10 out of 10. If I make the client IPV6 only 
and have the rules set to use the proxy with the proxy IPV6 address for 
the proxy I get no internet.


I am using a IPV6 tunnel broker in pfsense. When I configure my client 
to IPv6 only it can access all IPv6 sites. As soon as I use the proxy 
address in IPv6 of Squid squid gives me the following errors...


Check that ICMPv6 is working. It is mandatory when tunnels are used.

Also, check the MSS and MTU values.


Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid "make check" error

2024-07-22 Thread Amos Jeffries

On 20/07/24 03:19, Alex Rousskov wrote:

On 2024-07-19 09:20, Rafał Stanilewicz wrote:

Thank you. It worked.


Glad to hear that!



Seconded.



I incorrectly assumed all dependencies would be captured by aptitude 
build-dep squid and ./configure.




AFAIK that is a correct assumption for Debian based packages. The only 
additional dependencies needed should be for features *not* enabled in 
the package.


This failure is quite a surprise to me. That said, I test the Debian 
"squid" package with "apt" not "aptitude" and there are some unexpected 
algorithm differences at times.




Should "squid" package build dependencies accommodate "make check"?



The Debian/Ubuntu packages should since the package creation runs "make 
check". The "apt build-dep squid" should pull in everything necessary to 
build the relevant squid_*.deb package (except some few essential OS 
packages which should exist everywhere).



libcppunit-dev has been listed as a squid dependency for many years. So 
I would not be surprised if some ancient Ubuntu (circa 2010 or such) 
showed this behaviour, but certainly not the one you have.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Version squid-5.7-150400.3.6.1.x86_64 -- Squid is crashing continusly

2024-07-18 Thread Amos Jeffries

On 19/07/24 04:23, M, Anitha (CSS) wrote:

Hi Team,

We are seeing squid is continuously crashing with signal 6.


"signal 6" in system log means there should be an "assertion" error 
message in the cache.log. Please look for that.



Any known 
issues with this version?


Many. It is not clear which (or another) is happening for you.

Please be aware that Squid-5 is no longer supported and has quite a 
number of security issues that have been fixed in later releases.

Current Squid release is v6.10. If you are able to upgrade, please do so.


Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Prefer or force ipv6 usage on dual stack interface

2024-07-16 Thread Amos Jeffries

On 17/07/24 01:31, Rasmus Horndrup wrote:

Hi,
On a dual stack network interface I’m interested in using squid as a ipv6 only 
forward proxy.
My general understanding was that squid will prefer to use ipv6 whenever 
available, but I’m having issues with squid seemingly preferring ipv4 in some 
cases.

I have two examples, where it proceeds using IPv6 for the first and IPv4 for 
the second.

From the looks of it, they both successfully receive A and AAAA records, but 
how can I basically force Squid to use IPv6?



Squid obeys the IPv6 specifications, which still require IPv4 transition 
capabilities regardless of whether you are on an IPv4-only, IPv6-only, or 
Dual-Stack network.


The best way to make Squid IPv6-only is to make the machine it is 
running on IPv6-only.


Alternatively, set up a firewall rule matching the proxy by PID/UID to 
reject TCPv4 connections with an ICMP "(3) Destination Unreachable" packet.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Rewriting HTTP to HTTPS for generic package proxy

2024-07-15 Thread Amos Jeffries

On 12/07/24 10:10, Alex Rousskov wrote:

On 2024-07-11 17:03, Amos Jeffries wrote:

On 11/07/24 00:49, Alex Rousskov wrote:

On 2024-07-09 18:25, Fiehe, Christoph wrote:

I hope that somebody has an idea, what I am doing wrong. 


AFAICT from the debugging log, it is your parent proxy that returns 
an ERR_SECURE_CONNECT_FAIL error page in response to a seemingly 
valid "HEAD https://..." request. Can you ask their admin to 
investigate? You may also recommend that they upgrade from Squid v4, 
which has many known security vulnerabilities.


If parent is uncooperative, you can try to reproduce the problem by 
temporary installing your own parent Squid instance and configuring 
your child Squid to use that instead.


HTH,

Alex.
P.S. Unlike Amos, I do not see serious conceptual problems with 
rewriting request target scheme (as a temporary compatibility 
measure). It may not always work, for various reasons, but it does 
not necessarily make things worse (and may make things better).




To which I refer you to:


None of the weaknesses below are applicable to request target scheme 
rewriting (assuming both proxies in question are implemented/configured 
correctly, of course). Specific non-applicability reasons are given 
below for each weakness URL:



https://cwe.mitre.org/data/definitions/311.html


The above "The product does not encrypt sensitive or critical 
information before storage or transmission" case is not applicable: All 
connections can be encrypted as needed after the scheme rewrite.




Reminder, OP requirement is to cache the responses and send un-encrypted.

"can be" is not a safety measure.





https://cwe.mitre.org/data/definitions/312.html


The above "The product stores sensitive information in cleartext within 
a resource that might be accessible to another control sphere." case is 
not applicable: Squid does not store information in such an accessible 
resource.




Reminder, Squid does cache both https:// and http:// traffic.





https://cwe.mitre.org/data/definitions/319.html


The above "The product transmits sensitive or security-critical data in 
cleartext in a communication channel that can be sniffed by unauthorized 
actors." case is not applicable: All connections can be encrypted as 
needed after the scheme rewrite.


The relevant sensitive data is in the Responses, which are absolutely 
transmitted un-encrypted per the OP requirements.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squidclient -h 127.0.0.1 -p 3128 mgr:info shows access denined

2024-07-12 Thread Amos Jeffries

On 13/07/24 04:16, Jonathan Lee wrote:

tested with removal of IP and port failed If I leave port I get this

2024/07/12 09:15:17| Processing: http_port :3128 intercept


No  ":" before thr port number.


Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] cachemgr.cgi isn't mgr:info ?

2024-07-12 Thread Amos Jeffries

Per your subject question "cachemgr.cgi isn't mgr:info ?"

Correct.

 cachemgr.cgi is an old tool for accessing multiple proxies' manager reports.

 "mgr:info" is a command-line parameter for the squidclient tool to 
access a proxy's "info" manager report.
  It is also common shorthand in the Squid community for the 
"info" report, regardless of how it is accessed.



Responses to your other queries inline...


On 13/07/24 03:18, Brian Cook wrote:

Picking up squid again and trying to look at what's going on inside..

Squid on OpenWRT.. wanted to look at mgr:info for file desc, etc..

trying to access the cachemgr.cgi.. as this looks like the new squidclient

Wasn't working etc..



FYI, both squidclient and cachemgr.cgi are deprecated. It depends on the 
tool version vs Squid version whether you will encounter an issue.


Current recommendation for current supported Squid is to use a tool like 
this one: .
(I may be a bit biased there as its author, but also not yet aware of 
any others to reference.)




..
debug_options ALL,2
cache_log /tmp/squid_cache.log
..

--
2024/07/12 10:57:08.388| 33,2| client_side.cc(1646) 
clientProcessRequest: internal URL found: http://10.20.245.10:3128 
2024/07/12 10:57:08.388| 85,2| client_side_request.cc(715) 
clientAccessCheckDone: The request GET 
http://10.20.245.10:3128/squid-internal-mgr/menu is DENIED; last ACL 
checked: Safe_ports

# EOF
-

Q: So I added 3128 to the Safe_ports.. and then it works..

image.png

Q: no password set for cachemgr_passwd.. cachemgr.cgi just open to the 
world? unsecured?




Apparently so in your setup. Unless your Browser etc did some implicit 
authentication that you overlooked.




and is Process Filedescriptor Allocation the closest thing?



That report is a list of what each filedescriptor is currently being 
used for.



I (think) I remember something like max, in use, and something else.. 
being in mgr:info




Yes.



fwiw openwrt starts squid with like 4096 max files..

needed something like this:

..
         procd_set_param file $CONFIGFILE
         procd_set_param limits nofile="262140 262140"
         procd_set_param respawn
..

to set the hard and soft limits..

any better practice than adding 3128 to the 'Safe_ports'? (can't keep 
that in place..)



Ports 1025 to 65535 should already be listed as "Safe_ports". That ACL 
is supposed to be used to pinhole a denial of the known **non-safe** ports.
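
For reference, the stock squid.conf defines that ACL and its deny rule roughly like this:

```
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 591         # filemaker
acl Safe_ports port 777         # multiling http
http_access deny !Safe_ports
```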





and setting a cachemgr_passwd would be the only thing to secure the cgi?



No.

 The CGI tool is restricted by any configuration of the web server 
running it. And,


 Then the tool's requests to Squid are restricted by your http_access 
rules for what requests can be made of the proxy. And,


 Then the access to individual manager reports is controlled by 
cachemgr_passwd directive in Squid.
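
For example (the password string here is a placeholder):

```
# require a password for the "info" and "menu" reports
cachemgr_passwd s3cret info menu
# disable dangerous actions outright
cachemgr_passwd disable shutdown
```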



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP_MISS_ABORTED/502

2024-07-12 Thread Amos Jeffries



On 13/07/24 01:52, Alex Rousskov wrote:

On 2024-07-12 08:06, Ben Toms wrote:
Seems that my issue is similar to - 
https://serverfault.com/questions/1104330/squid-cache-items-behind-basic-authentication 


You are facing up to two problems:

1. Some authenticated responses are not cachable by Squid. Please share 
HTTP headers of the response in question.




FYI, those can be obtained by configuring squid.conf with

  debug_options 11,2


Cheers
Amos


2. TCP_MISS_ABORTED/502 errors may delete a being-cached response. These 
can be bogus errors (essentially Squid logging bugs) or real ones (e.g., 
due to communication bugs, misconfiguration, or compatibility problems). 
I recommend adding %err_code/%err_detail to your logformat and sharing 
the corresponding access.log lines (obfuscated as needed).
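
A sketch of that change, extending the standard "squid" native log format with the two error codes Alex mentions:

```
logformat squid_err %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt %err_code/%err_detail
access_log /var/log/squid/access.log squid_err
```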


Sharing (privately if needed) a pointer to compressed ALL,9 cache.log 
while reproducing the issue using a single transaction may help us 
resolve all the unknowns:


https://wiki.squid-cache.org/SquidFaq/BugReporting#debugging-a-single-transaction


HTH,

Alex.





___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squidclient -h 127.0.0.1 -p 3128 mgr:info shows access denined

2024-07-12 Thread Amos Jeffries

On 12/07/24 11:50, Jonathan Lee wrote:

I recommend changing your main port to this:

  http_port 3128 ssl-bump 


This is set to this when it processes

http_port 192.168.1.1:3128 ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!SHA1:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 
options=NO_SSLv3,NO_TLSv1,SINGLE_DH_USE,SINGLE_ECDH_USE



The key thing here was the removal of the IP address. So that Squid 
received both the 192.168.*.* and the 127.0.0.* traffic without needing 
separate http_port lines.







and receiving the intercepted traffic on:

 http_port 3129 intercept ssl-bump …


Do you mean https?



Sorry. I missed that you had an https_port using 3129 already.




https_port 127.0.0.1:3129 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!SHA1:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 
options=NO_SSLv3,NO_TLSv1,SINGLE_DH_USE,SINGLE_ECDH_USE

Https uses that port 3129

What should I adapt

http_port
https_port?



Both.

FYI, there are two issues:

1) listening on IP 127.0.0.1. Inside the OS there are different devices 
for localhost (lo) and WAN (eg. eth0). NAT is problematic already 
without introducing any tricky behaviours from bridging those "private" 
(lo) and "public" WAN devices.


The simplest solution is just not to put any IP address on the 
squid.conf *port line(s) with intercept options. The OS will select one 
appropriate for whatever device and tell Squid on a per-connection basis.


The more difficult way is to put one of the machines "global" (WAN or 
LAN) IP addresses. In your case 192.168.1.1. With most connections being 
from the LAN that minimizes the possible problems.



2) listening on a well-known proxy port 3128 for intercepted traffic.

There is malware in existence that scans for at least port 3128 (likely 
1080, 8080 etc common proxy ports) being used by proxies like yours and 
abuses them. As a result at least one popular antivirus network scanner 
(from Trend) does the same scan to detect insecure proxies.


The worst thing about this situation is that the NAT very effectively 
hides the malware. So it is extremely hard to see whether it is 
happening to you.



I am not sure what UI you are using to show those firewall rules in your 
other email. However the one that had ALLOW for the port range 3128-3129 
worries me. AFAIK that should only be for 3128 and a separate rule 
somewhere else to drop the intercepted port 3129 traffic pre-NAT.



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squidclient -h 127.0.0.1 -p 3128 mgr:info shows access denined

2024-07-11 Thread Amos Jeffries

Oh, I see the problem:

  http_port 127.0.0.1:3128 intercept ...

 (which also means you lack a firewall rule preventing external 
software like squidclient from sending traffic directly to your 
intercept port.)



Please **do not** use port 3128 to receive intercepted traffic.


I recommend changing your main port to this:

   http_port 3128 ssl-bump 

and receiving the intercepted traffic on:

  http_port 3129 intercept ssl-bump ...


and check your firewall has all the rules listed at 
.

One to note in particular is the "mangle" table rule.
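
On Linux that rule looks like the following sketch (port 3129 is the intercept port from the examples above; adjust for your actual firewall):

```
# Drop packets that arrive already addressed to the intercept port.
# NAT-redirected traffic still carries dport 80/443 at this stage,
# so only direct (bypass/abuse) connections get dropped.
iptables -t mangle -A PREROUTING -p tcp --dport 3129 -j DROP
```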


Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] TCP_MISS_ABORTED/502

2024-07-11 Thread Amos Jeffries

On 12/07/24 03:37, Ben Toms wrote:

Hi folks,

We’re looking to leverage squid-cache as an accelerator, but for large 
content. For example, a local cache of macOS installers so that the 
internet line isn’t swamped when updating Photoshop etc across devices.


Below is an example of the conf I’ve been using (and have been going 
backwards and forwards trying different things):


https_port 443 accel protocol=HTTPS tls-cert=/usr/local/squid/client.pem 
tls-key=/usr/local/squid/client.key


cache_peer public.server.fqdn parent 443 0 no-query originserver 
no-digest no-netdb-exchange tls login=PASSTHRU name=myAccel




I suggest also adding the option to this cache_peer line:
   forceddomain=public.server.fqdn



acl our_sites dstdomain local.server.fqdn

http_access allow our_sites

cache_peer_access myAccel allow our_sites

cache_peer_access myAccel deny all

refresh_pattern -i public.server.fqdn/.* 3600    80% 14400


Note: you do not need to put ".*" at either end of a regex. It is implicit.
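
So the rule above can be written as (dots escaped, since refresh_pattern takes a regular expression; "public.server.fqdn" remains the placeholder hostname):

```
refresh_pattern -i public\.server\.fqdn/ 3600 80% 14400
```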




cache_dir ufs /usr/local/squid/var/cache 10 16 256

When I attempt to curl a file from local.server.fqdn, I can see that 
there has been a request made to public.server.fqdn and that the 
authentication has been passed through and all is well (it returns a 200 
code and needs authentication),



That does not make sense. "needs authentication" in HTTP is a 4xx status 
code.


A response cannot be 200 "OK, successful complete" and "needs 
authentication" at the same time.



but I’m seeing TCP_MISS_ABORTED/502 in 
/var/log/squid/access.log as per the below:


1720711470.297 84 192.168.0.156 TCP_MISS_ABORTED/502 3974 GET 
https://local.server.fqdn/some/file/path 
 - 
FIRSTUP_PARENT/public.ip.of.public.server text/html


Seems like the client to squid-cache HTTPS connection is fine, and 
squid-cache can contact public.server.fqdn.. but nothing is cached.




There is nothing in the above which indicates a problem caching.

There is a client doing unexpected abort - which may (or not) have 
side-effects on storage of the response. But still no problem exactly - 
clients can do what they want.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Rewriting HTTP to HTTPS for generic package proxy

2024-07-11 Thread Amos Jeffries

On 11/07/24 00:49, Alex Rousskov wrote:

On 2024-07-09 18:25, Fiehe, Christoph wrote:

I hope that somebody has an idea, what I am doing wrong. 


AFAICT from the debugging log, it is your parent proxy that returns an 
ERR_SECURE_CONNECT_FAIL error page in response to a seemingly valid 
"HEAD https://..." request. Can you ask their admin to investigate? You 
may also recommend that they upgrade from Squid v4, which has many known 
security vulnerabilities.


If parent is uncooperative, you can try to reproduce the problem by 
temporary installing your own parent Squid instance and configuring your 
child Squid to use that instead.


HTH,

Alex.
P.S. Unlike Amos, I do not see serious conceptual problems with 
rewriting request target scheme (as a temporary compatibility measure). 
It may not always work, for various reasons, but it does not necessarily 
make things worse (and may make things better).




To which I refer you to:
 https://cwe.mitre.org/data/definitions/311.html
 https://cwe.mitre.org/data/definitions/312.html
 https://cwe.mitre.org/data/definitions/319.html

Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid 6.6 error clientProcessHit: Vary object loop!

2024-07-11 Thread Amos Jeffries

On 12/07/24 06:43, Jonathan Lee wrote:

What is Vary Object loop??



In HTTP URLs can point at a set or "variants" of a resource.

Squid "Vary Object" is an entry in the cache that is used to represent 
these types of resource.
 When the URL-only is looked up, the "Vary Object" is found and tells 
Squid to perform a second lookup appending certain details to the hash 
key to find the actual object that client needs.


A "Vary Object loop" is when this second lookup finds an object which is 
not the desired variant of that original URL.



You can see this in your first message ...


>> 10.07.2024 09:56:30 clientProcessHit: Vary object loop!
>> 10.07.2024 09:56:30 varyEvaluateMatch: Oops. Not a Vary match on
>> second attempt,

Original request came in for "https%3A%2F%2Fstatic.foxnews.com"

That URL had a cached "Vary Object" saying there were different 
responses to provide depending on the Accept-Encoding header:


Squid performed a second lookup for:

>> 'origin="https%3A%2F%2Fstatic.foxnews.com", 
accept-encoding="gzip,%20deflate,%20br,%20zstd"'



... which found a cache entry for this URL:

>> 
'https://zagent20.h-cdn.com/cmd/get_thumb_info?customer=foxnews&ver=1.165.67&url=https%3A%2F%2F247preview.foxnews.com%2Fhls%2Flive%2F2020027%2Ffncv3preview%2Findex.m3u8'



which is not the same URL.




Does that  mean clear my cache?


No. But yes.

A quick check of the URLs from your log message with the tool at 
 



We can see that:

"
The resource doesn't send Vary consistently.
The ETag doesn't change between negotiated representations.
"


You can ignore these log messages.

Or, you can configure Squid not to cache content from this server. If 
you do this, then clearing the cache would stop the log entries continuing.
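A minimal sketch of the "do not cache this server" approach (the domain comes from the log excerpt; adjust the ACL to match your situation):

```
# Skip caching for the origin that sends Vary inconsistently
acl broken_vary dstdomain .foxnews.com
cache deny broken_vary
```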



Or is that something I am missing has 
anyone else seen this?




That origin server is broken. So likely everyone is seeing the same 
problem with that website.




11.07.2024 11:36:49 clientProcessHit: Vary object loop!
11.07.2024 11:36:49	varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 
'https://static.foxnews.com/static/orion/styles/css/fox-news/article-new.rs.css' 'accept-encoding="gzip,%20deflate,%20br,%20zstd"'

11.07.2024 11:36:49 clientProcessHit: Vary object loop!
11.07.2024 11:36:49	varyEvaluateMatch: Oops. Not a Vary match on second 
attempt, 
'https://static.foxnews.com/static/strike/ver/foxnews/loader.global.js' 
'accept-encoding="gzip,%20deflate,%20br,%20zstd"'

31.12.1969 16:00:00 




Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squidclient -h 127.0.0.1 -p 3128 mgr:info shows access denined

2024-07-11 Thread Amos Jeffries



Lets see ...

>>> On Jul 11, 2024, at 11:02, Jonathan Lee wrote:
>>> Shell Output - squidclient -h 127.0.0.1 -v -U admin -W redacted
>>> mgr:info
>>>
>>> Request:
>>> GET http://127.0.0.1:3128/squid-internal-mgr/info HTTP/1.0
>>> Host: 127.0.0.1:3128
>>> User-Agent: squidclient/6.6
>>> Accept: */*
>>> Authorization: Basic YWRtaW4..REDACTED..Q==
>>> Connection: close


On 12/07/24 06:12, Jonathan Lee wrote:

http_access allow CONNECT wuCONNECT localnet
http_access allow CONNECT wuCONNECT localhost



 ... GET is not CONNECT. Skip the above.



http_access allow windowsupdate localnet
http_access allow windowsupdate localhost



 ... 127.0.0.1 is not in *.microsoft.com. Skip the above.



http_access allow HttpAccess localnet
http_access allow HttpAccess localhost



 ... 127.0.0.1 is not listed in /usr/local/pkg/http.access. Skip the above.



http_access deny manager



 ... /squid-internal-mgr/ matches.  DENY the request.


Problem solved.

What you should do is restore the default security settings which we 
ship with Squid.


Place these above your custom http_access lines:

  http_access deny !Safe_ports
  http_access deny CONNECT !SSL_ports
  http_access allow localhost manager
  http_access deny manager


see  for the ACL details 
if you need them too.
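For reference, the port ACL definitions those rules rely on, as shipped in the stock squid.conf (trim the Safe_ports list to what you actually use; "manager" and "localhost" are built-in ACLs since Squid 3.2):

```
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 1025-65535  # unregistered ports
```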




Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squidclient -h 127.0.0.1 -p 3128 mgr:info shows access denined

2024-07-11 Thread Amos Jeffries

On 12/07/24 05:27, Jonathan Lee wrote:

Thanks, what about the password? Is it set with @ or -p, and where would I place that?


Neither. It is set with -W .

Amos



Sent from my iPhone


On Jul 11, 2024, at 10:17, Amos Jeffries wrote:

It is very relevant. As Matus already mentioned, both -U and -W.


squidclient -v -U admin -W cachemgr_password mgr:info
Request:
GET http://localhost:3128/squid-internal-mgr/info HTTP/1.0
Host: localhost:3128
User-Agent: squidclient/6.10
Accept: */*
Authorization: Basic YWRtaW46Y2FjaGVtZ3JfcGFzc3dvcmQ=
Connection: close


squidclient -v -U admin -W cachemgr_password /squid-internal-mgr/info
Request:
GET /squid-internal-mgr/info HTTP/1.0
Host: localhost:3128
User-Agent: squidclient/6.10
Accept: */*
Authorization: Basic YWRtaW46Y2FjaGVtZ3JfcGFzc3dvcmQ=
Connection: close


Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] squidclient -h 127.0.0.1 -p 3128 mgr:info shows access denined

2024-07-11 Thread Amos Jeffries


On 11/07/24 06:08, Alex Rousskov wrote:

On 2024-07-10 12:55, Jonathan Lee wrote:


Embedding a password in a cache manager command requires providing a
username with -U



squidclient -w /squid-internal-mgr/info -u admin
squidclient -w /squid-internal-mgr/info@redacted -u admin
squidclient -w 
http://192.168.1.1:3128/squid-internal-mgr/info@redacted -u admin
squidclient -w http://127.0.0.1:3128/squid-internal-mgr/info@redacted 
-u admin

squidclient -w http://127.0.0.1:3128/squid-internal-mgr/info
squidclient http://127.0.0.1:3128/squid-internal-mgr/info
squidclient -h 127.0.0.1:3128/squid-internal-mgr/info
squidclient -h 127.0.0.1 /squid-internal-mgr/info
squidclient -h 127.0.0.1 /squid-internal-mgr/info@redcated
squidclient -w 127.0.0.1 /squid-internal-mgr/info@redacted
squidclient -w 127.0.0.1 /squid-internal-mgr/info@redcated -u admin
squidclient -h 192.168.1.1:3128  /squid-internal-mgr/info@redacted
squidclient -h 192.168.1.1  /squid-internal-mgr/info@redacted
squidclient -h 192.168.1.1  /squid-internal-mgr/info

with -w -u -h http spaces I can’t get it to show me stats

Squid 6.6


I do not know whether this mistake is relevant, but squidclient 
documentation and error message imply that you should be using "-U" 
(capital letter U) while you are using "-u" (small letter u).



It is very relevant. As Matus already mentioned, both -U and -W.


squidclient -v -U admin -W cachemgr_password mgr:info
Request:
GET http://localhost:3128/squid-internal-mgr/info HTTP/1.0
Host: localhost:3128
User-Agent: squidclient/6.10
Accept: */*
Authorization: Basic YWRtaW46Y2FjaGVtZ3JfcGFzc3dvcmQ=
Connection: close


squidclient -v -U admin -W cachemgr_password /squid-internal-mgr/info
Request:
GET /squid-internal-mgr/info HTTP/1.0
Host: localhost:3128
User-Agent: squidclient/6.10
Accept: */*
Authorization: Basic YWRtaW46Y2FjaGVtZ3JfcGFzc3dvcmQ=
Connection: close


Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Rewriting HTTP to HTTPS for generic package proxy

2024-07-10 Thread Amos Jeffries

On 10/07/24 22:57, Fiehe, Christoph wrote:

The idea behind was to find a way to cache packages from a repository that only 
provides HTTPS-based connections. It would work, when the HTTPS connection 
terminates at the Squid Proxy and not at the client, so that the proxy can 
forward the message payload to the client using normal HTTP. Apt-Cacher-NG 
implements the behavior, but it seems to be too buggy to use in a productive 
environment.

There is no way to achieve that with standard Squid mechanisms?



At risk of allowing bad actors to install arbitrary software on all of 
your clients: You can direct all the archive traffic to a cache_peer 
with port 443 and "originserver tls" flags.


YMMV, caveat emptor.
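A hedged sketch of that setup (the hostname and peer name are placeholders; this sends matching traffic to the HTTPS-only origin while clients speak plain HTTP to Squid, with the security caveat above):

```
# Treat the HTTPS-only mirror as an origin-server cache_peer
cache_peer mirror.example.org parent 443 0 no-query originserver tls name=pkgmirror

# Route only the package traffic to that peer
acl pkg_mirror dstdomain mirror.example.org
cache_peer_access pkgmirror allow pkg_mirror
cache_peer_access pkgmirror deny all
never_direct allow pkg_mirror
```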


Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Unable to explain 407 Proxy Authentication Required

2024-07-10 Thread Amos Jeffries

On 9/07/24 02:39, Random Dude wrote:

Hey everyone.

I'm trying to get a minimal forward proxy with authentication set up. I 
have the following config (purposely kept as minimal as possible) and 
have followed these steps - 
https://wiki.squid-cache.org/ConfigExamples/Authenticate/ 



squid.conf ---
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwords
auth_param basic children 5
auth_param basic credentialsttl 1 minute
acl CONNECT method CONNECT
acl auth proxy_auth REQUIRED
http_port 3128
http_access deny !auth
http_access allow auth
http_access deny all

---

However, no matter what I do I always get a 407 Proxy Authentication 
Required response from the proxy. I've been testing with "curl -v -U 
: -x localhost:3128 " I must be missing 
something very simple so what am I doing wrong?




The config above is correct. So whatever the issue, it is not Squid.

I would start with a check to see if the login you are testing with is 
correctly encoded in the passwords file.


This command line should tell you that:

 echo "username password" | \
 /usr/lib/squid/basic_ncsa_auth /etc/squid/passwords
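If the entry turns out to be missing or stale, one way to (re)create it without htpasswd is via openssl's apr1 hashing, which basic_ncsa_auth accepts (the username, password, and file path below are placeholders, not values from this thread):

```shell
# Generate an NCSA-style entry of the form "user:$apr1$salt$hash"
hash=$(openssl passwd -apr1 'secret123')
printf 'alice:%s\n' "$hash" > /tmp/squid_passwords

# Sanity check: the entry is present and uses the apr1 scheme
grep -c '^alice:\$apr1\$' /tmp/squid_passwords
```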


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Rewriting HTTP to HTTPS for generic package proxy

2024-07-10 Thread Amos Jeffries

On 10/07/24 10:25, Fiehe, Christoph wrote:

Hallo,

I hope that somebody has an idea, what I am doing wrong. I try to build a 
generic package proxy with Squid and need the feature to rewrite (not redirect) 
a HTTP request to a package repository transparently to a HTTPS-based package 
source.


The "Wrong" starts with the very idea you have that re-writing a URL 
scheme is even useful.



While it may *seem* like an okay idea, what you are actually doing is 
exposing the HTTPS-secured response message to transmission over 
insecure connections from Squid to the client. That is prohibited by HTTPS, 
which assumes/requires encryption negotiated between the origin client 
and the origin server.



The best you can do for a regular proxy. Is *redirect* the http:// 
requests with a 302 message telling the client to use https:// instead.



  ...
  http_access deny !to_archive_mirrors

  acl HTTP proto HTTP
  deny_info 302:https://%>rd%rp HTTP
  http_access deny HTTP

  http_access allow src_networks
  ...



HTH
Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ICMP and QUIC

2024-07-08 Thread Amos Jeffries

On 8/07/24 16:42, Jonathan Lee wrote:
Does anyone use this directive for QUIC in the meantime… what’s weird 
is that the IP address is Apple's when Facebook is running


on_unsupported_protocol



This directive is only relevant to protocols Squid receives over TCP 
connections. For SSL-Bumped CONNECT tunnels or intercepted ports.


Squid does not support QUIC at all yet.




On Jul 7, 2024, at 21:24, Jonathan Lee wrote:

I have just found... YEAH!!! has anyone tested this? Does Squid 6.6 
have it?


QUIC foundations for HTTP/3 by yadij · Pull Request #919 · 
squid-cache/squid 

github.com 


As of this writing, that work is a Draft for design review. It still 
needs a lot of protocol support added before it can be used for more 
than debugging experiments.


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Upgrade path from squid 4.15 to 6.x

2024-06-14 Thread Amos Jeffries

On 14/06/24 20:43, NgTech LTD wrote:

Hey Amos,

Ok, so with the tools we have available, can we take this case and maybe 
write a brief summary of changes between the squid features versions?




That is what the Release Notes are.

Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Information Request: "Accept-Ranges" with use of SSL intercept and dynamic update caching

2024-06-14 Thread Amos Jeffries

On 11/06/24 16:47, Jonathan Lee wrote:
The reason I ask is sometimes Facebook when I am using it locks up and 
my fan goes crazy I close Safari and restart the browser and it works 
fine again. It acts like it is restarting a download over and over again.




Because it is. Those websites use "infinite scrolling" for delivery. 
Accept-Ranges tells the server that it does not have to re-deliver the 
entire JSON dataset for the scrolling part in full, every few seconds.


That header is defined by 




HTH
Amos



On Jun 10, 2024, at 21:45, Jonathan Lee wrote:

Hello fellow Squid community can you please help?

Should I be using the following if I have SSL certificates, dynamic updates, 
StoreID, and ClamAV running?

*request_header_access Accept-Ranges deny all reply_header_access 
Accept-Ranges deny all request_header_replace Accept-Ranges none 
reply_header_replace Accept-Ranges none*


None of the documents show what Accept-Ranges does

Can anyone help explain this to me?



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Upgrade path from squid 4.15 to 6.x

2024-06-14 Thread Amos Jeffries



Regarding the OP question:

Upgrade for all Squid-3 is to:
 * read release notes of N thru M versions (as-needed) about existing 
feature changes

 * install the new version
 * run "squid -k parse" to identify mandatory changes
 * fix all "FATAL" and "ERROR" identified
 * run with new version

... look at all logged "NOTICE", "UPGRADE" etc, and the Release Notes 
new feature additions to work on operational improvements possible with 
the new version.
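To mechanise the "fix all FATAL and ERROR" step, the parse output can be filtered for the lines that block startup (the log path and sample lines below are stand-ins; real output comes from `squid -k parse 2>&1`):

```shell
# Stand-in for captured "squid -k parse" output
cat > /tmp/parse.log <<'EOF'
2024/06/14 10:00:00| ERROR: Unknown directive: http_acces
2024/06/14 10:00:00| NOTICE: obsolete option ignored
2024/06/14 10:00:00| FATAL: Bungled /etc/squid/squid.conf line 42
EOF

# Only FATAL and ERROR entries must be fixed before running the new version
grep -Ec 'FATAL|ERROR' /tmp/parse.log
```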



HTH
Amos


On 10/06/24 19:43, ngtech1ltd wrote:


@Alex and @Amos, can you try to help me compile a menu list of 
functionalities that Squid-Cache can be used for?




The Squid wiki ConfigExamples section does that. Or at least is supposed 
to, with examples per use-case.



FYI, this line of discussion you have is well off-topic for Akash's 
original question and I think is probably just adding confusion.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Samba DNS Invalid zone operation IsSigned

2024-06-09 Thread Amos Jeffries

Hi Ronny,
 This is the Squid users mailing list.  You would be better served 
contacting the Samba help channels for this problem.



Cheers
Amos


On 8/06/24 23:05, Ronny Preiss wrote:

Hi Everybody,

Does someone know where this comes from and how to solve it? I've 
changed nothing for weeks.


- BIND 9.18.18-0ubuntu0.22.04.2-Ubuntu
- Ubuntu 22.04.4 LTS \n \l
- Samba Version 4.19.0 AC-DC

### /var/log/syslog
Jun  8 13:01:31 01-dc02 samba[996]: [2024/06/08 13:01:31.057443,  0] 
../../source4/rpc_server/dnsserver/dcerpc_dnsserver.c:1076(dnsserver_query_zone)
Jun  8 13:01:31 01-dc02 samba[996]:   dnsserver: Invalid zone operation 
IsSigned
Jun  8 13:01:31 01-dc02 samba[996]: [2024/06/08 13:01:31.060313,  0] 
../../source4/rpc_server/dnsserver/dcerpc_dnsserver.c:1076(dnsserver_query_zone)
Jun  8 13:01:31 01-dc02 samba[996]:   dnsserver: Invalid zone operation 
IsSigned
Jun  8 13:01:31 01-dc02 samba[996]: [2024/06/08 13:01:31.061385,  0] 
../../source4/rpc_server/dnsserver/dcerpc_dnsserver.c:1076(dnsserver_query_zone)
Jun  8 13:01:31 01-dc02 samba[996]:   dnsserver: Invalid zone operation 
IsSigned


Regards, Ronny

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users



Re: [squid-users] can't explain 403 denied for authenticated

2024-06-06 Thread Amos Jeffries


On 7/06/24 07:08, Kevin wrote:

 >
 >> acl trellix_phone_cloud dstdomain amcore-ens.rest.gti.trellix.com
 >> http_access deny trellix_phone_cloud
 >> external_acl_type host_based_filter children-max=15 ttl=0 %ACL %DATA %SRC %>rd %>rP /PATH/TO/FILTER/SCRIPT.py
 >> acl HostBasedRules external host_based_filter

 >> http_access allow HostBasedRules
 >> auth_param digest program /usr/lib/squid/digest_file_auth -c 
/etc/squid/passwd

 >> auth_param digest realm squid
 >> auth_param digest children 2
 >> auth_param basic program /usr/lib/squid/basic_ncsa_auth 
/etc/squid/basic_passwd

 >> auth_param basic children 2
 >> auth_param basic realm squidb
 >> auth_param basic credentialsttl 2 hours
 >
 >> acl auth_users proxy_auth REQUIRED
 >> external_acl_type custom_acl_db children-max=15 ttl=0 %ACL %DATA %ul %SRC %>rd %>rP %credentials /PATH/TO/FILTER/SCRIPT.py
 >> acl CustomAclDB external custom_acl_db

 >> http_access allow CustomAclDB
 >
 >
 >Hmm, this use of combined authentication+authorization is a bit tricky
 >with two layers of asynchronous helper lookups going on. That alone
 >might be what is going on with the weird 403's.
 >
 >
 >A better sequence would be:
 >
 ># ensure login is performed
 >http_access deny !auth_users
 >
 ># check the access permissions for whichever user logged in
 >http_access allow CustomAclDB


The first call to the external_acl is to process unauthenticated 
requests.   Is the suggestion to replace


acl auth_users proxy_auth REQUIRED

with

http_access deny !auth_users

before the second external_acl (for authenticated requests)?



No. It is to ensure that "missing credentials" are treated differently 
than "bad credentials". Specifically that any auth challenge response is 
never able to be given "allow" permission.


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] can't explain 403 denied for authenticated

2024-06-05 Thread Amos Jeffries

Free config audit inline ...

On 6/06/24 05:24, Kevin wrote:


Understood.   Here it is:


acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7   # RFC 4193 local private network range
acl localnet src fe80::/10  # RFC 4291 link-local (directly plugged) 
machines
acl windows_net src 172.18.114.0/24
acl sample_host src 172.18.115.1/32
acl rsync port 873


You can remove the above line. This "rsync" ACL is unused and the port 
is added directly to the SSL_Ports and Safe_ports.




acl SSL_ports port 443
acl SSL_ports port 873  #873 is rsync
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 873
acl CONNECT method CONNECT


You can remove the above line. "CONNECT" is a built-in ACL.



acl PURGE method PURGE


Your proxy is configured with "cache deny all" preventing anything being 
stored.


As such you can improve performance somewhat by removing the "acl PURGE" 
and all config below that uses it.




acl localhost src 127.0.0.1


You can remove the above line. "localhost" is a built-in ACL.



http_access allow PURGE localhost
http_access deny PURGE
acl URN proto URN
http_access deny URN
http_access deny manager
acl API_FIREFOX dstdomain api.profiler.firefox.com
http_access deny API_FIREFOX
acl ff_browser browser ^Mozilla/5\.0
acl rma_ua browser ^RMA/1\.0.*compatible;.RealMedia
uri_whitespace encode


Hmm. Accepting whitespace in URLs is a risky choice. One can never be 
completely sure how third-party agents in the network handled it 
before the request arrived.


If (big IF) you are able to use "uri_whitespace deny" this proxy would 
be a bit more secure. This is just a suggestion, you know best here.




acl trellix_phone_cloud dstdomain amcore-ens.rest.gti.trellix.com
http_access deny trellix_phone_cloud
external_acl_type host_based_filter children-max=15 ttl=0 %ACL %DATA %SRC %>rd 
%>rP  /PATH/TO/FILTER/SCRIPT.py
acl HostBasedRules external host_based_filter
http_access allow HostBasedRules
auth_param digest program /usr/lib/squid/digest_file_auth -c /etc/squid/passwd
auth_param digest realm squid
auth_param digest children 2
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/basic_passwd
auth_param basic children 2
auth_param basic realm squidb
auth_param basic credentialsttl 2 hours



acl auth_users proxy_auth REQUIRED
external_acl_type custom_acl_db children-max=15 ttl=0 %ACL %DATA %ul %SRC %>rd 
%>rP %credentials /PATH/TO/FILTER/SCRIPT.py
acl CustomAclDB external custom_acl_db
http_access allow CustomAclDB



Hmm, this use of combined authentication+authorization is a bit tricky 
with two layers of asynchronous helper lookups going on. That alone 
might be what is going on with the weird 403's.



A better sequence would be:

 # ensure login is performed
 http_access deny !auth_users

 # check the access permissions for whichever user logged in
 http_access allow CustomAclDB



acl CRLs url_regex "/etc/squid/conf.d/CRL_urls.txt"
http_access allow CRLs
deny_info 303:https://abc.def.com/



FYI; deny_info is a way to customize what happens when the "deny" action 
is performed on a specific ACL match.


The above deny_info line does nothing unless you name which ACL(s) it is 
to become the action for and also those ACLs are used in a "deny" rule.


For example:

 acl redirect_HTTPS url_regex ^http://example\.com
 deny_info 303:https://example.com%rp redirect_HTTPS
 http_access deny redirect_HTTPS




http_access deny all



So ... none of the http_access lines below here are doing anything.



acl apache rep_header Server ^Apache
icp_access allow localnet
icp_access deny all



These lines...


http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports


.. to here are supposed to protect your proxy against some nasty DDoS 
type attacks.


They need to be first out of all your http_access lines in order to do 
that efficiently.



The http_access below are optional from our default squid.conf setup. 
Since your install does not appear to need them they can just be removed.




http_access allow localhost manager
http_access deny manager
http_access allow localhost



http_port 3128
coredump_dir /var/cache/squid
refresh_pattern ^ftp:   144020% 10080
refresh_pattern ^gopher:14400%  1440
refresh_pattern -i (/cgi-bin/|\?) 0 0%  0
refresh_pattern .   0   20% 4320
logformat squid %ts.%03tu %6tr %>a %Ss/%>Hs %<st %rm %ru %[un %Sh/%<a %mt

Re: [squid-users] can't explain 403 denied for authenticated user

2024-05-30 Thread Amos Jeffries

On 25/05/24 07:28, Kevin wrote:

Hi,

We have 2 external ACLs that take a request's data (IP, authenticated 
username, URL, user-agent, etc) and uses that information to determine 
whether a user or host should be permitted to access that URL.   It 
almost always works well, but we have a recurring occasional issue that 
I can't figure out.


When it occurs, it's always around 4AM.   This particular request occurs 
often - averages about once a second throughout the day.


What we see is a "403 forbidden" for a (should be) permitted site from 
an authenticated user from the same IP/user and to the same site that 
gets a "202 connection established" every other time.


Is 4am perhaps the time when your auth system refreshes all nonces?
 - thus making any currently in use by the clients invalid until they 
re-auth. You might see a mix of 403/401/407 in a bunch at such times.



Or maybe in a similar style one/some of the clients is broken and fails 
to update its nonce before it expires at 4am?
 - looking at which client agent and IP were getting the 403 and/or the 
nonce which received 403 will give you hints about this possibility.



Or your network router(s) do garbage collection and terminate 
long-running connections to free up TCP resources?

 - thus forcing a lot of client re-connects at 4am, which may:
 a) overload the auth system/helper, or
 b) break a transaction that included nonce update for clients - 
resulting in their next request being invalid nonce.



Or maybe you have log processing software that does "squid -k restart" 
instead of the proper "squid -k rotate" to get access to the log files?


Or maybe your auth system has a limit on how large nonce-count can become?
 - I notice that the working request has 0x2F uses and the forbidden 
has 0x35 (suspiciously close to 50 in decimal)







The difference I see in the logs:  though all the digest auth info looks 
okay, the %un field in the log for the usual (successful) request is the 
authenticated username, while in the failed request, it's "-".   So 
though there hasn't been an authentication error or "authentication 
required" in the log - and the username is in the authentication details 
in the log entry -  it seems like squid isn't recognizing that username 
as %un.



Be aware that a properly behaving client will *not* send credentials 
until they are requested, after which it should *always* send 
credentials on that same connection (even if they are not requested 
explicitly).


That means some requests on a multiplex/pipeline/keep-alive connection 
MAY arrive with credentials and be accepted(2xx)/denied(403) without 
authentication having occured. Entirely due to your *_access directives 
sequence. In these cases the log will show auth headers but no value for 
%un and/or %ul.





My squid.conf first tests a request to see if an unauthenticated request 
from a particular host is permitted.  That external ACL doesn't take a 
username as an argument.   If that external ACL passes, the request is 
allowed.




Please *show* the config lines rather than describing what you *think* 
they do. Their exact ordering matters *a lot*.


 Obfuscation of sensitive details is okay/expected so long as you make 
it easy for us to tell that value A and value B are different.



FWIW; if your config file is *actually* containing only what you 
described it is missing a number of things necessary to have a safe and 
secure proxy. A look at your full config (without comments or empty 
lines) will help us point out any unnoticed issues for you to consider 
fixing.




The next line in squid.conf is

acl auth_users proxy_auth REQUIRED



FYI the above just means that Squid is using authentication. It says 
nothing about when the authentication will be (or not be) performed.



... and after that, the external ACL that takes the username as well as 
the other info.





HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Validation of IP address for SSL spliced connections

2024-05-30 Thread Amos Jeffries

On 30/05/24 18:30, Rik Theys wrote:

Hi,

On 5/29/24 11:31 PM, Alex Rousskov wrote:

On 2024-05-29 17:06, Rik Theys wrote:

On 5/29/24 5:29 PM, Alex Rousskov wrote:

On 2024-05-29 05:01, Rik Theys wrote:



squid doesn't seem to validate that the IP address we're connecting 
to is valid for the specified name in the SNI header?


That observation matches my reading of Squid Host header forgery 
detection code which says "we do not yet handle CONNECT tunnels 
well, so ignore for them". To validate that theory, use 
"debug_options ALL,3" and look for "SECURITY ALERT: Host header 
forgery detected" messages in cache.log.


I've enabled this debug option, but I never see the security alert in 
the logs. Maybe it was introduced in more recent versions? I'm 
currently using Squid 5.5 that comes with Rocky Linux 9.4.


The code logging "SECURITY ALERT: Host header forgery detected" 
messages is present in v5.5, but perhaps it is not triggered in that 
version (or even in modern/supported Squids) when I expect it to be 
triggered. Unfortunately, there are too many variables for me to 
predict what exactly went wrong in your particular test case without 
doing a lot more work (and I cannot volunteer to do that work right now).


Looking at https://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery, 
it always seems to mention the Host header. It has no mention of 
performing the same checks for the SNI value. Since we're peeking at the 
request, we can't see the actual Host header being sent.




FYI, The SSL-Bump feature uses a CONNECT tunnel at the HTTP layer to 
transfer the HTTPS (or encrypted non-HTTPS) content through Squid. The 
SNI value, cert altSubjectName, or raw-IP (whichever most-trusted is 
available) received from peek/bump gets used as the Host header on that 
internal CONNECT tunnel.


The Host header forgery check at HTTP layer is performed on that 
HTTP-level CONNECT request regardless of whether a specific SNI-vs-IP 
check was done by the TLS logic. Ideally both layers would do it, but 
SSL-Bump permutations/complexity makes that hard.




And indeed: if I perform the same test for HTTP traffic, I do see the 
error message:


curl http://wordpress.org --connect-to wordpress.org:80:8.8.8.8:80


I believe that for my use-case (only splice certain domains and prevent 
connecting to a wrong IP address), there's currently no solution then. 
Squid would also have to perform a similar check as the Host check for 
the SNI information. Maybe I can perform the same function with an 
external acl as you've mentioned. I will look into that later. Thanks 
for your time.
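As a rough sketch of that external-acl idea (the helper path is a placeholder, and the format codes are assumptions to verify against your Squid version; %ssl::>sni requires a version that accepts logformat codes in external_acl_type, i.e. Squid 4 or later):

```
# Pass the peeked SNI and the requested destination to a helper that
# resolves the name and answers OK/ERR depending on whether the
# destination IP is among the name's DNS records.
external_acl_type sni_check ttl=60 children-max=5 %ssl::>sni %DST /usr/local/bin/check_sni_ip
acl sni_ok external sni_check

ssl_bump splice sni_ok
ssl_bump terminate all
```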



IIRC there is at least one SSL-Bump permutation which does server name 
vs IP validation (in a way, not explicitly). But that particular code 
path is not always taken and the SSL-Bump logic does not go out of its 
way to lookup missing details. So likely you are just not encountering 
the rare case that SNI gets verified.




HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] log_referrer question

2024-05-22 Thread Amos Jeffries

On 22/05/24 07:51, Alex Rousskov wrote:

On 2024-05-21 13:50, Bobby Matznick wrote:
I have been trying to use a combined log format for squid. The below 
line in the squid config is my current attempt.


logformat combined %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh


Please do not redefine built-in logformat configurations like "squid" 
and "combined". Name and define your own instead.




For built-in formats do not use logformat directive at all. Just 
configure the log output:


 access_log daemon:/var/log/squid/access.log combined


As Alex said, please do not try to re-define the built-in formats. If 
you must define *a* format with the same/similar details, use a custom 
name for yours.
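For example, to get combined-style output under a custom name (the field list mirrors the built-in "combined" format; double-check the codes against your version's logformat documentation):

```
logformat mycombined %>a %[ui %[un [%tl] "%rm %ru HTTP/%rv" %>Hs %<st "%{Referer}>h" "%{User-Agent}>h" %Ss:%Sh
access_log daemon:/var/log/squid/access.log mycombined
```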




So, checked with squid -v and do not see “--enable-referrer_log” as one 
of the configure options used during install. Would I need to 
reinstall, or is that no longer necessary in version 4.13?


referer_log and the corresponding ./configure options were removed 
a long time ago, probably before v4.13 was released.




Since Squid v3.2 that log has been a built-in logformat. Just configure 
a log like this:


 access_log daemon:/var/log/squid/access.log referrer


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Tune Squid proxy to handle 90k connection

2024-05-16 Thread Amos Jeffries

On 17/05/24 02:23, Bolinhas André wrote:

Hi Alex
As I explained, by default I set those directives to off to avoid high 
CPU consumption.



Ah, actually with NTLM auth you are using *more* CPU per transaction 
with those turned off.


The thing is that auth takes a relatively long time to happen, so the 
transactions are slower. Hiding the fact that they are, in total, using 
more CPU and TCP networking resources.




My doubt is enabling persistent connection will help squid to process 
the request more efficiently and gain more performance or not.




With persistent connections disabled, every client request must:

 1) wait for a TCP socket to become free for use
 2) perform a full SYN / SYN+ACK exchange to open it for use
 3) perform a NTLM challenge-response over HTTP
 4) wait for a second TCP socket to become free for use
 5) perform a full SYN / SYN+ACK exchange to open it for use
 6) perform the actual HTTP NTLM authenticated transaction.

Then
 7) locate a server that can be used
 8) wait for a TCP socket to become free for use
 9) perform a full SYN / SYN+ACK exchange to open it for use
 10) send the request on to the found server


That is a LOT of time, CPU, and networking.


With persistent connections enabled, only the first request looks like 
above. The second, third etc look like below:



 11) perform the HTTP NTLM authenticated transaction.

Then
 12) locate a server that can be used
 13) send the request on to the found server


 14) perform the HTTP NTLM authenticated transaction.

Then
 15) locate a server that can be used
 16) send the request on to the found server


That is MUCH better for performance.
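
For reference, a squid.conf sketch of the relevant directives (these are 
the stock defaults, shown explicitly only to make the advice concrete; 
the helper path matches the /usr/bin/ntlm_auth mentioned elsewhere in 
this thread but may differ on your system):

```
# Persistent connections are "on" by default; only set these if they
# were previously forced off.
client_persistent_connections on
server_persistent_connections on

# NTLM authenticates the TCP connection itself, so keep the helper
# connection alive as well (program path is illustrative).
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp
auth_param ntlm keep_alive on
```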


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] deny_info URL not working

2024-05-12 Thread Amos Jeffries

On 12/05/24 17:48, Dieter Bloms wrote:

Hello,

On Sat, May 11, Vilmondes Queiroz wrote:


deny_info http://example.com !authorized_ips


does it works, if you add the http status code like:

deny_info 307:http://example.com !authorized_ips



Also the "!" is not valid here. The ACL on deny_info lines is the name 
of the one that is to be adjusted when it is used for a "deny" action 
by, for example, "http_access deny".



  acl authorized_ips src ...
  deny_info 307:http://example.com authorized_ips
  http_access deny !authorized_ips


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Dynamic ACL with local auth

2024-05-08 Thread Amos Jeffries

On 8/05/24 19:55, Albert Shih wrote:

Le 06/05/2024 à 12:21:10+0300, ngtech1ltda écrit
Hi,



The right way to do it is to use an external acl helper that will use some kind 
of database for the settings.


Ok. I will check that.


The other option is to use a reloadable ACLs file.


But does this reload need a restart of the service?


But you need to clarify exactly the goal if you want more than basic advice.


Well..pretty simple task


Ah, this is about equivalent to "just create life" level of simplicity.


I expect that what you need is doable, but not in the way you are 
describing so far.



(PS. If you can mention how much experience you have working with 
Squid configuration it will help us know how much detail we can skip 
over when offering options.)





I need to build a squid server to allow/deny
people access to some data (website) because those website don't support
authentication.



So Squid needs to authenticate. Is that every request or on a 
per-resource (URL) basis?


 A) needs only simple auth setup
or
 B) needs auth setup, with ACL(s) defining when to authenticate



But the rule of access “allow/deny” are manage in other place through
another application.



What criteria/details is this other application checking?

Can any of its decision logic be codified as a sequence of Squid ACL 
types checked in some specific order?
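
If it can, one common pattern is to delegate the decision to an external 
ACL helper; a minimal sketch (the helper path, format token, and TTL here 
are illustrative assumptions, not a known part of your setup):

```
external_acl_type perm_check ttl=60 %LOGIN /usr/local/bin/check_perms
acl allowed_by_app external perm_check
http_access allow allowed_by_app
http_access deny all
```

The helper receives one lookup per line (here, the login name) on stdin 
and answers OK or ERR, so the allow/deny logic can live entirely in your 
other application's database.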


How are you expecting Squid to communicate with it?



So the goal is to have some «thing» that is going to retrieve the 
«permissions» of the user and apply the ACLs on Squid.



Please explain/clarify what **exactly** a "permission" is in your design?


Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Linux Noob - Squid Config

2024-05-07 Thread Amos Jeffries

On 7/05/24 07:59, Piana, Josh wrote:

Amos,

You raise a good point about Kerberos! I was not aware that Squid supported 
this method. Yes - I think we would preferably use this method, especially 
because this looks like it's much easier to setup and still checks all the 
boxes we need for security purposes.

With that being said, without using NTLM, can we bypass using Samba? We would 
rather not rely on that resource if possible.



I'm not sure how much of Samba needs to be set up to use the NTLM helper. 
It has been a while since I used it.




In regards to your responses to all of the lines of code, I'll be going through 
that separately and will get back to you if I have any more questions about it. 
After installing Squid, moving over and updating the old config, and adjusting 
the parameters you mentioned below, what else is there to do to finish setting 
up this server? I'm not entirely sure if Apache is needed anymore either. This 
would simplify and modernize our processes a great deal if this can be removed 
as well.



There is no sign in the squid.conf as to what Apache was being used for.
So that and any other services the old machine had going will still need 
your attention, but they are not related to Squid.



Cheers
Amos



- Josh

-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Monday, May 6, 2024 12:59 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Linux Noob - Squid Config

Caution: This email originated from outside of Hexcel. Do not click links or 
open attachments unless you recognize the sender and know the content is safe.


[ please keep responses on-list to assist any others who encounter the same 
issues in future ]

On 4/05/24 08:51, Piana, Josh wrote:

Hey Amos,

Thank you so much for getting back to me so quickly!

To answer your question about NTLM, I meant to say NTLMv2. We're trying to 
become compliant with newer security standards, and this old box is in 
desperate need of some love and updating.




Hmm. My question was aiming more at a yes/no answer.

Squid can certainly still support NTLM. But if possible going to just 
Negotiate/Kerberos auth would be a simpler config.

The /usr/bin/ntlm_auth authenticator you have been using is provided by Samba. 
So you will need to have Samba installed (yum install samba) and configured the 
same (or equivalent for its upgrade) as before Squid authentication is usable.

FYI; Modern Squid start helpers only as-needed. Meaning Squid will startup and 
run fine without a working auth helper ... until the point where a helper 
lookup is needed. So you can test Squid with some trivial requests before 
needing Samba fully working.



--
Current squid.conf file Output:

max_filedesc 4096



I advise changing this to at least:

max_filedescriptors 65536

Why? Modern web pages can cause clients to open up to a hundred connections to 
various servers to display a single web page. Each of those client 
connections consumes 3-4 file descriptors.

You will also need to check the OS limits to ensure they allow that many 
descriptors.
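
As a sketch, the usual commands to inspect those OS limits on a Linux 
host (exact values will differ per system):

```shell
# Per-process descriptor limit for the current shell/session.
ulimit -n

# System-wide maximum (Linux-specific; absent on other platforms).
cat /proc/sys/fs/file-max 2>/dev/null || true
```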



cache_mgr itadmin@...
cache_effective_user squid
cache_effective_group squid
coredump_dir /opt/squid/var
pid_filename /var/run/squid.pid
shutdown_lifetime 5 seconds
error_directory /usr/local/share/squid/errors/English_CUSTOM



Check what customizations have been done to the files inside that directory.

If it is just the new templates for the deny_info lines later in your config, 
then you can copy those templates to the new machine and create symlinks 
to them.

I suggest placing the custom error templates in a directory such as 
/etc/squid/errors/ and a symlink from the 
/usr/local/share/squid/errors/templates/ directory (or wherever the templates 
are put by yum install).
   [ This way upgrades that change the default templates will not erase your 
ones. At worst you should only have to re-create the symlinks manually. ]

(If you need it; to learn how to create symlinks type "man ln".)
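
A sketch of that symlink layout, demonstrated under /tmp so it is 
harmless to run (in reality the two directories would be 
/etc/squid/errors/ and the packaged templates directory):

```shell
# Stand-in for /etc/squid/errors/ holding your customized templates.
mkdir -p /tmp/errdemo/custom
echo "custom deny page" > /tmp/errdemo/custom/ERR_ACCESS_DENIED

# Stand-in for the packaged errors/templates directory.
mkdir -p /tmp/errdemo/templates

# Symlink so package upgrades never overwrite the custom copy.
ln -sf /tmp/errdemo/custom/ERR_ACCESS_DENIED \
       /tmp/errdemo/templates/ERR_ACCESS_DENIED

ls -l /tmp/errdemo/templates/
```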



logfile_rotate 0
debug_options ALL,1


You can remove the above line. It is a default setting.



buffered_logs on
cache_log /var/log/squid/general
cache_access_log /var/log/squid/access



The above two lines should be more like:

cache_log /var/log/squid/cache.log
access_log daemon:/var/log/squid/access.log



cache_store_log none
log_mime_hdrs off


The above two lines can be removed. They are default settings.



log_fqdn off


Remove this line. It is not supported in modern Squid.



strip_query_terms off
http_port 10.46.11.20:8080
http_port 127.0.0.1:3128
icp_port 0


The above line can be removed. It is a default setting.



forwarded_for off


Change that "off" to;
   * "delete" for complete removal of the header, or
   * "transparent" for Squid to not add the header.



ftp_user anonftpuser@...
ftp_list_width 32
ftp_passive on
connect_ti

Re: [squid-users] Linux Noob - Squid Config

2024-05-06 Thread Amos Jeffries
alhost' '--enable-underscores' 
'--enable-basic-auth-helpers=LDAP,MSNT,NCSA,PAM,SMB,YP,getpwnam,multi-domain-NTLM,SASL'
 '--enable-cache-digests' '--enable-ident-lookups' '--with-large-files' 
'--enable-follow-x-forwarded-for' '--enable-wccpv2' '--enable-fd-config' 
'--with-maxfd=16384' 'build_alias=i386-redhat-linux-gnu' 
'host_alias=i386-redhat-linux-gnu' 'target_alias=i386-redhat-linux-gnu' 
'CFLAGS=-D_FORTIFY_SOURCE=2 -fPIE -Os -g -pipe -fsigned-char' 'LDFLAGS=-pie'

--

New Box squid -v Output:

Squid Cache: Version 5.5
Service Name: squid

This binary uses OpenSSL 3.0.7 1 Nov 2022. For legal restrictions on 
distribution see https://www.openssl.org/source/license.html

configure options:  '--build=x86_64-redhat-linux-gnu' 
'--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--prefix=/usr' 
'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' 
'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' 
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var' 
'--sharedstatedir=/var/lib' '--mandir=/usr/share/man' 
'--infodir=/usr/share/info' '--libexecdir=/usr/lib64/squid' 
'--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' 
'--with-logdir=/var/log/squid' '--with-pidfile=/run/squid.pid' 
'--disable-dependency-tracking' '--enable-eui' 
'--enable-follow-x-forwarded-for' '--enable-auth' 
'--enable-auth-basic=DB,fake,getpwnam,LDAP,NCSA,PAM,POP3,RADIUS,SASL,SMB,SMB_LM'
 '--enable-auth-ntlm=SMB_LM,fake' '--enable-auth-digest=file,LDAP' 
'--enable-auth-negotiate=kerberos' 
'--enable-external-acl-helpers=LDAP_group,time_quota,session,unix_group,wbinfo_group,kerberos_ldap_group'
 '--enable-storeid-rewrite-helpers=file' '--enable-cache-digests' 
'--enable-cachemgr-hostname=localhost' '--enable-delay-pools' '--enable-epoll' 
'--enable-icap-client' '--enable-ident-lookups' '--enable-linux-netfilter' 
'--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl' 
'--enable-ssl-crtd' '--enable-storeio=aufs,diskd,ufs,rock' '--enable-diskio' 
'--enable-wccpv2' '--enable-esi' '--enable-ecap' '--with-aio' 
'--with-default-user=squid' '--with-dl' '--with-openssl' '--with-pthreads' 
'--disable-arch-native' '--disable-security-cert-validators' 
'--disable-strict-error-checking' '--with-swapdir=/var/spool/squid' 
'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu' 
'CC=gcc' 'CFLAGS=-O2 -flto=auto -ffat-lto-objects -fexceptions -g 
-grecord-gcc-switches -pipe -Wall -Werror=format-security 
-Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS 
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong 
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1  -m64 -march=x86-64-v2 
-mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection 
-fcf-protection' 'LDFLAGS=-Wl,-z,relro -Wl,--as-needed  -Wl,-z,now 
-specs=/usr/lib/rpm/redhat/redhat-hardened-ld 
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 ' 'CXX=g++' 'CXXFLAGS=-O2 
-flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall 
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS 
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong 
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1  -m64 -march=x86-64-v2 
-mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection 
-fcf-protection' 'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig' 
'LT_SYS_LIBRARY_PATH=/usr/lib64:'

--

-Original Message-
From: squid-users  On Behalf Of Amos 
Jeffries
Sent: Friday, May 3, 2024 4:21 PM
To: squid-users@lists.squid-cache.org
Subject: Re: [squid-users] Linux Noob - Squid Config



On 4/05/24 07:59, Piana, Josh wrote:

Hey Everyone.

I apologize in advance for any lack of formality normally shared on
mailing lists such as these, it’s my 

Re: [squid-users] Squid TCP_TUNNEL_ABORTED/200

2024-05-05 Thread Amos Jeffries

On 4/05/24 11:17, Emre Oksum wrote:

 >In this case, all your tcp_outgoing_addr lines being tested. Most of
 >them will not match.
Sorry, I'm not really a Squid guy; I was working on it due to a job that 
I took, but I cannot figure this out. What do you mean most of them do not 
match? Does it mean Squid checks every ACL one by one that is defined in 
config to find the correct IPv6 address?


Yes, exactly so.

Each tcp_outgoing_address line of squid.conf is checked top-to-bottom, 
the ACLs on that line tested left-to-right against the Squid local-IP 
the client connected to.

 Most will not match (as seen in the trace snippet you showed).
 One should match, at which point Squid uses the IP address on that 
tcp_outgoing_address line.



As mentioned earlier, this is all on *outgoing* Squid-to-server 
connections. tcp_outgoing_* directives have no effect on the client 
connection.
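
With ~50k address/ACL pairs like you describe, generating that squid.conf 
fragment mechanically avoids typos; a sketch (the fe80:abcd:: prefix and 
the count are placeholders, not your real address plan):

```shell
# Emit paired "acl ... myip" / "tcp_outgoing_address" lines for squid.conf.
emit_bindings() {
  i=1
  while [ "$i" -le "$1" ]; do
    printf 'acl binding%d myip fe80:abcd::%x\n' "$i" "$i"
    printf 'tcp_outgoing_address fe80:abcd::%x binding%d\n' "$i" "$i"
    i=$((i + 1))
  done
}

emit_bindings 3
```

The output can be written to a separate file and pulled in with 
squid.conf's include directive rather than pasted inline.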



If that's the case I still 
didn't understand why Squid randomly sends Connection Reset flag to 
client.


That is what we are trying to figure out, yes.

I asked for the cache.log trace so I could look through and see when one 
of the problematic connections was identified by Squid as closed, and 
whether that was caused by something else Squid was doing - or whether 
the signal came to Squid from the OS.
 Which would tell us whether Squid had sent it, or if the OS had sent 
it to both Squid and client.


I/we will need a full cache.log trace from before a problematic 
connection was opened, to after it fails. At least several seconds 
before and after.


Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid TCP_TUNNEL_ABORTED/200

2024-05-03 Thread Amos Jeffries

On 4/05/24 09:48, Emre Oksum wrote:

Hi Amos,
 >FTR, "debug_options ALL" alone is invalid syntax and will not change
 >from the default cache.log output

Yes, you were right! I was surely missing on that one. I changed 
debug_options ALL to debug_options ALL 5 and now, I found these warnings 
in cache.log file:




FYI, these are not warnings. They are debug traces saying what is going on.

In this case, all your tcp_outgoing_addr lines being tested. Most of 
them will not match.




Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid TCP_TUNNEL_ABORTED/200

2024-05-03 Thread Amos Jeffries

On 4/05/24 08:33, Emre Oksum wrote:

Hi Jonathan,

 >> Have you attempted to enable debugging ??
Yes, debugging was enabled but as I have pointed out, unfortunately it 
didn't give any information about the issue.
Maybe I was missing something? I don't know. debug_options was ALL in my 
squid.conf.


Sure, "ALL" sections.

But what display level:

 0 (critical only)?
 1 (important)?
 2 (protocol trace)?
 3-6 (debugs)?
 9 (raw I/O data traces)?


FTR, "debug_options ALL" alone is invalid syntax and will not change 
from the default cache.log output.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Linux Noob - Squid Config

2024-05-03 Thread Amos Jeffries

On 4/05/24 07:59, Piana, Josh wrote:

Hey Everyone.

I apologize in advance for any lack of formality normally shared on 
mailing lists such as these, it’s my first time seeking product support 
in this manner.




No need to apologize. Help and questions are most of what we do here :-)


I want to start by saying that I’m new to Linux, been using Windows 
environments my entire life. Such is the reason for me reaching out to 
you all.


I have been tasked with modernizing a Squid box and feel very 
overwhelmed, to say the least.


Current Setup:

 * CentOS 5.0
 * Squid 2.3
 * Apache 2.0.46
 * Samba 3.0.9

Desired Setup:

 * RHEL 9.2 OS
 * Needs to qualify for NTLM authentication



Hmm, does it *have* to be NTLM? that auth protocol was deprecated in 2006.



 * Would like to remove legacy apps/services
 * Continue to authenticate outgoing communication via AD

My question is, how do I get all of these services/apps to work 
together? Do I just install the newest versions of each and migrate the 
existing config files?


I was hoping for a better understanding on how all of these work 
together and exactly how to configure or edit these as needed. I’ve 
gotten as far as installing RHEL 9.2 on a fresh VM Server and trying as 
best as I can to learn the basics on Linux and just the general 
operation of a Linux ran environment. It feels like trying to ride a 
bike with box wheels.





The installation of a basic Squid service for RHEL is easy.
Just open a terminal and enter this command:

   yum install squid


The next part is going over your old Squid configuration to see how much 
of it remains necessary or can be updated. It would be useful for the 
next steps to copy it to the RHEL machine as /etc/squid/squid.conf.old .


You can likely find it on the CentOS machine at /etc/squid/squid.conf or 
/usr/share/squid/etc/squid.conf depending on how that Squid was built.



If you are able to paste the contents of that file (without the '#' 
comment or empty lines) here, we can assist with getting the new Squid 
doing the same or equivalent actions.



Also please paste the output of "squid -v" run on both the old CentOS 
machine and on the new RHEL.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid TCP_TUNNEL_ABORTED/200

2024-05-03 Thread Amos Jeffries

On 4/05/24 02:29, Emre Oksum wrote:

Hi everyone,

I'm having an issue with Squid Cache 4.10 which I have not been able to 
fix for weeks now, and I am kinda lost at the moment. I would appreciate 
it if someone can guide me through the issue I'm having.
I need to create a IPv6 HTTP proxy which should match the entry address 
to outgoing TCP address. For example, if user is connecting from 
fe80:abcd::1 it should exit the HTTP proxy from the same address. We got 
like 50k addresses like this at the moment.


What your "for example,..." describes is Transparent Proxy (TPROXY).


However, what you have in the config below is very different. The IP the 
client is connected **to** (not "from") is being pinned on outgoing 
connections.



The issue is, client connecting to the proxy is receiving "EOF" or 
"FLOW_CONTROL_ERROR" on their side.


The FLOW_CONTROL_ERROR is not something produced by Squid. Likely it 
comes from the TCP stack and/or OS routing system.


The EOF may be coming from either Squid or the OS. It also may be 
perfectly normal for the circumstances, or a side effect of an error 
elsewhere.



To solve will require identifying exactly what is sending those signals, 
and why. Since they are signals going to the client, focus on the 
client->Squid connections (not the Squid->server ones you talk about 
testing below).




When I test connection by connecting 
to whatismyip.com  everything works fine and 
entry IP always matches with outgoing IP for each of the 50k addresses. 
Client tells me this problem occurs both at GET and POST requests with 
around 10 MB of data.


Well, you are trying to manually force certain flow patterns that 
prohibit or break some major HTTP performance features. Some problems 
are to be expected.


The issues which I expect to occur in your proxy would not show up in a 
trivial outgoing-IP or connectivity test.



I initially thought that could be related to server resources being 
drained, but upon inspecting server resource usage, Squid isn't even 
topping 100% CPU or RAM at any time, so it's not that.




IMO, "FLOW_CONTROL_ERROR" is likely related to quantity of traffic 
flooding through the proxy to specific origin servers.


The concept you are implementing of the outgoing TCP connection having 
the same IP as the incoming connection reduces the available TCP sockets 
by 25%. Prohibiting the OS from allocating ports on otherwise unused 
outgoing addresses when





My Squid.conf is like this at the moment:


Some improvements highlighted inline below.
Nothing stands out to me as being related to your issues.



auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
acl auth_users proxy_auth REQUIRED
http_access allow auth_users
http_access deny !auth_users


Above two lines are backwards. Deny first, then allow.
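
That is, the corrected ordering would read:

```
http_access deny !auth_users
http_access allow auth_users
```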



cache deny all
dns_nameservers 
dns_v4_first off
via off
forwarded_for delete
follow_x_forwarded_for deny all
server_persistent_connections off


*If* the issue turns out to be congestion on Squid->server connections
enabling this might be worthwhile. Otherwise it should be fine.



max_filedesc 1048576


You can remove that line. "max_filedesc" was a RedHat hack from 20+ 
years ago when the feature was experimental.


Any value you set on the line above, will be erased and replaced by the 
line below:




max_filedescriptors 1048576
workers 8
http_port [::0]:1182


Above is just a complicated way to write:

 http_port 1182


Any particular reason not to use the registered port 3128 ?
(Not important, just wondering.)



acl binding1 myip fe80:abcd::1
tcp_outgoing_address fe80:abcd::1 binding1
acl binding2 myip fe80:abcd::2
tcp_outgoing_address fe80:abcd::2 binding2
acl binding3 myip fe80:abcd::3
tcp_outgoing_address fe80:abcd::3 binding3
...
...
...
access_log /var/log/squid/access.log squid



cache_store_log none


You can erase this line.
This is default setting. No need to manually set it.



cache deny all


You can erase this line.
This "cache deny all" exists earlier in the config.




I've tried to get a PCAP file and realized that when a client tries to 
connect with a new IPv6 address, Squid does not try to open a new 
connection but instead tries to resume a previously opened one on a 
different outgoing IPv6 address.


Can you provide the trace demonstrating that issue?

Although, as noted earlier your problems are apparently on the client 
connections. This is about server connections behaviour.



I set server_persistent_connections off which should have 
disabled this behavior but it's still the same.


Nod. Yes that should forbid re-use of connections.

I/we will need to see the PCAP trace along with a cache.log generated 
using "debug_options ALL,6" to confirm a bug or identify other breakage 
though.




I tried using a newer 
version of Squid but it behaved differently and did not follow my 
outgoing address specifications and kept connecting on IPv4.


That would seem to indicate that your IPv4 connectivity is better than 
your

Re: [squid-users] Best way to utilize time constraints with squid?

2024-05-01 Thread Amos Jeffries

Hi Jonathan,

There may be some misunderstanding of what I wrote earlier..

 "time" is just a check of the machine clock. When ACLs are checked it 
is always expected to work.



The problem I was referring to was that ssl_bump and https_access ACLs 
are *not* checked for already active connections. Only for new 
connections as they are setup.


For example; CONNECT tunnel and/or HTTPS connections might start on 
Monday and stay open and used until Friday.



HTH
Amos



On 30/04/24 04:54, Jonathan Lee wrote:

Squid -k parse also does not fail with use of the time ACL
Sent from my iPhone


On Apr 27, 2024, at 07:49, Jonathan Lee  wrote:

The time constraints for termination do appear to lock out all new connections 
until that timeframe has elapsed. My devices have connection errors during this 
duration.

Just to confirm, ssl_bump cannot be used with time? Because my connections 
don't work during the timeframe, so that is a plus.


Sent from my iPhone


On Apr 27, 2024, at 00:41, Amos Jeffries  wrote:

On 26/04/24 17:15, Jonathan Lee wrote:
acl block_hours time 01:30-05:00
ssl_bump terminate all block_hours
http_access deny all block_hours
Is this a good way to time-lock Squid?


That depends on your criteria/definition of "good".

Be aware that http_access only checks *new* transactions. Large downloads, and 
long-running transactions such as CONNECT tunnel which start during an allowed 
time will continue running across the disallowed time(s).



To essentially terminate all connections and block http access.


The "terminate all connections" is not enforced by the `time` ACL. Once a 
transaction is allowed to start, it can continue until completion - be 
that milliseconds or days later.


HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Container Based Issues Lock Down Password and Terminate SSL

2024-04-27 Thread Amos Jeffries

On 24/04/24 17:27, Jonathan Lee wrote:

Hello fellow Squid users, I wanted to ask a quick question about 
termination: would http_access for the cache still work with this type of 
setup and custom refresh patterns?

I think it would terminate all but the clients, and if they use the cache 
it would be OK.



These things are sequential, but otherwise not directly related.

SSL-Bump is about TLS handshake opening a connection from a client.

The "ssl_bump splice" action allows the client connection to go through 
Squid in the form of a blind tunnel. Caching (and thus refresh of cached 
objects) is not applicable to tunneled traffic.



The "ssl_bump terminate" action closes the client connection 
immediately. It should be obvious that nothing can be done in that 
connection once it is closed. HTTP(S) and/or caching are irrelevant - 
they can never happen on a terminated connection.





But I think an invasive container would be blocked, which is my goal here.

acl markBumped annotate_client bumped=true
acl active_use annotate_client active=true
acl bump_only src 192.168.1.3 #webtv
acl bump_only src 192.168.1.4 #toshiba
acl bump_only src 192.168.1.5 #imac
acl bump_only src 192.168.1.9 #macbook
acl bump_only src 192.168.1.13 #dell

acl bump_only_mac arp macaddresshere
acl bump_only_mac arp macaddresshere
acl bump_only_mac arp macaddresshere
acl bump_only_mac arp macaddresshere
acl bump_only_mac arp macaddresshere

ssl_bump peek step1
miss_access deny no_miss active_use
ssl_bump splice https_login active_use
ssl_bump splice splice_only_mac splice_only active_use
ssl_bump splice NoBumpDNS active_use
ssl_bump splice NoSSLIntercept active_use
ssl_bump bump bump_only_mac bump_only active_use
acl activated note active_use true
ssl_bump terminate !activated



___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] enctype aes256-cts found in keytab but cannot decrypt ticket

2024-04-27 Thread Amos Jeffries

On 24/04/24 17:31, ivc chgaki wrote:
hello. i hve Samba DC and squid. i created user, then SPN, and then 
exported keytab and imported him to squid. im using kerberos negotiate 
helper but when i try go to internet i have popup window with 
login/password and in cace.log log error



2024/04/21 21:41:58 kid1| ERROR: Negotiate Authentication validating 
user. Result: {result=BH, notes={message: gss_accept_sec_context() 
failed: Unspecified GSS failure.  Minor code may provide more 
information. Request ticket server 
HTTP/srv-proxy.mydomain@myadomain.com kvno 2 enctype aes256-cts 
found in keytab but cannot decrypt ticket; }}








HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] tls_key_log

2024-04-27 Thread Amos Jeffries

On 25/04/24 19:57, Andrey K wrote:

Hello,

Does squid 6.9 allow you to log TLS 1.3 keys so that you can then 
decrypt traffic using Wireshark?
I found that there was an issue earlier with using tls_key_log to 
decrypt TLS 1.3: 
https://lists.squid-cache.org/pipermail/squid-users/2022-January/024424.html 


I tried using tls_key_log on Squid 6.9 to decrypt TLS 1.3, but 
without success.


You answer your own question here.



Is work on TLS 1.3 logging support still ongoing?



Not specifically. As I understand it logging is not the issue - Squid 
cannot log something it cannot see. TLS support has quieted down in 
recent times, but not stopped.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Best way to utilize time constraints with squid?

2024-04-27 Thread Amos Jeffries

On 26/04/24 17:15, Jonathan Lee wrote:


acl block_hours time 01:30-05:00
ssl_bump terminate all block_hours
http_access deny all block_hours

Is this a good way to time-lock Squid?



That depends on your criteria/definition of "good".

Be aware that http_access only checks *new* transactions. Large 
downloads, and long-running transactions such as CONNECT tunnel which 
start during an allowed time will continue running across the disallowed 
time(s).





To essentially terminate all connections and block http access.



The "terminate all connections" is not enforced by the `time` ACL. Once a 
transaction is allowed to start, it can continue until completion - be 
that milliseconds or days later.



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Container Based Issues Lock Down Password and Terminate SSL

2024-04-23 Thread Amos Jeffries

On 23/04/24 11:52, Jonathan Lee wrote:

Hello fellow Squid Accelerator/Dynamic Cache/Web Cache Users/PfSense users

I think this might resolve any container-based issues/fears if they 
happened to get into the cache, i.e. a Docker proxy got installed and 
tried to marshal data off the network card inside of a FreeBSD jail, or 
something like that. My biggest fear is with my cache; it is a big cache now.


Please let me know what you think or if it is wrong.

Here is my configuration. I wanted to share it as it might help to 
secure some of this.


FTR, this config was auto-generated by pfsense. A number of things which 
that tool forces into the config could be done much better in the latest 
Squid, but the tool does not do due to needing to support older Squid 
version.





Keep in mind I use cachemgr.cgi within Squidlight so I had to set the 
password and I have to also adapt the php status file to include the 
password and also the sqlight php file.


After that, the status and GUI pages still work with the new password. 
The only issue is that it shows up in clear text when it goes over the 
proxy; I can see my password clear as day. That was an issue listed 
inside the Squid O'Reilly book also.



Please ensure you are using the latest Squid v6 release. That release 
has both a number of security fixes, and working https:// URL access to 
the manager reports.


The cachemgr.cgi tool is deprecated for a number of issues including 
that style of embedding passwords in the URLs.


Francesco and I have created a tool that can be found at 
 for basic 
access to the reports directly from Browser.
That tool uses HTTP authentication configured via the well-documented 
proxy_auth ACLs and http_access for more secure access than the old URL 
based mechanism (which still exists, just deprecated).
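
For illustration, a minimal sketch of that style of manager protection 
(the passwd file path and ACL name are placeholders; the `manager` ACL is 
built in to modern Squid):

```
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/mgr_passwd
acl mgr_users proxy_auth REQUIRED
http_access allow localhost manager mgr_users
http_access deny manager
```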




Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Warm cold times

2024-04-23 Thread Amos Jeffries

On 22/04/24 17:42, Jonathan Lee wrote:

Has anyone else taken up the fun challenge of doing windows update caching. It 
is amazing when it works right. It is a complex configuration, but it is worth 
it to see a warm download come down that originally took 30 mins instantly to a 
second client. I didn’t know how much of the updates are the same across 
different vendor laptops.



There have been several people over the years.
The collected information is being gathered at 



If you would like to check and update the information for the current 
Windows 11 and Squid 6, etc. that would be useful.


Wiki updates are now made using github PRs against the repository at 
.






Amazing stuff Squid team.
I wish I could get some of the Roblox Xbox stuff to cache but it's a nightmare 
to get running with Squid in the first place; I had to splice a bunch of stuff 
and also WPAD the Xbox system.


FWIW, what I have seen from routing perspective is that Roblox likes to 
use custom ports and P2P connections for a lot of things. So no high 
expectations there, but anything cacheable is great news.





On Apr 18, 2024, at 23:55, Jonathan Lee wrote:

Does anyone know the current warm cold download times for dynamic cache of 
windows updates?

I can say my experience was a massive improvement on the warm download: it was 
delivered in under a couple of minutes versus 30 or so to download it cold. The 
warm download was almost instant on the second device. Very green and energy 
efficient.


Does squid 5.8 or 6 work better on warm delivery?


There are no significant differences AFAIK. They both come down to what 
you have configured. That said, the ongoing improvements may make v6 
somewhat "better", even if only trivially.





Is there a way to make 100 percent sure a docker container can’t get inside the 
cache?


For Windows I would expect the only "100% sure" way is to completely 
forbid access to the disk where the cache is stored.



The rest of your questions are about container management and Windows 
configuration, which are kind of off-topic here.



Cheers
Amos


Re: [squid-users] SQUID_TLS_ERR_ACCEPT+TLS_LIB_ERR=A000417+TLS_IO_ERR

2024-04-11 Thread Amos Jeffries

On 11/04/24 08:22, Jonathan Lee wrote:

Could it be related to this ??

"WARNING: Failed to decode EC parameters '/etc/dh-parameters.2048'. 
error:1E08010C:DECODER routines::unsupported”




That would certainly make Squid unable to use EC (Elliptic Curve) ciphers.


Unfortunately OpenSSL is not verbose enough to explain the actual 
problem in an easily understood way.



HTH
Amos


Re: [squid-users] Squid as a http/https transparent web proxy in 2024.... do I still have to build from source?

2024-04-11 Thread Amos Jeffries

On 11/04/24 21:55, PinPin Poola wrote:
I don't care which Linux distro tbh; but would prefer Ubuntu as I have 
most familiarity with it.




Latest Ubuntu provide the "squid-openssl" package, which contains the 
SSL-Bump and other OpenSSL-exclusive features.


Just install that package as you would the "squid" package. It can also 
be installed as a drop-in upgrade for the "squid" package.


One thing to be aware of in both cases is that the SELinux security 
system does not, by default, allow Squid to access the /etc/ssl/* config 
area. So you may need to allow that, depending on what your desired 
TLS/SSL settings in squid.conf are.




Cheers
Amos


Re: [squid-users] squidclient -h 127.0.0.1 -p 3128 mgr:info shows access denined

2024-04-06 Thread Amos Jeffries

On 6/04/24 18:48, Jonathan Lee wrote:

Correction: I can't access it from the loopback


From the config in the other "Squid cache questions" thread you are 
only intercepting traffic on the loopback 127.0.0.1:3128 port. You 
cannot access it directly on "localhost".


You do have direct proxy (and thus manager) access via the 
192.168.1.1:3128 so this URL should work:

  http://192.168.1.1:3128/squid-internal-mgr/menu


.. or substitute the raw-IP for the visible_hostname setting **if** that 
hostname actually resolves to that IP.



HTH
Amos


Re: [squid-users] Squid cache questions

2024-04-06 Thread Amos Jeffries


On 6/04/24 11:34, Jonathan Lee wrote:
if (empty($settings['sslproxy_compatibility_mode']) || 
($settings['sslproxy_compatibility_mode'] == 'modern')) {

// Modern cipher suites
$sslproxy_cipher = 
"EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!SHA1:!MD5:!EXP:!PSK:!SRP:!DSS";

$sslproxy_options .= ",NO_TLSv1";
} else {
$sslproxy_cipher = 
"EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS";

}

Should the RC4  be removed or allowed?

https://github.com/pfsense/FreeBSD-ports/pull/1365 






AFAIK it should be removed. What I was intending to point out was that 
its removal via "!RC4" likely makes the prior "EECDH+aRSA+RC4" 
addition pointless. Sorry if that was not clear.


If you check the TLS handshake and find Squid is working fine without 
advertising "EECDH+aRSA+RC4" it would be a bit simpler/easier to read 
the config by removing that cipher and just relying on the "!RC4".
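
One quick way to check that is to expand the cipher string with the 
openssl CLI (a sketch; the exact suite list depends on your OpenSSL 
build, and the string here is a shortened stand-in for the full pfSense 
one):

```shell
# Expand the cipher string to the suites OpenSSL will actually offer.
# Because "!RC4" removes every RC4 suite, the earlier "EECDH+aRSA+RC4"
# element contributes nothing to the final list.
openssl ciphers -v 'EECDH+aRSA+AESGCM:EECDH+aRSA+RC4:!RC4'
```

If the output is identical with and without the "EECDH+aRSA+RC4" 
element, that element can be dropped safely.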



HTH
Amos


Re: [squid-users] Squid cache questions

2024-04-06 Thread Amos Jeffries

On 5/04/24 17:25, Jonathan Lee wrote:

ssl_bump splice https_login
ssl_bump splice splice_only
ssl_bump splice NoSSLIntercept
ssl_bump bump bump_only markBumped
ssl_bump stare all
acl markedBumped note bumped true
url_rewrite_access deny markedBumped


for good hits should the url_rewirte_access deny be splice not bumped 
connections ?


I feel I mixed this up



Depends on what the re-write program is doing.

Ideally no traffic should be re-written by your proxy at all. Every 
change you make to the protocol(s) as they go through adds problems to 
traffic behaviour.


Since you have squidguard..
 * if it only does ACL checks, that is fine. But ideally those checks 
would be done by http_access rules instead.
 * if it is actually changing URLs, that is where the problems start 
and caching is risky.


If you are re-writing URLs just to improve caching, I recommend using 
Store-ID feature instead for those URLs. It does a better job of 
balancing the caching risk vs ratio gains, even though outwardly it can 
appear to have less HITs.
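
As a rough sketch of the Store-ID approach (the helper path, database 
location, and example domain are assumptions; storeid_file_rewrite is 
the helper bundled with Squid):

  # normalize equivalent CDN mirror URLs to one cache key
  # instead of rewriting the URLs the client actually fetches
  store_id_program /usr/lib/squid/storeid_file_rewrite /etc/squid/storeid.db
  store_id_children 5 startup=1
  acl storeid_domains dstdomain .cdn.example
  store_id_access allow storeid_domains
  store_id_access deny all

where storeid.db holds tab-separated "regex<TAB>store-id" lines mapping 
mirror URL patterns onto a single internal ID.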



HTH
Amos


Re: [squid-users] Squid cache questions

2024-04-04 Thread Amos Jeffries

On 4/04/24 17:48, Jonathan Lee wrote:

Is there any particular order to squid configuration??



Yes. 



Does this look correct?



Best way to find out is to run "squid -k parse", which should be done 
after upgrades as well to identify and fix changes between versions as 
we improve the output.



I actually get a lot of hits and it functions amazingly, so I wanted to 
share this in case I could improve something. Are there any issues with 
security?


Yes, the obvious one is "DONT_VERIFY_PEER" disabling TLS security 
entirely on outbound connections. That particular option will prevent 
you even being told about suspicious activity regarding TLS.


Also there are a few weird things in your TLS cipher settings, such as 
this sequence "  EECDH+aRSA+RC4:...:!RC4 "
 Which, as I understand it, enables the EECDH with RC4 cipher, but also 
forbids all uses of RC4.



I am concerned that an invasive container could be 
installed in the cache and data-marshal the network card.




You have a limit of 4 MB for objects allowed to pass through this proxy, 
exception being objects from domains listed in the "windowsupdate" ACL 
(not all Windows related) which are allowed up to 512 MB.


For the general case, any type of file which can store an image of some 
system is a risk for that type of vulnerability can be cached.


The place to fix that vulnerability properly is not the cache or Squid. 
It is the OS permissions allowing non-Squid software access to the cache 
files and/or directory.
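
A minimal sketch of that permissions fix (the pfSense cache path and 
the "squid" account are assumptions from this config; run as root on a 
real system):

```shell
# Sketch: lock a Squid cache directory down so only its owning user
# (normally "squid") can read it -- other accounts, including anything
# a container maps in, lose all access.
harden_cache_dir() {
    dir="$1"
    # owner keeps read/write (+search on directories); group/other lose all
    chmod -R u=rwX,go-rwx "$dir"
}

# example with the pfSense paths used in this config (an assumption):
#   chown -R squid:proxy /var/squid/cache
#   harden_cache_dir /var/squid/cache
```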





Here is my config

# This file is automatically generated by pfSense
# Do not edit manually !


Since this file is generated by pfSense there is little that can be done 
about ordering issues, and it is very hard to tell which of the problems 
below are due to pfSense and which due to your settings.


FWIW, there are no major issues, just some lines that are unnecessary 
because they set things to their default values, and some blocks denying 
things that were already blocked earlier.





http_port 192.168.1.1:3128 ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 
options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE

http_port 127.0.0.1:3128 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 
options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE

https_port 127.0.0.1:3129 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=20MB cert=/usr/local/etc/squid/serverkey.pem 
cafile=/usr/local/share/certs/ca-root-nss.crt capath=/usr/local/share/certs/ 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
 tls-dh=prime256v1:/etc/dh-parameters.2048 
options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE

icp_port 0
digest_generation off
dns_v4_first on
pid_filename /var/run/squid/squid.pid
cache_effective_user squid
cache_effective_group proxy
error_default_language en
icon_directory /usr/local/etc/squid/icons
visible_hostname 
cache_mgr 
access_log /var/squid/logs/access.log
cache_log /var/squid/logs/cache.log
cache_store_log none
netdb_filename /var/squid/logs/netdb.state
pinger_enable on
pinger_program /usr/local/libexec/squid/pinger
sslcrtd_program /usr/local/libexec/squid/security_file_certgen -s 
/var/squid/lib/ssl_db -M 4MB -b 2048
tls_outgoing_options cafile=/usr/local/share/certs/ca-root-nss.crt
tls_outgoing_options capath=/usr/local/share/certs/
tls_outgoing_options options=NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE
tls_outgoing_options 
cipher=EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:HIGH:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
tls_outgoing_options flags=DONT_VERIFY_PEER
sslcrtd_children 10

logfile_rotate 0
debug_options rotate=0
shutdown_lifetime 3 seconds
# Allow local network(s) on interface(s)
acl localnet src  192.168.1.0/27
forwarded_for transparent
httpd_suppress_version_string on
uri_whitespace strip

acl getmethod method GET

acl windowsupdate dstdomain windowsupdate.microsoft.com
acl windowsupdate dstdomain .u

Re: [squid-users] Chrome auto-HTTPS-upgrade - not falling to http

2024-04-03 Thread Amos Jeffries
There is no way to configure around this. The error produced by Squid is 
a hard-coded reaction to TLS level errors in the SSL-Bump process.


Squid needs some significant code redesign to do a better job of 
handling the situation. Which I understand is already underway, but 
still some way off usable.



Amos


Re: [squid-users] BWS after chunk-size

2024-04-03 Thread Amos Jeffries

On 2/04/24 16:03, root wrote:

Hi Team,

after an upgrade from squid 5.4.1 to squid 5.9, unable to parse HTTP 
chunked response containing whitespace after chunk size. >
I think the following bugs were fixed and worked fine in squid 5.9 and 
earlier.
https://bugs.squid-cache.org/show_bug.cgi?id=4492 





There was no bug. We caved to user pressure and relaxed the protocol 
validation to tolerate and "fix" known-bad syntax. That change is what 
opened the security issue...



However, after the fix for SQUID2023:1 in 5.9, it seems that it does not 
work properly.





Indeed. That particular broken syntax is being intentionally rejected as 
a security attack.



I could be wrong, but can you please advise me if there is a way or 
patch to fix this issue.




You need to fix or stop using the software which is adding BWS (bad 
whitespace) to the protocol syntax.



Amos


Re: [squid-users] GCC optimizer is provably junk. Here is the evidence.

2024-03-24 Thread Amos Jeffries

This inflammatory post is not relevant to Squid.

Please do not followup to this thread.


Cheers
Amos Jeffries
The Squid Software Foundation


Re: [squid-users] After upgrade from squid6.6 to 6.8 we have a lot of ICAP_ERR_OTHER and ICAP_ERR_GONE messages in icap logfiles

2024-03-13 Thread Amos Jeffries



On 12/03/24 04:31, Dieter Bloms wrote:

Hello,

after an upgrade from squid6.6 to squid6.8 on a debian bookworm we have a lot
of messages from type:

ICAP_ERR_GONE/000
ICAP_ERR_OTHER/200
ICAP_ERR_OTHER/408
ICAP_ERR_OTHER/204

and some of our users complain about bad performance and some get "empty
pages".
Unfortunately it is not deterministic, the page will appear the next
time it is called up. I can't see anything conspicuous in the cache.log.



Hmm, there was 
 
changing message I/O in particular. The behavioural changes from that 
might have impacted ICAP in some unexpected way.


Also, if you are using SSL-Bump to enable virus scanning then 
 
might also be having effects.


HTH
Amos


Re: [squid-users] Manipulating request headers

2024-03-11 Thread Amos Jeffries

On 12/03/24 04:00, Ben Goz wrote:

By the help of God.

Hi all,
I'm using squid with ssl-bump I want to remove br encoding for request 
header Accept-Encoding

currently I'm doing it using the following configuration:
request_header_access Accept-Encoding deny all
request_header_add Accept-Encoding gzip,deflate

Is there a more gentle way of doing it?


You could use q-value to prohibit it instead.


Replace both the above lines with just this one:

 request_header_add Accept-Encoding br;q=0


HTH
Amos


Re: [squid-users] Squid Proxy timing out 500/503 errors

2024-03-05 Thread Amos Jeffries

On 6/03/24 07:23, M, Anitha (CSS) wrote:

Hi team,

We are using squid service deployed as a KVM VM on SLES 15 Sp5 os image.

We are using squid. Rpm: *squid-5.7-150400.3.20.1.x86_64*

**

We are seeing too many 503 errors with this version of squid.

This is the squid configuration file. Pls review it and let us know if 
issues.




It appears that your configuration file consists of at least 2 different 
configuration files appended to each other.


Please start by running "squid -k parse" and fixing all the warnings it 
should produce.



We are performing squid scale testing, where every secs there will be 
200+requests reaching the squid and squid is spitting out 500/503 errors.




FYI: you have restricted Squid to no more than 3200 file descriptors. 
That is rather low. I recommend at least 64K.
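
For example (squid.conf directive plus the matching OS limit; the 
values are illustrative):

  # squid.conf
  max_filedescriptors 65536

  # and make sure the OS allows it, e.g. in the service unit
  # or the launching shell:
  #   ulimit -n 65536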




Squid.conf:

gl-pcesreblr-squidproxy03:/var/log/squid # cat /etc/squid/squid.conf
# Recommended minimum configuration:
acl localnet src 172.28.1.0/24
acl localnet src 172.28.4.0/24
acl localnet src 172.28.0.0/24
acl localnet src 172.28.0.12/32
connect_timeout 120 seconds
connect_retries 10
#debug_options ALL,5
#connect_retries_delay 5 seconds
acl localnet src 0.0.0.1-0.255.255.255  # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8 # RFC 1918 local private network 
(LAN)
acl localnet src 100.64.0.0/10  # RFC 6598 shared address space 
(CGN)
acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly 
plugged) machines

acl localnet src 172.28.11.0/24
#acl localnet src 172.16.0.0/12 # RFC 1918 local private network 
(LAN)
#acl localnet src 192.168.0.0/16    # RFC 1918 local private 
network (LAN)
#acl localnet src fc00::/7  # RFC 4193 local private network 
range
#acl localnet src fe80::/10 # RFC 4291 link-local (directly 
plugged) machines


acl blocksites url_regex "/etc/squid/blocksites"
http_access deny blocksites

debug_options ALL,7

acl SSL_ports port 443
acl SSL_ports port 8071
acl SSL_ports port 11052
acl Safe_ports port 80  # http
acl Safe_ports port 21  # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70  # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535  # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 53  # pdns
acl Safe_ports port 5300    # pdns
acl Safe_ports port 123 #NTP
acl Safe_ports port 8071
acl Safe_ports port 11052   # pdns web server
acl Safe_ports port 514 # rsyslog
acl CONNECT method CONNECT
acl SSL_ports port 8053
acl Safe_ports port 8053
acl SSL_ports port 3002
acl Safe_ports port 3002
acl SSL_ports port 3006
acl Safe_ports port 3006
acl SSL_ports port 8203
acl Safe_ports port 8203
acl SSL_ports port 8204
acl Safe_ports port 8204
acl SSL_ports port 8071
acl Safe_ports port 8071
acl Safe_ports port 8200
acl SSL_ports port 8099
acl Safe_ports port 8099
tcp_outgoing_address 20.20.30.5

#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
#http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#


Please notice what the above line says.



# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
#http_access deny all
#http_access allow all

cache_peer proxy-in.its.hpecorp.net parent 443 0 no-query no-delay default


... so a server listening for plain-text HTTP on port 443. That is a bit 
broken. At least consider enabling TLS/SSL on connections to this peer 
so Squid can send it HTTPS traffic.




#cache_peer 16.242.46.11 parent 8080 0 no-query default
#cache_peer 10.132.100.29 parent 3128 0 no-query default

acl parent_proxy src all
http_access allow parent_proxy


The above two lines are identical to:
  http_access allow all

... no http_access lines following this one will ever have any effects.


never_direct allow parent_proxy


Likewise same as:
  never_direct allow all

... however you have always_direct rules later that override this.
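
A hedged sketch of a tighter ordering that keeps the parent routing 
without opening the proxy to everyone:

  http_access allow localnet
  http_access allow localhost
  http_access deny all        # nothing else gets through
  never_direct allow all      # still force traffic via the cache_peer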



# Squid normally listens to port 3128
http_port 3128

# Leave coredumps in the first cache dir
coredump_dir /var/cache/squid

#
# Add any of your own refresh_pattern entries abov

[squid-users] [squid-announce] [ADVISORY] SQUID-2024:1 Denial of Service in HTTP Chunked Decoding

2024-03-04 Thread Amos Jeffries

__

Squid Proxy Cache Security Update Advisory SQUID-2024:1
__

Advisory ID:   | SQUID-2024:1
Date:  | Mar 4, 2024
Summary:   | Denial of Service in HTTP Chunked Decoding
Affected versions: | Squid 3.5.27 -> 3.5.28
   | Squid 4.x -> 4.17
   | Squid 5.x -> 5.9
   | Squid 6.x -> 6.7
Fixed in version:  | Squid 6.8
__

Problem Description:

 Due to an Uncontrolled Recursion bug, Squid may be vulnerable to
 a Denial of Service attack against HTTP Chunked decoder.

__

Severity:

 This problem allows a remote attacker to perform Denial of
 Service when sending a crafted chunked encoded HTTP Message.

__

Updated Packages:

This bug is fixed by Squid version 6.8.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 6:
 

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 Squid older than 3.5.27 are not vulnerable.

 All Squid 3.5.27 to 4.17 have not been tested and should be
 assumed to be vulnerable.

 All Squid-5.x up to and including 5.9 are vulnerable.

 All Squid-6.x up to and including 6.7 are vulnerable.

__

Workaround:

  **There is no workaround for this issue**
__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

 Fixed by The Measurement Factory.

__

Revision history:

 2023-10-12 11:53:02 UTC Initial Report
 2023-10-31 11:35:02 UTC Patches Released
 2024-03-04 06:27:00 UTC Fixed Version Released
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-announce


[squid-users] [squid-announce] [ADVISORY] SQUID-2024:2 Denial of Service in HTTP Header parser

2024-03-04 Thread Amos Jeffries

__

Squid Proxy Cache Security Update Advisory SQUID-2024:2
__

Advisory ID:   | SQUID-2024:2
Date:  | Feb 15, 2024
Summary:   | Denial of Service in HTTP Header parser
Affected versions: | Squid 3.x -> 3.5.28
   | Squid 4.x -> 4.17
   | Squid 5.x -> 5.9
   | Squid 6.x -> 6.4
Fixed in version:  | Squid 6.5
__

Problem Description:

 Due to a Collapse of Data into Unsafe Value bug,
 Squid may be vulnerable to a Denial of Service
 attack against HTTP header parsing.

__

Severity:

 This problem allows a remote client or a remote server to
 perform Denial of Service when sending oversized headers in
 HTTP messages.

 In versions of Squid prior to 6.5 this can be achieved if the
 request_header_max_size or reply_header_max_size settings are
 unchanged from the default.

 In Squid version 6.5 and later, the default setting of these
 parameters is safe. Squid will emit a critical warning in
 cache.log if the administrator is setting these parameters to
 unsafe values. Squid will not at this time prevent these settings
 from being changed to unsafe values.

__

Updated Packages:

Hardening against this issue is added to Squid version 6.5.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 6:
 

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 Run the following command to identify how (and whether)
 your Squid has been configured with relevant settings:

squid -k parse 2>&1 | grep header_max_size

 All Squid-3.0 up to and including 6.4 without header_max_size
 settings are vulnerable.

 All Squid-3.0 up to and including 6.4 with either header_max_size
 setting over 21 KB are vulnerable.

 All Squid-3.0 up to and including 6.4 with both header_max_size
 settings below 21 KB are not vulnerable.

 All Squid-6.5 and later without header_max_size configured
 are not vulnerable.

 All Squid-6.5 and later configured with both header_max_size
 settings below 64 KB are not vulnerable.

 All Squid-6.5 and later configured with either header_max_size
 setting over 64 KB are vulnerable.

__

Workaround:

For Squid older than 6.5, add to squid.conf:

  request_header_max_size 21 KB
  reply_header_max_size 21 KB


For Squid 6.5 and later, remove request_header_max_size
 and reply_header_max_size from squid.conf

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

 Fixed by The Measurement Factory.

__

Revision history:

 2023-10-12 11:53:02 UTC Initial Report
 2023-10-25 11:47:19 UTC Patches Released
__
END


[squid-users] [squid-announce] [ADVISORY] SQUID-2023:11 Denial of Service in Cache Manager

2024-03-04 Thread Amos Jeffries

__

  Squid Proxy Cache Security Update Advisory SQUID-2023:11
__

Advisory ID:   | SQUID-2023:11
Date:  | Jan 24, 2024
Summary:   | Denial of Service in Cache Manager
Affected versions: | Squid 2.x -> 2.7.STABLE9
   | Squid 3.x -> 3.5.28
   | Squid 4.x -> 4.17
   | Squid 5.x -> 5.9
   | Squid 6.x -> 6.5
Fixed in version:  | Squid 6.6
__

Problem Description:

 Due to a hanging pointer reference bug Squid is vulnerable to a
 Denial of Service attack against Cache Manager error responses.

__

Severity:

 This problem allows a trusted client to perform Denial of Service
 when generating error pages for Client Manager reports.

__

Updated Packages:

  This bug is fixed by Squid version 6.6.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 5:
 

Squid 6:
 

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 Squid older than 5.0.5 have not been tested and should be assumed
 to be vulnerable.

 All Squid-5.x up to and including 5.9 are vulnerable.

 All Squid-6.x up to and including 6.5 are vulnerable.

__

Workaround:

 Prevent access to Cache Manager using Squid's main access
 control:

  http_access deny manager

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

 Fixed by The Measurement Factory.

__

Revision history:

 2023-10-12 11:53:02 UTC Initial Report
 2023-11-12 09:33:20 UTC Patches Released
__
END


[squid-users] [squid-announce] [ADVISORY] SQUID-2023:10 Denial of Service in HTTP Request parsing

2024-03-04 Thread Amos Jeffries

__

  Squid Proxy Cache Security Update Advisory SQUID-2023:10
__

Advisory ID:   | SQUID-2023:10
Date:  | Dec 10, 2023
Summary:   | Denial of Service in HTTP Request parsing
Affected versions: | Squid 2.6 -> 2.7.STABLE9
   | Squid 3.1 -> 3.5.28
   | Squid 4.x -> 4.17
   | Squid 5.x -> 5.9
   | Squid 6.x -> 6.5
Fixed in version:  | Squid 6.6
__

Problem Description:

 Due to an Uncontrolled Recursion bug, Squid may be vulnerable to a
 Denial of Service attack against HTTP Request parsing.

__

Severity:

This problem allows a remote client to perform Denial of Service attack
by sending a large X-Forwarded-For header when the
follow_x_forwarded_for feature is configured.

__

Updated Packages:

This bug is fixed by Squid version 6.6.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 5:
 

Squid 6:
 

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 To check for follow_x_forwarded_for run the following command:

  `squid -k parse 2>&1 |grep follow_x_forwarded_for`


 All Squid configured without follow_x_forwarded_for are not
 vulnerable.

 All Squid older than 5.0.5 have not been tested and should be
 assumed to be vulnerable when configured with
 follow_x_forwarded_for.

 All Squid-5.x up to and including 5.9 are vulnerable when
 configured with follow_x_forwarded_for.

 All Squid-6.x up to and including 6.5 are vulnerable when
 configured with follow_x_forwarded_for.

__

Workaround:

 Remove all follow_x_forwarded_for lines from squid.conf

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

 Fixed by Thomas Leroy of the SUSE security team.

__

Revision history:

 2023-10-12 11:53:02 UTC Initial Report
 2023-11-28 07:35:46 UTC Patches Released
__
END


Re: [squid-users] Missing IPv6 sockets in Squid 6.7 in some servers

2024-03-04 Thread Amos Jeffries

On 5/03/24 08:03, Dragos Pacher wrote:

Hello,

I am a Squid beginner and we would like to use Squid inside our 
organization only as an HTTPS traffic inspection/logging tool for some 
3rd party apps that we bought, something close to what a "MITM proxy" is 
called, but we will not do that; instead we use a self-signed certificate 
and the 3rd party app owners know this. Everything is 100% completely 
legal. (Ps: I am the IT lead).



FYI: "MITM proxy" is a ridiculous term. "MITM" means "intermediary" in 
security terminology, "proxy" means "intermediary" in networking 
terminology.

 So that term just means "intermediary intermediary", yeah.



Any serious HTTPS inspection/logging by Squid needs some form of 
SSL-Bump configuration and those 3rd-party Apps MUST be configured with 
trust for the self-signed root CA you are using.



Without that nothing Squid (or any other proxy) does will allow traffic 
inspection beyond the initial TLS handshake.




Assuming that you have checked that detail, on to your issue ...


We will be using Squid only internally, no outside access. Here is my 
issue with the current knowledge of Squid: POC running well on 3 servers 
but on the 4th I get no IPv6

sockets:
ubuntu@A2-3:/$ sudo netstat -patun | grep squid | grep tcp
tcp        0      0 10.10.0.16:3128         0.0.0.0:*   
LISTEN      2891391/(squid-1)



Your problem is the http(s)_port "port" configuration parameter.


This Squid is configured to listen like:

  http_port 10.10.0.16:3128

or

  http_port example.com:3128

(when example.com has only address 10.10.0.16)


The "http_port" receives port 80 syntax traffic, it may also be
"https_port" which receives port 443 syntax traffic.




and on the other 3 I have IPv6:
ubuntu@A2-2:/$ sudo netstat -patun | grep squid | grep tcp
tcp        0      0 x.x.x.x:52386    x.x.x.x:443     ESTABLISHED 
997651/(squid-1)
tcp6       0      0 :::3128                 :::*   
  LISTEN      997651/(squid-1)



These Squid are configured to listen like:

 http_port 3128


Ensure that the machine/server the 4th Squid is running on has its 
http(s)_port line matching the other three machines port value.


At this point do not care about the "mode" or options later in the line. 
Your issue is solely the "port" parameter.
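As a sketch of the difference (reusing the port and address values from this thread; adjust to your own setup):

# Wildcard form: listens on all local addresses, IPv4 and IPv6
# (netstat shows a "tcp6 ... :::3128 ... LISTEN" socket):
http_port 3128

# IP-literal form: listens only on that one IPv4 address
# (no tcp6 socket will appear):
http_port 10.10.0.16:3128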



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] ICAP response to avoid backend

2024-02-26 Thread Amos Jeffries

On 26/02/24 06:52, Ed wrote:

On 2024-02-24 17:26+, Ed wrote:

In varnish land this is doable in the vcl_miss hook, but I don't know
how to do that in squid.


I think I found a way, but maybe there's a better method - I'd like
the cache_peer_access to apply to all backends, but this does seem to do
what I was after:

   acl bad_foo req_header ICAPHEADER -i foobar
   cache_peer_access server_1 deny bad_foo



Assuming that an ICAP service is controlling whether the peers are to be 
used that is the correct way.


However, if you have an ICAP service controlling whether a peer can be 
used consider having the ICAP service just send Squid the final 
response. There is a relatively huge amount of complexity, both in the 
config and what Squid has to do slowing the transaction down just for 
this maybe-a-HIT behaviour.



Alternatives to "cache_peer_access .. deny bad_foo" are:

A) "always_direct allow bad_foo",
  If you want the request to be served, but using servers from a DNS 
lookup instead of the configured cache_peer.


B) "miss_access deny bad_foo",
  If you do not want the cache MISS to be answered at all.


It has been a while since I tested it, but IIRC with miss_access a 
"deny_info" line may be used to change the default 403 error status into 
another in the 200-599 status range. Which includes redirects, 
retry-after, empty responses, and template pages responses ... whichever 
suits your need best.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Can't verify the signature of squid-6.7.tar.gz

2024-02-26 Thread Amos Jeffries

Excellent news.

Thank you for the feedback on the solution.


Cheers
Amos

On 22/02/24 10:14, Miha Miha wrote:

Hi Amos,

It took me some time to check and verify.
I'm posting my findings here just to complete the thread.

Regarding this one:


On 8/02/24 02:19, Miha Miha wrote:

Hi Francesco,

I still get an issue, although a slightly different one:

#gpg --verify squid-6.7.tar.gz.asc squid-6.7.tar.gz
gpg: Signature made Tue 06 Feb 2024 10:51:28 PM EET using ? key ID FEF6E865
gpg: Can't check signature: Invalid public key algorithm


On Thu, Feb 8, 2024 at 7:58 AM Amos Jeffries  wrote:

The error mentions algorithm, so also check the ciphers/algorithms
supported by your GPG agent. The new key uses the EDDSA cipher instead
of typical RSA.


Indeed, the problem is with my gpg agent - gpg (GnuPG) 2.0.22 which
doesn't support EDDSA. My system is CentOS7 and it sticks to GnuPG
2.0.x

I did another test on Amazon Linux with gpg (GnuPG) 2.3.7 (it supports
EDDSA) and there I was able to verify the package with the given pub
key.

All questions are clarified. Thank you!

Regards,
Mihail

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Squid Segment Violation with authorization

2024-02-15 Thread Amos Jeffries

On 16/02/24 15:30, Eternal Dreamer wrote:

Hi!
When I'm trying to send a curl request with basic proxy-authorization 
credentials through my proxy, I see a Segment Violation error in my logs 
and an empty reply from the server. The command is:
curl -v --proxy-basic --proxy-user login:password --proxy http://192.168.3.19:8080 https://google.com


In squid.conf I have 3 directives:

http_access allow some_acl
http_access allow some_acl some_acl_user_auth some_special_domain 
http_all_port http_all_proto
http_access allow some_acl some_acl_user_auth some_special_domain 
CONNECT https_port


If I comment out the first one, authorization works fine and it looks good.



Authorize or Authenticate?

Different things and you are mixing them up in these rules.


But with all lines in place I can't even authorize to the special 
domains without a Segment Violation error.



The issue is likely somewhere else in what you have configured Squid to 
do. The initial "allow some_acl" line *authorizes* access, without 
*authenticating*. Resulting in there being no credentials for anything 
that Squid needs to do later.



If it helps this arrangement is clearer and does almost the same thing:

 http_access allow some_acl
 http_access deny !some_special_domain
 http_access deny !some_acl_user_auth
 http_access allow CONNECT https_port
 http_access allow http_all_port http_all_proto




I've tried to use different versions of squid from 3.5 to 7.0.
Squid before v5.0.1 ignores Proxy-Authorization header when it's not 
needed and works fine with this configuration.


___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Error files removed from 6.7

2024-02-14 Thread Amos Jeffries



On 15/02/24 05:01, Stephen Borrill wrote:
I see the translations of error messages have been removed from 6.7 
compared to 6.6 (and earlier), but I see no mention of this in the 
changelog:

https://github.com/squid-cache/squid/blob/552c2ceef220f3bbcdbedf194eae419fc791098e/ChangeLog

Was this change intentional and, if so, why isn't it documented?


No, it was not intentional; hopefully it will be fixed with the next 
release due on 3rd March.


As a workaround the files from 6.6 can be used, or the latest langpack 
available separately at 



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Anyone build Squid for on multiarch ie arm and arm64?

2024-02-13 Thread Amos Jeffries

On 13/02/24 07:22, ngtech1ltd wrote:

I have a couple of RouterOS devices which support containers with the following CPU 
arches:
• x86_64
• arm64
• armv6
• armv7

And I was wondering if someone bothered compiling squid containers for these 
arches?

I know that there are packages for Debian and Ubuntu but these are not 6.x 
squid but rather 5.x.


Debian packages of Squid are up to 6.x if you base the container on the 
"Testing" repository.


FWIW; I'm not sure if publishing built Docker containers is much use 
compared to providing the docker configuration file + extras needed to 
build the container as-needed.



Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Can't verify the signature of squid-6.7.tar.gz

2024-02-07 Thread Amos Jeffries



On 8/02/24 02:19, Miha Miha wrote:

Hi Francesco,

I still get an issue, although a slightly different one:

#gpg --verify squid-6.7.tar.gz.asc squid-6.7.tar.gz
gpg: Signature made Tue 06 Feb 2024 10:51:28 PM EET using ? key ID FEF6E865
gpg: Can't check signature: Invalid public key algorithm




The error mentions algorithm, so also check the ciphers/algorithms 
supported by your GPG agent. The new key uses the EDDSA cipher instead 
of typical RSA.





When I try to import the public keys (pgp.asc file) I see:

#gpg --import pgp.asc

...
gpg: key FEF6E865: no valid user IDs
gpg: this may be caused by a missing self-signature
...

All the rest keys have an user and e-mail.

When I list the imported pub keys with   gpg --list-keys I see
multiple keys, but not the FEF6E865

May be the pub key hasn't been properly imported?



Please check the contents of squid-6.7.tar.gz.asc. The full key ID 
should be provided there (FEF6E865 is one of its short-forms).


If you have any doubts about the keyring (pgp.asc file), you can try to 
fetch a fresh copy of it from <http://master.squid-cache.org/pgp.asc>




FTR; this is what I get working from a clean /tmp/squid pseudo-chroot 
directory to avoid my actual trusted+known keys:


## mkdir /tmp/squid

## wget http://master.squid-cache.org/pgp.asc

## gpg --homedir /tmp/squid --import pgp.asc
gpg: WARNING: unsafe permissions on homedir '/tmp/squid'
gpg: keybox '/tmp/squid/pubring.kbx' created
gpg: key B268E706FF5CF463: 1 duplicate signature removed
gpg: key B268E706FF5CF463: 4 signatures not checked due to missing keys
gpg: /tmp/squid/trustdb.gpg: trustdb created
gpg: key B268E706FF5CF463: public key "Amos Jeffries 
" imported

gpg: key 4250AB432402F2F8: 1 signature not checked due to a missing key
gpg: key 4250AB432402F2F8: public key "Duane Wessels 
" imported

gpg: key E75E90C039CC33DB: 202 signatures not checked due to missing keys
gpg: key E75E90C039CC33DB: public key "Henrik Nordstrom 
" imported

gpg: key 867BF9A9FBD3EB8E: 605 signatures not checked due to missing keys
gpg: key 867BF9A9FBD3EB8E: public key "Robert Collins 
" imported

gpg: key CD6DBF8EF3B17D3E: 1 signature not checked due to a missing key
gpg: key CD6DBF8EF3B17D3E: public key "Amos Jeffries (Squid Signing Key) 
" imported
gpg: key 28F85029FEF6E865: public key "Francesco Chemolli (code signing 
key) " imported
gpg: key 3AEBEC6EC66648FD: public key "Francesco Chemolli (kinkie) 
" imported

gpg: Total number processed: 7
gpg:   imported: 7
gpg: no ultimately trusted keys found

## wget http://master.squid-cache.org/Versions/v6/squid-6.7.tar.gz
## wget http://master.squid-cache.org/Versions/v6/squid-6.7.tar.gz.asc

## gpg --homedir /tmp/squid --verify squid-6.7.tar.gz.asc squid-6.7.tar.gz
gpg: WARNING: unsafe permissions on homedir '/tmp/squid'
gpg: Signature made Wed 07 Feb 2024 09:51:28 NZDT
gpg: using EDDSA key 29B4B1F7CE03D1B1DED22F3028F85029FEF6E865
gpg: Good signature from "Francesco Chemolli (code signing key) " [unknown]

gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.

Primary key fingerprint: 29B4 B1F7 CE03 D1B1 DED2  2F30 28F8 5029 FEF6 E865



HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] stale-if-error returning a 502

2024-02-07 Thread Amos Jeffries

On 8/02/24 07:45, Robin Carlisle wrote:

Hi,

I have just started my enhanced logging journey and have a small snippet 
below that might illuminate the issue ...


/2024/02/07 17:06:39.212 kid1| 88,3| client_side_reply.cc(507) 
handleIMSReply: origin replied with error 502, forwarding to client due 
to fail_on_validation_err/




Please check the log for the earlier 22,3 line saying "checking 
freshness of URI: ".


All the 22,3 lines between there and your found 88,3 line will tell the 
story of why refresh was done. That will give an hint about why the 
fail_on_validation_err flag was set.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Is Squid 6 production ready?

2024-01-31 Thread Amos Jeffries

On 1/02/24 11:22, Miha Miha wrote:

On 10/01/24 12:18, Miha Miha wrote:

Release note of latest Squid 6.6 says: "...not deemed ready for
production use..."  For comparison Squid 5.1 was 'ready'. When v6 is
expected to be ready for prod systems?


On Fri, Jan 12, 2024 at 3:37 PM Amos Jeffries wrote:
Sorry, that is an oversight in the release notes text. Removing it now.

Squid 6 is production ready.


Hi Amos,
I still see the 6.6 release note unchanged. Could you please adjust.




The page is auto-generated from the release source code. It is too late 
to change the 6.6 package. The documentation has been updated already 
for 6.7 release.


Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Security advisories are not accessible

2024-01-29 Thread Amos Jeffries

Thanks for the notice.

This appears to be a GitHub issue that has been occurring for many other 
projects for at least 5 hrs now. For now we can only hope that it gets 
resolved soon.



Cheers
Amos

On 30/01/24 01:50, Adam Majer wrote:

Hi,

http://www.squid-cache.org/Versions/v6/ lists security advisories link as

    https://github.com/squid-cache/squid/security/advisories

But going there shows "There aren't any published security advisories". There 
are links to individual patches.


Thanks,
- Adam
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users

___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] offline mode not working for me

2024-01-20 Thread Amos Jeffries

On 20/01/24 02:05, Robin Carlisle wrote:


I do have 1 followup question which I think is unrelated; let me know if 
etiquette demands I create a new post for this. When I test using the 
Chromium browser, Chromium sends OPTIONS requests, which I think is 
something to do with CORS. These always cause a cache MISS from squid... 
I think because the return code is 204?




No, the reason is HTTP specification (RFC 9110 section 9.3.7):
   "Responses to the OPTIONS method are not cacheable."

If these actually are CORS (they might be several other things also), then 
there are important differences in the response headers per-visitor. 
These cannot be cached, and Squid does not know how to correctly 
generate those headers. So having Squid auto-respond is not a good idea.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] offline mode not working for me

2024-01-18 Thread Amos Jeffries

On 19/01/24 03:53, Robin Carlisle wrote:
Hi, Hoping someone can help me with this issue that I have been 
struggling with for days now.   I am setting up squid on an ubuntu PC to 
forward HTTPS requests to an API and an s3 bucket under my control on 
amazon AWS.  The reason I am setting up the proxy is two-fold...


1) To reduce costs from AWS.
2) To provide content to the client on the ubuntu PC if there is a 
networking issue somewhere in between the ubuntu PC and AWS.


Item 1 is going well so far.   Item 2 is not going well.   Setup details ...


...



When network connectivity is BAD, I get errors and a cache MISS.   In 
this test case I unplugged the ethernet cable from the back on the 
ubuntu-pc ...


*# /var/log/squid/access.log*
1705588717.420     11 127.0.0.1 NONE_NONE/200 0 CONNECT stuff.amazonaws.com:443 - HIER_DIRECT/3.135.162.228 -
1705588717.420      0 127.0.0.1 NONE_NONE/503 4087 GET https://stuff.amazonaws.com/api/v1/stuff/stuff.json - HIER_NONE/- text/html


*# extract from /usr/bin/proxy-test output*
< HTTP/1.1 503 Service Unavailable
< Server: squid/5.7
< Mime-Version: 1.0
< Date: Thu, 18 Jan 2024 14:38:37 GMT
< Content-Type: text/html;charset=utf-8
< Content-Length: 3692
< X-Squid-Error: ERR_CONNECT_FAIL 101
< Vary: Accept-Language
< Content-Language: en
< X-Cache: MISS from ubuntu-pc
< X-Cache-Lookup: NONE from ubuntu-pc:3129
< Via: 1.1 ubuntu-pc (squid/5.7)
< Connection: close

I have also seen it error in a different way with a 502 but with the 
same ultimate result.


My expectation/hope is that squid would return the cached object on any 
network failure in between ubuntu-pc and the AWS endpoint - and continue 
to return this cached object forever.   Is this something squid can do? 
   It would seem that offline_mode should do this?





FYI,  offline_mode is not a guarantee that a URL will always HIT. It is 
simply a form of "greedy" caching - where Squid will take actions to 
ensure that full-size objects are fetched whenever it lacks one, and 
serve things as stale HITs when a) it is not specifically prohibited, 
and b) a refresh/fetch is not working.



The URL you are testing with should meet your expected behaviour due to 
the "Cache-Control: public, stale-if-error" header alone.

  Regardless of offline_mode configuration.


That said, getting a 5xx response when there is an object already in 
cache seems like something is buggy to me.


A high level cache.log will be needed to figure out what is going on 
(see https://wiki.squid-cache.org/SquidFaq/BugReporting#full-debug-output).
Be aware this list does not permit large posts, so please provide a 
download link in your reply rather than an attachment.



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] Is Squid 6 production ready?

2024-01-12 Thread Amos Jeffries

On 10/01/24 12:18, Miha Miha wrote:

Release note of latest Squid 6.6 says: "...not deemed ready for
production use..."  For comparison Squid 5.1 was 'ready'. When v6 is
expected to be ready for prod systems?



Sorry, that is an oversight in the release notes text. Removing it now.

Squid 6 is production ready.


Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] squid hangs and dies and can not be killed - needs system reboot

2023-12-19 Thread Amos Jeffries


On 19/12/23 16:29, Amish wrote:

Hi Alex,

Thank you for replying.

On 19/12/23 01:14, Alex Rousskov wrote:

On 2023-12-18 09:35, Amish wrote:


I use Arch Linux and today I updated squid from squid 5.7 to squid 6.6.


> Dec 18 13:01:24 mumbai squid[604]: kick abandoning conn199

I do not know whether the above problem is the primary problem in your 
setup, but it is a red flag. Transactions on the same connection may 
get stuck after that message; it is essentially a Squid bug.


I am not sure at all, but this bug might be related to Bug 5187 
workaround that went into Squid v6.2 (commit c44cfe7): 
https://bugs.squid-cache.org/show_bug.cgi?id=5187


Does Squid accept new TCP connections after it enters what you 
describe as a dead state? For example, does "telnet 127.0.0.1 8080" 
establish a connection if executed on the same machine as Squid?


Yes it establishes connection. But I do not know what to do next. 
Browser showed "Connection timed out" message. But I believe browser's 
also connected but nothing happened afterwards.




Ah ... that port is an interception port. It should *not* connect.

Please ensure your firewall contains the "-t mangle" rules for each 
interception port you use. As shown at 






> kill -9 does nothing

Is it possible that you are trying to kill the wrong process? You 
should be killing this process AFAICT:


> root 601  0.0  0.2  73816 22528 ?    Ss   12:59 0:02
> /usr/bin/squid -f /etc/squid/btnet/squid.btnet.conf --foreground -sYC


I did not clarify but all processes needed SIGKILL and vanished except 
the Dead squid process which still remained.


# systemctl stop squid

Dec 19 08:46:38 mumbai systemd[1]: squid.service: State 'stop-sigterm' 
timed out. Killing.



FWIW, Squid's default shutdown grace period for clients to disconnect is 
longer than systemd typically is willing to wait for a service shutdown.


Please set "shutdown_lifetime 10 seconds" in your squid.conf.
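Alternatively (this concerns the systemd side, not Squid itself), the stop timeout can be raised with a drop-in override so it exceeds shutdown_lifetime; the file name below is a hypothetical example:

# /etc/systemd/system/squid.service.d/timeout.conf
[Service]
TimeoutStopSec=40

followed by "systemctl daemon-reload" to apply the drop-in.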


Dec 19 08:46:38 mumbai systemd[1]: squid.service: Killing process 601 
(squid) with signal SIGKILL.
Dec 19 08:46:38 mumbai systemd[1]: squid.service: Killing process 604 
(squid) with signal SIGKILL.


This is systemd running the command " kill -9 604 ".

Per the Squid code: "XXX: In SMP mode, uncatchable SIGKILL only kills 
the master process".


You can try SIGTERM instead, and repeat up to 3 times if the first does 
not close the process.




HTH
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


Re: [squid-users] IP based user identification/authentication

2023-12-07 Thread Amos Jeffries

On 7/12/23 15:34, Andrey K wrote:

Hello,

I was interested if I can configure some custom external helper that 
will be called before any authentication helpers and can perform user 
identification/authentication based on the client src-IP address.


Well, yes and no.



The order of authentication and authorization helpers is determined by 
what order you configure http_access tests.


So "yes" in that you can call it before authentication, and have it tell 
you what "user" it *thinks* is using that IP.



However, ...

It can look up in the external system information about the user logged 
in to the IP address and return the username and some annotation 
information on success.


Users do not "log into IP address" and ...



If the user has been identified, no subsequent authentications are required.
Identified users can be authorized later using standard squid mechanisms 
(for example, ldap user groups membership).


This feature can be especially useful in "transparent" proxy 
configurations where 407-"Proxy Authentication Required" response code 
is not applicable.



... with interception the user agent is not aware of the proxy 
existence. So it *will not* provide the credentials necessary for 
authentication. Not to the proxy, nor a helper.


So "no".

This is not a way to authenticate. It is a way to **authorize**. The 
difference is very important.


For more info lookup "captive portal" on how this type of configuration 
is done and used.
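As a concrete sketch of the *authorization* approach described above: an external ACL helper that maps client source IPs to usernames. Everything specific here is hypothetical (the helper path, the ACL names, the IP-to-user table); only the external_acl_type/%SRC interface and the line-based OK/ERR helper protocol are standard Squid mechanisms.

```python
#!/usr/bin/env python3
"""Hypothetical IP-to-user helper for Squid's external ACL interface.

Assumed squid.conf wiring (names are placeholders):

  external_acl_type ip_user ttl=60 %SRC /usr/local/bin/ip_user_helper.py
  acl known_ip external ip_user
  http_access allow known_ip
"""
import sys

# Assumed lookup table; a real deployment would query whatever external
# system knows which user is behind each source IP.
IP_TO_USER = {
    "192.0.2.10": "alice",
    "192.0.2.11": "bob",
}


def lookup(ip: str) -> str:
    """Build one helper-protocol reply line for one request line."""
    user = IP_TO_USER.get(ip)
    # Basic helper protocol: "OK" (optionally with key=value pairs such
    # as user=...) authorizes the lookup, "ERR" does not.
    return f"OK user={user}" if user else "ERR"


def main() -> None:
    # Squid writes one formatted line (here just %SRC) per lookup and
    # expects exactly one reply line; flush so Squid is never left waiting.
    for line in sys.stdin:
        print(lookup(line.strip()), flush=True)


if __name__ == "__main__" and not sys.stdin.isatty():
    main()
```

The ttl=60 option caches each verdict for a minute, so the helper is consulted at most once per IP per minute rather than on every request.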



Cheers
Amos
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2023:9 Denial of Service in HTTP Collapsed Forwarding

2023-12-01 Thread Amos Jeffries

__

   Squid Proxy Cache Security Update Advisory SQUID-2023:9
__

Advisory ID:   | SQUID-2023:9
Date:  | December 1, 2023
Summary:   | Denial of Service
   | in HTTP Collapsed Forwarding
Affected versions: | Squid 3.5 -> 3.5.28
   | Squid 4.x -> 4.17
   | Squid 5.x -> 5.9
Fixed in version:  | Squid 6.0.1
__

Problem Description:

 Due to a Use-After-Free bug Squid is vulnerable to a Denial of
 Service attack against collapsed forwarding

__

Severity:

 This problem allows a remote client to perform Denial of
 Service attack on demand when Squid is configured with collapsed
 forwarding.

CVSS Score of 8.6


__

Updated Packages:

This bug is fixed by Squid version 6.0.1.

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 Run the following command to identify how (and whether)
 your Squid has been configured with collapsed forwarding:

squid -k parse 2>&1 | grep collapsed_forwarding


 All Squid-3.5 up to and including 5.9 configured with
 "collapsed_forwarding on" are vulnerable.

 All Squid-3.5 up to and including 5.9 configured with
 "collapsed_forwarding off" are not vulnerable.

 All Squid-3.5 up to and including 5.9 configured without any
 "collapsed_forwarding" directive are not vulnerable.

__

Workaround:

 Remove all collapsed_forwarding lines from your squid.conf.

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

This vulnerability was discovered by Joshua Rogers of Opera
Software.

Fixed by The Measurement Factory.

__

Revision history:

 2022-09-03 18:41:32 UTC Patches Released
 2023-10-12 11:53:02 UTC Initial Report
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2023:8 Denial of Service in Helper Process management

2023-12-01 Thread Amos Jeffries

__

   Squid Proxy Cache Security Update Advisory SQUID-2023:8
__

Advisory ID:   | SQUID-2023:8
Date:  | December 1, 2023
Summary:   | Denial of Service
   | in Helper Process management
Affected versions: | Squid 2.x -> 2.7.STABLE9
   | Squid 3.x -> 3.5.28
   | Squid 4.x -> 4.17
   | Squid 5.x -> 5.9
   | Squid 6.x -> 6.4
Fixed in version:  | Squid 6.5
__

Problem Description:

 Due to an Incorrect Check of Function Return Value
 bug Squid is vulnerable to a Denial of Service
 attack against its Helper process management.

__

Severity:

 This problem allows a trusted client or remote server to perform
 a Denial of Service attack when the Squid proxy is under load.


CVSS Score of 8.6


__

Updated Packages:

This bug is fixed by Squid version 6.5.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 6:
 

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 Squid older than 5.0 have not been tested and should be
 assumed to be vulnerable.

 All Squid-5.x up to and including 5.9 are vulnerable.

 All Squid-6.x up to and including 6.4 are vulnerable.

__

Workaround:

 There is no known workaround for this issue.

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

This vulnerability was discovered by Joshua Rogers of Opera
Software.

Fixed by The Measurement Factory.

__

Revision history:

 2023-10-12 11:53:02 UTC Initial Report
 2023-10-27 21:27:20 UTC Patch Released
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2023:7 Denial of Service in HTTP Message Processing

2023-12-01 Thread Amos Jeffries

__

   Squid Proxy Cache Security Update Advisory SQUID-2023:7
__

Advisory ID:   | SQUID-2023:7
Date:  | December 1, 2023
Summary:   | Denial of Service in HTTP Message Processing
Affected versions: | Squid 2.2 -> 2.7.STABLE9
   | Squid 3.x -> 3.5.28
   | Squid 4.x -> 4.17
   | Squid 5.x -> 5.9
   | Squid 6.x -> 6.4
Fixed in version:  | Squid 6.5
__

Problem Description:

 Due to a Buffer Overread bug Squid is vulnerable to a Denial of
 Service attack against Squid HTTP Message processing.

__

Severity:

 This problem allows a remote attacker to perform Denial of
 Service when sending easily crafted HTTP Messages.


CVSS Score of 8.6


__

Updated Packages:

 This bug is fixed by Squid version 6.5.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 5 and older:
 

Squid 6:
 

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 All Squid-2.2 up to and including 4.17 have not been tested
 and should be assumed to be vulnerable.

 All Squid-5.x up to and including 5.9 are vulnerable.

 All Squid-6.x up to and including 6.4 are vulnerable.

__

Workaround:

 There is no workaround for this issue.

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

 Fixed by The Measurement Factory.

__

Revision history:

 2023-10-12 11:53:02 UTC Initial Report
 2023-10-25 19:41:45 UTC Patches Released
__
END
___
squid-announce mailing list
squid-annou...@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-announce
___
squid-users mailing list
squid-users@lists.squid-cache.org
https://lists.squid-cache.org/listinfo/squid-users


[squid-users] [squid-announce] [ADVISORY] SQUID-2023:4 Denial of Service in SSL Certificate validation

2023-12-01 Thread Amos Jeffries

__

Squid Proxy Cache Security Update Advisory SQUID-2023:4
__

Advisory ID:   | SQUID-2023:4
Date:  | November 2, 2023
Summary:   | Denial of Service in SSL Certificate validation
Affected versions: | Squid 3.3 -> 3.5.28
   | Squid 4.x -> 4.16
   | Squid 5.x -> 5.9
   | Squid 6.x -> 6.3
Fixed in version:  | Squid 6.4
__

Problem Description:

 Due to an Improper Validation of Specified Index bug,
 Squid is vulnerable to a Denial of Service attack against
 its SSL Certificate validation.

__

Severity:

 This problem allows a remote server to perform a Denial of
 Service attack against Squid Proxy by initiating a TLS
 Handshake with a specially crafted SSL Certificate in a
 server certificate chain.

 This attack is limited to HTTPS and SSL-Bump.

CVSS Score of 8.6


__

Updated Packages:

This bug is fixed by Squid version 6.4.

 In addition, patches addressing this problem for the stable
 releases can be found in our patch archives:

Squid 5:
 

Squid 6:
 

 If you are using a prepackaged version of Squid then please refer
 to the package vendor for availability information on updated
 packages.

__

Determining if your version is vulnerable:

 All Squid older than 3.3.0.1 are not vulnerable.

 All Squid-3.3 up to and including 3.4.14 compiled without
   --enable-ssl are not vulnerable.

 All Squid-3.3 up to and including 3.4.14 compiled using
   --enable-ssl are vulnerable.

 All Squid-3.5 up to and including 3.5.28 compiled without
   --with-openssl are not vulnerable.

 All Squid-3.5 up to and including 3.5.28 compiled using
   --with-openssl are vulnerable.

 All Squid-4.x up to and including 4.16 compiled without
   --with-openssl are not vulnerable.

 All Squid-4.x up to and including 4.16 compiled using
   --with-openssl are vulnerable.

 All Squid-5.x up to and including 5.9 compiled without
   --with-openssl are not vulnerable.

 All Squid-5.x up to and including 5.9 compiled using
   --with-openssl are vulnerable.

 All Squid-6.x up to and including 6.3 compiled without
   --with-openssl are not vulnerable.

 All Squid-6.x up to and including 6.3 compiled using
   --with-openssl are vulnerable.
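
 One way to check which set a given binary falls into is to inspect
 the configure options recorded in the binary itself. A minimal
 sketch follows; the sample output is hypothetical, so on a real
 proxy host replace it with the actual output of `squid -v`:

```shell
# `squid -v` prints the Squid version line followed by the
# "configure options: ..." line recorded at build time.
# The sample below is hypothetical; on a real host use:
#   sample="$(squid -v)"
sample='Squid Cache: Version 5.9
configure options:  --prefix=/usr --with-openssl --enable-icap-client'

# The `--` stops grep from parsing the pattern as its own options.
if printf '%s\n' "$sample" | grep -q -- '--with-openssl'; then
  echo "built with OpenSSL: in the vulnerable set for SQUID-2023:4"
else
  echo "built without OpenSSL: not affected by SQUID-2023:4"
fi
```

 Note that on Squid 3.3/3.4 builds the relevant configure option is
 --enable-ssl rather than --with-openssl, per the list above.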

__

Workaround:

Either,

 Disable use of SSL-Bump features:
  * Remove all ssl-bump options from http_port and https_port
  * Remove all ssl_bump directives from squid.conf

Or,

  Rebuild Squid using --without-openssl.
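
 In squid.conf terms, the first workaround amounts to a change like
 the following sketch (the port number and certificate path are
 hypothetical examples; http_port, ssl-bump and ssl_bump are the
 directive names referenced above):

```
# Before (SSL-Bump enabled) -- hypothetical example:
#   http_port 3128 ssl-bump cert=/etc/squid/ca.pem
#   ssl_bump peek all
#   ssl_bump bump all

# After (workaround applied): plain forward proxy, no TLS interception
http_port 3128
```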

__

Contact details for the Squid project:

 For installation / upgrade support on binary packaged versions
 of Squid: Your first point of contact should be your binary
 package vendor.

 If you install and build Squid from the original Squid sources
 then the  mailing list is your
 primary support point. For subscription details see
 .

 For reporting of non-security bugs in the latest STABLE release
 the squid bugzilla database should be used
 .

 For reporting of security sensitive bugs send an email to the
  mailing list. It's a closed
 list (though anyone can post) and security related bug reports
 are treated in confidence until the impact has been established.

__

Credits:

 This vulnerability was discovered by Joshua Rogers of Opera
 Software.

 Fixed by Andreas Weigel.

__

Revision history:

 2023-10-12 11:53:02 UTC Initial Report
__
END

