Re: [squid-users] Validation of IP address for SSL spliced connections

2024-05-30 Thread Alex Rousskov

On 2024-05-30 02:30, Rik Theys wrote:

On 5/29/24 11:31 PM, Alex Rousskov wrote:

On 2024-05-29 17:06, Rik Theys wrote:

On 5/29/24 5:29 PM, Alex Rousskov wrote:

On 2024-05-29 05:01, Rik Theys wrote:
squid doesn't seem to validate that the IP address we're connecting 
to is valid for the specified name in the SNI header?


That observation matches my reading of Squid Host header forgery 
detection code which says "we do not yet handle CONNECT tunnels 
well, so ignore for them". To validate that theory, use 
"debug_options ALL,3" and look for "SECURITY ALERT: Host header 
forgery detected" messages in cache.log.


I've enabled this debug option, but I never see the security alert in 
the logs. Maybe it was introduced in more recent versions? I'm 
currently using Squid 5.5 that comes with Rocky Linux 9.4.


The code logging "SECURITY ALERT: Host header forgery detected" 
messages is present in v5.5, but perhaps it is not triggered in that 
version (or even in modern/supported Squids) when I expect it to be 
triggered. Unfortunately, there are too many variables for me to 
predict what exactly went wrong in your particular test case without 
doing a lot more work (and I cannot volunteer to do that work right now).


Looking at https://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery, 
it always seems to mention the Host header. It has no mention of 
performing the same checks for the SNI value. Since we're peeking at the 
request, we can't see the actual Host header being sent.


As Amos has explained, SslBump at step2 is supposed to relay TLS Client 
Hello information via fake CONNECT request headers. SNI should go into 
CONNECT Host header and CONNECT target pseudo-header. That fake CONNECT 
request should then be checked for forgery.
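
To illustrate with the wordpress.org example from this thread, the peeked SNI would conceptually turn into a synthetic request along these lines (a sketch of the idea, not literal Squid internals or output):

    CONNECT wordpress.org:443 HTTP/1.1
    Host: wordpress.org:443

and it is that fake CONNECT that http_access rules and the Host header forgery check are then expected to evaluate against the real destination IP (8.8.8.8 in the test).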


Whether all of the above actually happens is an open question. I bet a 
short answer is "no". I am not just being extra cautious here based on 
overall poor SslBump code quality! I believe there are "real bugs" on 
that code path because we have fixed some of them (and I hope to find 
the time to post a polished version of those fixes for the official 
review in the foreseeable future). For an example that fuels my 
concerns, see the following unofficial commit message:

https://github.com/measurement-factory/squid/commit/462aedcc


I believe that for my use case (splice only certain domains and prevent 
connecting to a wrong IP address), there's currently no solution then.


I suspect that there is currently no solution that does not involve 
writing complex external ACL helpers or complex Squid code fixes.
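
As a rough idea of what such a helper could look like, here is a minimal sketch. It is untested; the helper name is hypothetical, and the way Squid would feed it the intercepted destination IP and the SNI (the external_acl_type FORMAT codes) must be verified against your Squid version's documentation. It reads one "<dest-ip> <sni>" pair per line and answers OK only when the destination IP appears among the DNS answers for the SNI:

    #!/bin/sh
    # sni_ip_check.sh -- hypothetical external ACL helper sketch (untested).
    # Reads one "<dest-ip> <sni>" pair per line from Squid; prints OK when
    # <dest-ip> appears among the A/AAAA records resolved for <sni>,
    # ERR otherwise. Assumes the default (non-concurrent) helper protocol.
    while read ip sni; do
        if [ -n "$sni" ] && getent ahosts "$sni" | awk '{print $1}' | grep -qxF "$ip"; then
            echo OK
        else
            echo ERR
        fi
    done

Note that such a helper would inherit the same caveat mentioned elsewhere in this thread for the built-in forgery check: the helper and the client may resolve the SNI to different sets of IP addresses (CDNs, short TTLs), producing false positives.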



I guess that explains why, if I add a "%ssl::..." server certificate logformat code to the access log, the field is always "-"?


It may explain that, but other problems may lead to the same "no 
certificate" result as well, of course. You can kind of check by using 
stare/bump instead of peek/splice -- if you see certificate details 
logged in that bumping test, then it is more likely that Squid just 
does not get a plain text certificate in peeking configurations.
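
For that test, the ssl_bump rules from the original configuration could be temporarily switched to something like the following sketch (assuming the same step and https_domains ACLs; since bumping re-encrypts traffic with the local_ca.pem certificate from the posted https_port line, clients must trust that CA, so treat this as a diagnostic only):

    ssl_bump peek step1
    ssl_bump stare step2 https_domains
    ssl_bump bump step3 https_domains
    ssl_bump terminate all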


I've updated the configuration to use stare/bump instead. The field is 
then indeed added to the log file. A curl request that forces the 
connection to a different IP address then also fails because the 
certificate isn't valid for the name. There's no mention of the Host 
header not matching the IP address, but I assume that check comes after 
the certificate check then.


In most cases, the forgery check should happen before the certificate 
check. I suspect that it does not happen at all in your test case.



HTH,

Alex.



Re: [squid-users] Validation of IP address for SSL spliced connections

2024-05-30 Thread Amos Jeffries

On 30/05/24 18:30, Rik Theys wrote:

Hi,

On 5/29/24 11:31 PM, Alex Rousskov wrote:

On 2024-05-29 17:06, Rik Theys wrote:

On 5/29/24 5:29 PM, Alex Rousskov wrote:

On 2024-05-29 05:01, Rik Theys wrote:



squid doesn't seem to validate that the IP address we're connecting 
to is valid for the specified name in the SNI header?


That observation matches my reading of Squid Host header forgery 
detection code which says "we do not yet handle CONNECT tunnels 
well, so ignore for them". To validate that theory, use 
"debug_options ALL,3" and look for "SECURITY ALERT: Host header 
forgery detected" messages in cache.log.


I've enabled this debug option, but I never see the security alert in 
the logs. Maybe it was introduced in more recent versions? I'm 
currently using Squid 5.5 that comes with Rocky Linux 9.4.


The code logging "SECURITY ALERT: Host header forgery detected" 
messages is present in v5.5, but perhaps it is not triggered in that 
version (or even in modern/supported Squids) when I expect it to be 
triggered. Unfortunately, there are too many variables for me to 
predict what exactly went wrong in your particular test case without 
doing a lot more work (and I cannot volunteer to do that work right now).


Looking at https://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery, 
it always seems to mention the Host header. It has no mention of 
performing the same checks for the SNI value. Since we're peeking at the 
request, we can't see the actual Host header being sent.




FYI, the SSL-Bump feature uses a CONNECT tunnel at the HTTP layer to 
transfer the HTTPS (or encrypted non-HTTPS) content through Squid. The 
SNI value, certificate subjectAltName, or raw IP (whichever most-trusted 
value is available) received from peek/bump is used as the Host header 
on that internal CONNECT tunnel.


The Host header forgery check at the HTTP layer is performed on that 
HTTP-level CONNECT request regardless of whether a specific SNI-vs-IP 
check was done by the TLS logic. Ideally both layers would do it, but 
the complexity of the SSL-Bump permutations makes that hard.




And indeed: if I perform the same test for HTTP traffic, I do see the 
error message:


curl http://wordpress.org --connect-to wordpress.org:80:8.8.8.8:80


I believe that for my use case (splice only certain domains and prevent 
connecting to a wrong IP address), there's currently no solution then. 
Squid would also have to perform a check similar to the Host header 
check for the SNI information. Maybe I can perform the same function 
with an external ACL as you've mentioned. I will look into that later. 
Thanks for your time.



IIRC, there is at least one SSL-Bump permutation which does server name 
vs IP validation (in a way, not explicitly). But that particular code 
path is not always taken, and the SSL-Bump logic does not go out of its 
way to look up missing details. So likely you are just not encountering 
the rare case in which the SNI gets verified.




HTH
Amos


Re: [squid-users] Validation of IP address for SSL spliced connections

2024-05-29 Thread Rik Theys

Hi,

On 5/29/24 11:31 PM, Alex Rousskov wrote:

On 2024-05-29 17:06, Rik Theys wrote:

On 5/29/24 5:29 PM, Alex Rousskov wrote:

On 2024-05-29 05:01, Rik Theys wrote:



squid doesn't seem to validate that the IP address we're connecting 
to is valid for the specified name in the SNI header?


That observation matches my reading of Squid Host header forgery 
detection code which says "we do not yet handle CONNECT tunnels 
well, so ignore for them". To validate that theory, use 
"debug_options ALL,3" and look for "SECURITY ALERT: Host header 
forgery detected" messages in cache.log.


I've enabled this debug option, but I never see the security alert in 
the logs. Maybe it was introduced in more recent versions? I'm 
currently using Squid 5.5 that comes with Rocky Linux 9.4.


The code logging "SECURITY ALERT: Host header forgery detected" 
messages is present in v5.5, but perhaps it is not triggered in that 
version (or even in modern/supported Squids) when I expect it to be 
triggered. Unfortunately, there are too many variables for me to 
predict what exactly went wrong in your particular test case without 
doing a lot more work (and I cannot volunteer to do that work right now).


Looking at https://wiki.squid-cache.org/KnowledgeBase/HostHeaderForgery, 
it always seems to mention the Host header. It has no mention of 
performing the same checks for the SNI value. Since we're peeking at the 
request, we can't see the actual Host header being sent.


And indeed: if I perform the same test for HTTP traffic, I do see the 
error message:


curl http://wordpress.org --connect-to wordpress.org:80:8.8.8.8:80


I believe that for my use case (splice only certain domains and prevent 
connecting to a wrong IP address), there's currently no solution then. 
Squid would also have to perform a check similar to the Host header 
check for the SNI information. Maybe I can perform the same function 
with an external ACL as you've mentioned. I will look into that later. 
Thanks for your time.





Looking at the logs, I'm also having problems determining where each 
ssl-bump step is started.


Yes, it is a known problem (even for developers). There are also bugs 
related to step boundaries.



Peeking at the server certificates happens at step3. In many modern 
use cases, server certificates are encrypted, so a _peeking_ Squid 
cannot see them. To validate, Squid has to bump the tunnel 
(supported today but problematic for other reasons) or be enhanced 
to use out-of-band validation tricks (that come with their own set 
of problems).


I guess that explains why, if I add a "%ssl::..." server certificate logformat code to the access log, the field is always "-"?


It may explain that, but other problems may lead to the same "no 
certificate" result as well, of course. You can kind of check by using 
stare/bump instead of peek/splice -- if you see certificate details 
logged in that bumping test, then it is more likely that Squid just 
does not get a plain text certificate in peeking configurations.


I've updated the configuration to use stare/bump instead. The field is 
then indeed added to the log file. A curl request that forces the 
connection to a different IP address then also fails because the 
certificate isn't valid for the name. There's no mention of the Host 
header not matching the IP address, but I assume that check comes after 
the certificate check then.


Regards,

Rik




Re: [squid-users] Validation of IP address for SSL spliced connections

2024-05-29 Thread Alex Rousskov

On 2024-05-29 17:06, Rik Theys wrote:

On 5/29/24 5:29 PM, Alex Rousskov wrote:

On 2024-05-29 05:01, Rik Theys wrote:

acl allowed_clients src "/etc/squid/allowed_clients"
acl allowed_domains dstdomain "/etc/squid/allowed_domains"



http_access allow allowed_clients allowed_domains
http_access allow allowed_clients CONNECT
http_access deny all


Please note that the second http_access rule in the above 
configuration allows CONNECT tunnels to prohibited domains (i.e. 
domains that do not match allowed_domains). Consider restricting your 
"allow...CONNECT" rule to step1. For example:


    http_access allow allowed_clients step1 CONNECT

Thanks, I've updated my configuration.



Please do test any suggested changes. There are too many variables here 
for me to guarantee that a particular set of http_access and ssl_bump 
rules works as expected.



squid doesn't seem to validate that the IP address we're connecting 
to is valid for the specified name in the SNI header?


That observation matches my reading of Squid Host header forgery 
detection code which says "we do not yet handle CONNECT tunnels well, 
so ignore for them". To validate that theory, use "debug_options 
ALL,3" and look for "SECURITY ALERT: Host header forgery detected" 
messages in cache.log.


I've enabled this debug option, but I never see the security alert in 
the logs. Maybe it was introduced in more recent versions? I'm currently 
using Squid 5.5 that comes with Rocky Linux 9.4.


The code logging "SECURITY ALERT: Host header forgery detected" messages 
is present in v5.5, but perhaps it is not triggered in that version (or 
even in modern/supported Squids) when I expect it to be triggered. 
Unfortunately, there are too many variables for me to predict what 
exactly went wrong in your particular test case without doing a lot more 
work (and I cannot volunteer to do that work right now).



Looking at the logs, I'm also having problems determining where each 
ssl-bump step is started.


Yes, it is a known problem (even for developers). There are also bugs 
related to step boundaries.



Peeking at the server certificates happens at step3. In many modern 
use cases, server certificates are encrypted, so a _peeking_ Squid 
cannot see them. To validate, Squid has to bump the tunnel (supported 
today but problematic for other reasons) or be enhanced to use 
out-of-band validation tricks (that come with their own set of problems).


I guess that explains why, if I add a "%ssl::..." server certificate logformat code to the access log, the field is always "-"?


It may explain that, but other problems may lead to the same "no 
certificate" result as well, of course. You can kind of check by using 
stare/bump instead of peek/splice -- if you see certificate details 
logged in that bumping test, then it is more likely that Squid just does 
not get a plain text certificate in peeking configurations.



Is there a way to configure squid to validate that the server 
certificate is valid for the host specified in the SNI header?


IIRC, that validation happens automatically in modern Squid versions 
when Squid receives an (unencrypted) server certificate.



Do you happen to know which version of Squid introduced that check?


IIRC, Squid v5.5 has that code.


HTH,

Alex.



Re: [squid-users] Validation of IP address for SSL spliced connections

2024-05-29 Thread Rik Theys

Hi,

On 5/29/24 5:29 PM, Alex Rousskov wrote:

On 2024-05-29 05:01, Rik Theys wrote:

acl allowed_clients src "/etc/squid/allowed_clients"
acl allowed_domains dstdomain "/etc/squid/allowed_domains"



http_access allow allowed_clients allowed_domains
http_access allow allowed_clients CONNECT
http_access deny all


Please note that the second http_access rule in the above 
configuration allows CONNECT tunnels to prohibited domains (i.e. 
domains that do not match allowed_domains). Consider restricting your 
"allow...CONNECT" rule to step1. For example:


    http_access allow allowed_clients step1 CONNECT

Thanks, I've updated my configuration.



squid doesn't seem to validate that the IP address we're connecting 
to is valid for the specified name in the SNI header?


That observation matches my reading of Squid Host header forgery 
detection code which says "we do not yet handle CONNECT tunnels well, 
so ignore for them". To validate that theory, use "debug_options 
ALL,3" and look for "SECURITY ALERT: Host header forgery detected" 
messages in cache.log.


I've enabled this debug option, but I never see the security alert in 
the logs. Maybe it was introduced in more recent versions? I'm currently 
using Squid 5.5 that comes with Rocky Linux 9.4.


Looking at the logs, I'm also having problems determining where each 
ssl-bump step is started.


Please note that in many environments forgery detection does not work 
well (for cases where it is performed) due to clients and Squid seeing 
different sets of IP addresses for the same host name. There are 
numerous complaints about that in the squid-users archives.



For example, if I add "wordpress.org" to my allowed_domains list, the 
following request is allowed:


curl -v https://wordpress.org --connect-to wordpress.org:443:8.8.8.8:443

8.8.8.8 is not a valid IP address for wordpress.org. This could be 
used to bypass the restrictions.


Agreed.


Is there an option in squid to make it perform a forward DNS lookup 
for the domain taken from the SNI information at step1


FYI: SNI comes from step2. step1 looks at TCP/IP client info.


to validate that the IP address we're trying to connect to is 
actually valid for that host? In the example above, a DNS lookup for 
wordpress.org would return 198.143.164.252 as the IP address. This is 
not the IP address we're trying to connect to, so squid should block 
the request.


AFAICT, there is no built-in support for that in current Squid code. 
One could enhance Squid or write an external ACL to perform that kind 
of validation. See above for details/caveats.



Similar question for the server certificate: I've configured the 
'ssl_bump peek step2 https_domains' line so squid can peek at the 
server certificate.


Peeking at the server certificates happens at step3. In many modern 
use cases, server certificates are encrypted, so a _peeking_ Squid 
cannot see them. To validate, Squid has to bump the tunnel (supported 
today but problematic for other reasons) or be enhanced to use 
out-of-band validation tricks (that come with their own set of problems).


I guess that explains why, if I add a "%ssl::..." server certificate logformat code to the access log, the field is always "-"?





Is there a way to configure squid to validate that the server 
certificate is valid for the host specified in the SNI header?


IIRC, that validation happens automatically in modern Squid versions 
when Squid receives an (unencrypted) server certificate.



Do you happen to know which version of Squid introduced that check?

Regards,

Rik




Re: [squid-users] Validation of IP address for SSL spliced connections

2024-05-29 Thread Alex Rousskov

On 2024-05-29 05:01, Rik Theys wrote:

acl allowed_clients src "/etc/squid/allowed_clients"
acl allowed_domains dstdomain "/etc/squid/allowed_domains"



http_access allow allowed_clients allowed_domains
http_access allow allowed_clients CONNECT
http_access deny all


Please note that the second http_access rule in the above configuration 
allows CONNECT tunnels to prohibited domains (i.e. domains that do not 
match allowed_domains). Consider restricting your "allow...CONNECT" rule 
to step1. For example:


http_access allow allowed_clients step1 CONNECT
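
Slotted into the rule order from the original configuration, that would look roughly like this (a sketch assuming the step1 at_step ACL already defined there):

    http_access allow allowed_clients allowed_domains
    http_access allow allowed_clients step1 CONNECT
    http_access deny all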


squid doesn't seem to validate that the IP address we're connecting 
to is valid for the specified name in the SNI header?


That observation matches my reading of Squid Host header forgery 
detection code which says "we do not yet handle CONNECT tunnels well, so 
ignore for them". To validate that theory, use "debug_options ALL,3" and 
look for "SECURITY ALERT: Host header forgery detected" messages in 
cache.log.
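
Concretely, the check could look like this (the cache.log path is an assumption based on typical packaged installs; adjust it to your setup):

    # in squid.conf:
    debug_options ALL,3
    # then reproduce the curl test and search the cache log:
    grep "SECURITY ALERT: Host header forgery detected" /var/log/squid/cache.log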


Please note that in many environments forgery detection does not work 
well (for cases where it is performed) due to clients and Squid seeing 
different sets of IP addresses for the same host name. There are 
numerous complaints about that in the squid-users archives.



For example, if I add "wordpress.org" to my allowed_domains list, the 
following request is allowed:


curl -v https://wordpress.org --connect-to wordpress.org:443:8.8.8.8:443

8.8.8.8 is not a valid IP address for wordpress.org. This could be used 
to bypass the restrictions.


Agreed.


Is there an option in squid to make it perform a forward DNS lookup for 
the domain taken from the SNI information at step1


FYI: SNI comes from step2. step1 looks at TCP/IP client info.


to validate that the IP 
address we're trying to connect to is actually valid for that host? In 
the example above, a DNS lookup for wordpress.org would return 
198.143.164.252 as the IP address. This is not the IP address we're 
trying to connect to, so squid should block the request.


AFAICT, there is no built-in support for that in current Squid code. One 
could enhance Squid or write an external ACL to perform that kind of 
validation. See above for details/caveats.



Similar question for the server certificate: I've configured the 
'ssl_bump peek step2 https_domains' line so squid can peek at the server 
certificate.


Peeking at the server certificates happens at step3. In many modern use 
cases, server certificates are encrypted, so a _peeking_ Squid cannot 
see them. To validate, Squid has to bump the tunnel (supported today but 
problematic for other reasons) or be enhanced to use out-of-band 
validation tricks (that come with their own set of problems).



Is there a way to configure squid to validate that the 
server certificate is valid for the host specified in the SNI header?


IIRC, that validation happens automatically in modern Squid versions 
when Squid receives an (unencrypted) server certificate.



HTH,

Alex.



[squid-users] Validation of IP address for SSL spliced connections

2024-05-29 Thread Rik Theys

Hi,

I'm configuring squid as a transparent proxy where local outbound 
traffic is redirected to a local squid process using tproxy.


I would like to limit the domains the host can contact by having an 
allow list. I have the following config file:


--

acl allowed_clients src "/etc/squid/allowed_clients"

acl allowed_domains dstdomain "/etc/squid/allowed_domains"

acl SSL_ports port 443
acl Safe_ports port 80
acl Safe_ports port 443
acl CONNECT method CONNECT

acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3

# Additional access control lists
acl https_domains ssl::server_name "/etc/squid/allowed_domains"

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
http_access deny to_localhost

#
# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#
http_access allow allowed_clients allowed_domains
http_access allow allowed_clients CONNECT

# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
#http_access allow localnet
#http_access allow localhost

# And finally deny all other access to this proxy
http_access deny all

# Squid normally listens to port 3128
http_port 3128
http_port 3129 tproxy
https_port 3130 tproxy ssl-bump cert=/etc/squid/cert/local_ca.pem

# SSL bump configuration
ssl_bump peek step1
ssl_bump peek step2 https_domains
ssl_bump splice step3 https_domains
ssl_bump terminate all

--

When the Host header in an intercepted request matches a domain on the 
allowed_domains list, the request is allowed. Otherwise it's denied as 
expected.


But squid doesn't seem to validate that the IP address we're connecting 
to is valid for the specified name in the SNI header?


For example, if I add "wordpress.org" to my allowed_domains list, the 
following request is allowed:


curl -v https://wordpress.org --connect-to wordpress.org:443:8.8.8.8:443

8.8.8.8 is not a valid IP address for wordpress.org. This could be used 
to bypass the restrictions.


Is there an option in squid to make it perform a forward DNS lookup for 
the domain taken from the SNI information at step1, to validate that the 
IP address we're trying to connect to is actually valid for that host? 
In the example above, a DNS lookup for wordpress.org would return 
198.143.164.252 as the IP address. This is not the IP address we're 
trying to connect to, so squid should block the request.


Similar question for the server certificate: I've configured the 
'ssl_bump peek step2 https_domains' line so squid can peek at the server 
certificate. Is there a way to configure squid to validate that the 
server certificate is valid for the host specified in the SNI header?



Regards,

Rik