How to configure DH groups for TLS 1.3

2024-05-02 Thread Froehlich, Dominik
Hello everyone,

I’m hardening HAProxy for CVE-2002-20001 (DHEAT attack) at the moment.

For TLS 1.2 I’m using the “tune.ssl.default-dh-param” option to limit the key 
size to 2048 bit so that an attacker can’t force huge keys and thus lots of CPU 
cycles on the server.

However, I’ve noticed that the property has no effect on TLS 1.3 connections. 
An attacker can still negotiate an 8192-bit key and brick the server with 
relative ease.

I’ve found an OpenSSL blog article about the issue:   
https://www.openssl.org/blog/blog/2022/10/21/tls-groups-configuration/index.html

As it seems, this used to be a non-issue with OpenSSL 1.1.1 because it only 
supported EC groups, not finite-field ones, but with OpenSSL 3.x it is again 
possible to select the vulnerable groups, even with TLS 1.3.

The article mentions a way of configuring OpenSSL with a “Groups” setting to 
restrict the number of supported DH groups, however I haven’t found any HAProxy 
config option equivalent.

The closest I’ve gotten is the “curves” property: 
https://docs.haproxy.org/2.8/configuration.html#5.1-curves

However, I think it only restricts the available elliptic curves in an ECDHE 
handshake; it does not prevent a TLS 1.3 client from selecting a non-ECDHE 
prime group, for example “ffdhe8192”.

The article provides example configurations for NGINX and Apache, but is there 
any way to restrict the DH groups (e.g. to just ECDHE) for TLS 1.3 in HAProxy, 
too?
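The closest workaround I can think of (untested assumption on my side: HAProxy 
hands the curves list to OpenSSL’s SSL_CTX_set1_curves_list(), which in 3.x is 
the same API that sets the TLS 1.3 supported groups) would be an EC-only 
curves list:

```
global
  # Assumption, not a verified fix: with OpenSSL 3.x the curves list
  # doubles as the TLS 1.3 supported-groups list, so an EC-only list
  # should keep the finite-field ffdhe* groups from being negotiated.
  ssl-default-bind-curves X25519:P-256:P-384
```

One could then probe from a client with something like 
`openssl s_client -groups ffdhe8192 -connect myhost:443`, which should fail 
the handshake if the restriction works. Can anyone confirm whether this is the 
intended mechanism?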


Best Regards,
Dominik


Re: How to check if a domain is known to HAProxy

2024-04-03 Thread Froehlich, Dominik
Hello Willian,

Thank you for your response.

I fear that strict-sni won’t get us far. The issue is that the SNI is just fine 
(it is in the crt-list), however we also need to check if the host-header is 
part of the crt-list. E.g.

curl https://my-host.domain.com -H “host: other-host.otherdomain.com”

so here we check for the SNI “my-host.domain.com” automatically via crt-list.

but in the next step we select the backend based on the host-header, but only 
if it also is present in the crt-list (which we use as a list of valid domains 
hosted on the platform)

So, based on what you said, we can’t do that directly; instead we would do 
something like

http-request set-var(txn.forwarded_host) req.hdr(host),host_only,lower

acl is_known_domain var(txn.forwarded_host),map_dom(/domains.map) -m found

http-request deny if !is_known_domain

where /domains.map is basically a copy of the crt-list like that:

*.domain.com 1
*.otherdomain.com 1

So, this works, though it is ugly because I need to do double-maintenance of 
the crt-list.
Even if I used strict-sni, you could still run into the issue that SNI is on 
the crt-list, but the host-header is not.
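To at least take the manual step out of the double-maintenance, we regenerate 
the map from the crt-list (sketch; it assumes plain “certificate filter…” 
lines without bracketed [option] blocks):

```shell
#!/bin/sh
# Regenerate /domains.map from /crt-list so the two files cannot drift.
# Assumes crt-list lines of the form "<certificate> <sni filter>...";
# extend the parsing if your crt-list uses bracketed [option] blocks.
awk 'NF > 1 && $1 !~ /^#/ { for (i = 2; i <= NF; i++) print $i, 1 }' \
    /crt-list > /domains.map
```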



From: William Lallemand 
Date: Wednesday, 3. April 2024 at 11:31
To: Froehlich, Dominik 
Cc: haproxy@formilux.org 
Subject: Re: How to check if a domain is known to HAProxy
On Wed, Apr 03, 2024 at 07:47:44AM +, Froehlich, Dominik wrote:
> Subject: How to check if a domain is known to HAProxy
> Hello everyone,
>
> This may be kind of a peculiar request.
>
> We have the need to block requests that are not in the crt-list of our 
> frontend.
>
> So, the expectation would be that HAProxy does a lookup of the domain (as it 
> does for the crt-list entry) but for domain-fronted requests, i.e. we have to 
> check both the SNI and the host header.
>
> What makes it difficult is that we still want to allow domain-fronting, but 
> only if the host header also matches an entry in the crt-list.
>
> At the moment, I don’t see any way of doing this programmatically, and the 
> crt-list lookup based on the SNI is completely within HAProxy logic.
>
> Is there any way to access the crt-list via an ACL or similar? The 
> alternative would be to maintain the list twice and add it as a map or list 
> to the HAProxy config and then maybe do a custom host matching via LUA script 
> etc. but I really would like to avoid that.
>
> Any hints from the community?
>

Hello,

You can't access the crt-list from the ACL, however if you are using the
`strict-sni` keyword, you will be sure that the requested SNI will be in
your crt-list. And then you can compare the host header with the SNI.

There is an example in the strcmp keyword documentation:

   http-request set-var(txn.host) hdr(host)
   # Check whether the client is attempting domain fronting.
   acl ssl_sni_http_host_match ssl_fc_sni,strcmp(txn.host) eq 0


https://docs.haproxy.org/2.9/configuration.html#strcmp

Regards,

--
William Lallemand


How to check if a domain is known to HAProxy

2024-04-03 Thread Froehlich, Dominik
Hello everyone,

This may be kind of a peculiar request.

We have the need to block requests that are not in the crt-list of our frontend.

So, the expectation would be that HAProxy does a lookup of the domain (as it 
does for the crt-list entry) but for domain-fronted requests, i.e. we have to 
check both the SNI and the host header.

What makes it difficult is that we still want to allow domain-fronting, but 
only if the host header also matches an entry in the crt-list.

At the moment, I don’t see any way of doing this programmatically, and the 
crt-list lookup based on the SNI is completely within HAProxy logic.

Is there any way to access the crt-list via an ACL or similar? The alternative 
would be to maintain the list twice and add it as a map or list to the HAProxy 
config and then maybe do a custom host matching via LUA script etc. but I 
really would like to avoid that.

Any hints from the community?



Re: Question regarding option redispatch interval

2023-12-30 Thread Froehlich, Dominik
Hello everyone,

I’ve reached out on Slack on the matter and got the info that the interval 
setting for redispatch only applies to session persistence, not connection 
failures, which will redispatch after every retry regardless of the interval.

If it’s true, it would confirm my observations (using regular GET requests) 
that the redispatching happened after every retry, with intervals set at -1, 1 
and even at 30.

Can someone from the dev team confirm this? If it’s true it would be nice to 
reflect that in the docs, stating that the interval is only relevant for 
session persistence.
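For anyone finding this thread later, this is the configuration shape I tested 
with (interval values per the documented semantics; as said above, they only 
seemed to matter for persistence-based redispatching):

```
defaults
  retries 6
  option redispatch -1   # documented default: redispatch on the last retry only
  # option redispatch 1  # would redispatch on every retry
  # option redispatch 0  # would disable redispatching entirely
```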

Thanks, and happy new year to everyone!

D

From: Froehlich, Dominik 
Date: Friday, 22. December 2023 at 15:13
To: haproxy@formilux.org 
Subject: Question regarding option redispatch interval
Hello,

I’m trying to enable retries with redispatch on my HAProxy (v2.7.11)

Here is my config for testing:

defaults
  option redispatch
  retries 6
  timeout connect 500ms

frontend myfrontend
  bind :443 ssl crt /etc/cert/server.pem crt-list /crt-list

  default_backend test

backend test
  server alice localhost:8080
  server bob1 localhost:8081
  server bob2 localhost:8083
  server bob3 localhost:8084
  server bob4 localhost:8085
  server bob5 localhost:8086



So I have 6 servers in the backend, out of which only the “alice” server works. 
All of the “bob” servers don’t respond.

When I run a request against HAProxy, it always works and I can observe using 
tcpdump that HAProxy will try each server (up to 6 times) until it hits the 
working “Alice” one.

This is what I want, however the docs state otherwise:

https://docs.haproxy.org/2.7/configuration.html?q=enable_redispatch#4.2-option%20redispatch


 The optional integer value that controls how often redispatches
   occur when retrying connections. Positive value P indicates a
   redispatch is desired on every Pth retry, and negative value
   N indicate a redispatch is desired on the Nth retry prior to the
   last retry. For example, the default of -1 preserves the
   historical behavior of redispatching on the last retry, a
   positive value of 1 would indicate a redispatch on every retry,
   and a positive value of 3 would indicate a redispatch on every
   third retry. You can disable redispatches with a value of 0.

I did not provide any interval, so my assumption would be the default of -1 
applies, which should mean “redispatching on the last retry”.

So, I would expect that HAProxy would try e.g. “bob4” for 5 times, then select 
“bob5” for the 6th retry and ultimately fail and return a 503. But that’s not 
the behavior I observe.

To me, it looks like the default “redispatch” value seems to be 1 instead of -1.

Can someone provide guidance here?

BR,
Dominik


Question regarding option redispatch interval

2023-12-22 Thread Froehlich, Dominik
Hello,

I’m trying to enable retries with redispatch on my HAProxy (v2.7.11)

Here is my config for testing:

defaults
  option redispatch
  retries 6
  timeout connect 500ms

frontend myfrontend
  bind :443 ssl crt /etc/cert/server.pem crt-list /crt-list

  default_backend test

backend test
  server alice localhost:8080
  server bob1 localhost:8081
  server bob2 localhost:8083
  server bob3 localhost:8084
  server bob4 localhost:8085
  server bob5 localhost:8086



So I have 6 servers in the backend, out of which only the “alice” server works. 
All of the “bob” servers don’t respond.

When I run a request against HAProxy, it always works and I can observe using 
tcpdump that HAProxy will try each server (up to 6 times) until it hits the 
working “Alice” one.

This is what I want, however the docs state otherwise:

https://docs.haproxy.org/2.7/configuration.html?q=enable_redispatch#4.2-option%20redispatch


 The optional integer value that controls how often redispatches
   occur when retrying connections. Positive value P indicates a
   redispatch is desired on every Pth retry, and negative value
   N indicate a redispatch is desired on the Nth retry prior to the
   last retry. For example, the default of -1 preserves the
   historical behavior of redispatching on the last retry, a
   positive value of 1 would indicate a redispatch on every retry,
   and a positive value of 3 would indicate a redispatch on every
   third retry. You can disable redispatches with a value of 0.

I did not provide any interval, so my assumption would be the default of -1 
applies, which should mean “redispatching on the last retry”.

So, I would expect that HAProxy would try e.g. “bob4” for 5 times, then select 
“bob5” for the 6th retry and ultimately fail and return a 503. But that’s not 
the behavior I observe.

To me, it looks like the default “redispatch” value seems to be 1 instead of -1.

Can someone provide guidance here?

BR,
Dominik


HAProxy 2.7.7: Unexpected messages during shutdown after upgrade

2023-05-15 Thread Froehlich, Dominik
Hi everyone,

We have deployed 2.7.7 recently, to verify the CPU spike fixes we observed in 
https://github.com/haproxy/haproxy/issues/2046

The spikes seem to be fixed now. However, we are now observing log messages 
during shutdown that weren’t there before:

May 12, 2023 @ 11:56:24.000   Proxy health_check_http_tcp-scheduler stopped 
(cumulated conns: FE: 0, BE: 0).
May 12, 2023 @ 11:56:24.000   Proxy http-in stopped (cumulated conns: FE: 
166, BE: 0).
May 12, 2023 @ 11:56:24.000   Proxy st_http_req_rate stopped (cumulated 
conns: FE: 0, BE: 0).
May 12, 2023 @ 11:56:24.000   Proxy https-in stopped (cumulated conns: FE: 
29570, BE: 0).
May 12, 2023 @ 11:56:24.000   Proxy st_tcp_conn_rate stopped (cumulated 
conns: FE: 0, BE: 0).
May 12, 2023 @ 11:56:24.000   Proxy http-routers-http2 stopped (cumulated 
conns: FE: 0, BE: 14768).
May 12, 2023 @ 11:56:24.000   Proxy stats stopped (cumulated conns: FE: 1, 
BE: 0).
May 12, 2023 @ 11:56:24.000   Proxy health_check_http_url stopped 
(cumulated conns: FE: 1811, BE: 0).
May 12, 2023 @ 11:56:24.000   Proxy tcp-frontend_scheduler stopped 
(cumulated conns: FE: 3, BE: 0).
May 12, 2023 @ 11:56:24.000   Proxy  stopped (cumulated conns: 
FE: 0, BE: 0).
May 12, 2023 @ 11:56:24.000   Proxy tcp-scheduler stopped (cumulated conns: 
FE: 0, BE: 3).
May 12, 2023 @ 11:56:24.000   Proxy http-routers-http1 stopped (cumulated 
conns: FE: 0, BE: 134385).

It seems one such message is logged for each “Proxy” (which I assume is an 
internal handler for listeners, backends, frontends, etc.).

We don’t suspect this to be a bug, but wanted to ask for clarification about 
the purpose of those log messages and if they present a problem or not.

Best regards,
Dominik Froehlich, SAP BTP


OpenSSL 1.1.1 vs 3.0 client cert verify "x509_strict" issues

2022-12-12 Thread Froehlich, Dominik
Hello HAproxy community!

We’ve recently updated from OpenSSL 1.1.1 to OpenSSL 3.0 for our HAproxy 
deployment.

We are now seeing some client certificates getting denied with these error 
messages:

“SSL client CA chain cannot be verified”/“error:0A86:SSL 
routines::certificate verify failed” 30/0A86

We found out that for this CA certificate, the error was

X509_V_ERR_MISSING_SUBJECT_KEY_IDENTIFIER


This error is only thrown if we run openssl verify with the “-x509_strict” 
option. The same call (even with the “-x509_strict” option) on OpenSSL 1.1.1 
returned OK and verified.

As this was a bit surprising to us and we now have a customer who can’t use 
their client certificate anymore, we wanted to ask for some details on the 
OpenSSL verify check in HAproxy:


  *   How does HAproxy call the “verify” command in OpenSSL?
  *   Does HAproxy use the “x509_strict” option programmatically?
  *   Is there a flag in HAproxy that would allow us to temporarily disable the 
“strict” setting so that the customer has time to update their PKI?
  *   If there is no flag, we could temporarily patch out the code that uses 
the flag; can you give us some pointers?


Thanks a lot for your help!

Dominik Froehlich, SAP


Re: 2.5: Possibility to upgrade http/1.0 clients to http/1.1?

2022-05-11 Thread Froehlich, Dominik
Hi Willy,

Thanks for the fruitful discussion!

I’ve opened https://github.com/haproxy/haproxy/issues/1691 to track this 
feature request.

Best Regards,
Dominik

From: Willy Tarreau 
Date: Monday, 9. May 2022 at 10:59
To: Froehlich, Dominik 
Cc: haproxy@formilux.org 
Subject: Re: 2.5: Possibility to upgrade http/1.0 clients to http/1.1?
Hi Dominik,

On Mon, May 09, 2022 at 08:46:20AM +, Froehlich, Dominik wrote:
> Hi Willy,
>
> Thanks for your response.
>
> Yes, I agree an option that can be turned on would be the most feasible
> solution for us.
>
> I can think of a similar option like we have for "option
> h1-case-adjust-bogus-server<http://docs.haproxy.org/2.5/configuration.html#4.2-option%20h1-case-adjust-bogus-server>"
> that allows you to tell HAProxy whether to touch header casing or not.

Yes I think it would make sense because overall when you know you have
to relax this, it's more based on the LB's location than on a specific
frontend or backend, so it could just be a global option.

> Could be called "option h1-reject-get-head-delete-with-payload-bogus-client" 
> or something.

Note, we want to leave it disabled by default so that only those with
this unusual requirement would have to set the option, so the option's
name should translate that it has to accept the requests instead, so it
could be something like "h1-no-method-restriction-for-payload" or
"h1-accept-payload-with-any-method", or something like this.

> As for your question how the clients ended up sending a body with such 
> requests:
>
> The API the client is using allows them to send a DELETE request with a JSON
> document that lists all the resources to be deleted.

Ah I see, that kind of makes sense. I'm wondering if Elastic Search does
not do something similar, I seem to remember some discussions around this
when we were working on tightening the rule in the spec.

> So instead of
>
> DELETE /books/1
>
> They would send
>
> DELETE /books
> {
>   "delete": [1,2,3]
> }
>
> Weird, I know but it allowed them to delete more than one resource at a time.
> (of course, this could also be put on the path, but well... that's how they
> did it back then)

It's possible that it's easier to specify complex sets. It definitely
helps anticipate possible trouble to be expected at some places. Thanks
for the explanation!

Willy


Re: 2.5: Possibility to upgrade http/1.0 clients to http/1.1?

2022-05-09 Thread Froehlich, Dominik
Hi Willy,

Thanks for your response.

Yes, I agree an option that can be turned on would be the most feasible 
solution for us.
I can think of a similar option like we have for “option 
h1-case-adjust-bogus-server<http://docs.haproxy.org/2.5/configuration.html#4.2-option%20h1-case-adjust-bogus-server>”
 that allows you to tell HAProxy whether to touch header casing or not.

Could be called “option h1-reject-get-head-delete-with-payload-bogus-client” or 
something.

As for your question how the clients ended up sending a body with such requests:

The API the client is using allows them to send a DELETE request with a JSON 
document that lists all the resources to be deleted.

So instead of

DELETE /books/1

They would send

DELETE /books
{
  “delete”: [1,2,3]
}

Weird, I know but it allowed them to delete more than one resource at a time. 
(of course, this could also be put on the path, but well… that’s how they did 
it back then)


Best Regards,
Dominik

From: Willy Tarreau 
Date: Sunday, 8. May 2022 at 11:36
To: Froehlich, Dominik 
Cc: haproxy@formilux.org 
Subject: Re: 2.5: Possibility to upgrade http/1.0 clients to http/1.1?
Hello Dominik,

On Thu, May 05, 2022 at 07:55:06AM +, Froehlich, Dominik wrote:
> Hello everyone,
>
> We recently bumped our HAproxy deployment to 2.5 and are now getting hit by 
> this fix:
>
> MEDIUM: mux-h1: Reject HTTP/1.0 GET/HEAD/DELETE requests with a payload
>
>
> http://git.haproxy.org/?p=haproxy-2.5.git;a=blob_plain;f=CHANGELOG
>
> The issue is we have many legacy customers using very old systems and we
> can't tell all of them to rewrite their clients to http/1.1.

Of course... Do you have an idea how they ended up sending a body with
such requests ? Maybe adding a comment on the issue below for posterity
could be useful for future versions of the HTTP spec:

  
https://github.com/httpwg/http-core/issues/904

> I get the security fix to prevent request smuggling where some servers ignore
> the body and treat it as another request, I'm not arguing that.
>
> However, I was wondering if it was possible to intercept HTTP/1.0 client
> requests and upgrade them to HTTP/1.1 without hitting the rejection code of
> the commit here:
> https://github.com/haproxy/haproxy/commit/e136bd12a32970bc90d862d5fe09ea1952b62974

"Upgrading" the version must really never ever be done, as a lot of HTTP
semantics changed between 1.0 and 1.1, and by telling a server that the
client is 1.1, the server will be allowed to use chunking, trailers,
100-continue and a lot of other stuff that 1.0 clients are unable to
understand.

For your use case, I guess the best solution would be to add an option
(possibly a global one) to relax that rule. It's self-contained inside
an "if" statement so it should be quite easy to add such an option and
be done with it, because when you need it, you definitely know that you
need it, you'll find the keyword in the doc and the accompanying warning
about the security impacts. Also the HTTP spec now says "unless support
is indicated via other means", so the intent clearly is to make this
configurable on a case-by-case basis.

> This way we would not have to downgrade to HAproxy 2.4 again - which would be
> very unfortunate as we need many of the nice features of 2.5.

We'll discuss this with Christopher tomorrow, but I'm not worried about
this for now.

Thanks!
Willy


2.5: Possibility to upgrade http/1.0 clients to http/1.1?

2022-05-05 Thread Froehlich, Dominik
Hello everyone,

We recently bumped our HAproxy deployment to 2.5 and are now getting hit by 
this fix:

MEDIUM: mux-h1: Reject HTTP/1.0 GET/HEAD/DELETE requests with a payload


http://git.haproxy.org/?p=haproxy-2.5.git;a=blob_plain;f=CHANGELOG

The issue is we have many legacy customers using very old systems and we can’t 
tell all of them to rewrite their clients to http/1.1.

I get the security fix to prevent request smuggling where some servers ignore 
the body and treat it as another request, I’m not arguing that.

However, I was wondering if it was possible to intercept HTTP/1.0 client 
requests and upgrade them to HTTP/1.1 without hitting the rejection code of the 
commit here: 
https://github.com/haproxy/haproxy/commit/e136bd12a32970bc90d862d5fe09ea1952b62974

This way we would not have to downgrade to HAproxy 2.4 again – which would be 
very unfortunate as we need many of the nice features of 2.5.


Thanks a lot!


Re: FW: Question regarding backend connection rates

2021-11-22 Thread Froehlich, Dominik
Hi Willy,

Thanks for the response, yes I think that clarifies the rates for me.

I have another question you probably could help me with:

For ongoing connections (not total), the stats page shows a tooltip stating


  *   Current Active Connections
  *   Current Used Connections
  *   Current Idle Connections (broken down into safe and unsafe idle 
connections)

What is the difference between active and used connections? Which number 
combined with idle connections reflects the current number of open connections 
on the OS level? (i.e. using resources like fds, buffers, ports)
My ultimate goal is to answer the question “how loaded is this machine?” vs. a 
limit of open connections.

What’s the difference between safe and unsafe idle connections? Is it related 
to the http-reuse directive, e.g. private vs. non-private reusable connections?

Thank you so much,
D

From: Willy Tarreau 
Date: Saturday, 20. November 2021 at 10:01
To: Froehlich, Dominik 
Cc: haproxy@formilux.org 
Subject: Re: FW: Question regarding backend connection rates
Hi Dominik,

On Fri, Nov 19, 2021 at 08:42:40AM +, Froehlich, Dominik wrote:
> However, the number of "current sessions" at the backend is almost 0 all the
> time (between 0 and 5, the number of servers). When I look at the "total
> sessions" at the backend after the test, it tells me that 99% of connections
> have been reused. So in my book, when a connection is reused no new
> connection needs to be opened, that's why I am so stumped about the backend
> session rate. If 99% of sessions are reused, why is the rate of new sessions
> not 0?

This is because the "sessions" counter indicates the number of sessions
that used this backend. Sessions are idle in the frontend waiting for a
new request, and once the request is analysed by the frontend rules, it's
routed to a backend, at which point the counter is incremented. As such,
in a backend you'll essentially see as many sessions as requests.

The "new connections" counter, that was added after keep-alive support
was introduced many years ago will, however, indicate the real number of
new connections that had to be established to servers. And this is the
same for each "server" line by the way.

I've sometimes been wondering whether it could make sense to change the
"sessions/total" column in the stats page to show the number of new
connections instead, but it seems to me that it would not bring much
value and will only result in causing confusion to existing users. Given
that in both cases one will have to hover on the field to get the details,
it would not help I guess.

Hoping this helps,
Willy


Re: Supported certificate formats?

2021-08-03 Thread Froehlich, Dominik
Hi,

My question may have been a little misleading:
To be clear: The HAproxy config is PEM only for both the server certificate and 
the CA-file for client certificates.

The issue is that the client uses a p7b binary certificate and chain to connect 
to HAproxy. HAproxy then responds with a “unknown CA” error, even though the 
root of the client certificate is part of the CA-file.

That got me to think HAproxy maybe does not support clients using non-PEM 
client certificates. But I could not find any source as to what is actually 
supported.
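As a stop-gap we convert the customer’s bundle to PEM on their side before it 
is used (file names illustrative; whether this helps depends on how their 
client presents the chain):

```shell
# Convert a DER-encoded PKCS#7 (.p7b) bundle into a PEM certificate list.
openssl pkcs7 -inform DER -in client.p7b -print_certs -out client-chain.pem
```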

Best regards,
D

From: Илья Шипицин 
Date: Monday, 2. August 2021 at 20:14
To: "Froehlich, Dominik" 
Subject: Re: Supported certificate formats?

if you are familiar with Wireshark, I suggest to capture Client Hello <--> 
Server Hello.
certificates are displayed there, so you can see whether haproxy sends its 
certificate (and chain) or not.


my money would be on "if haproxy does not complain on config, so it loaded it 
properly, including certificates"

Mon, 2 Aug 2021 at 17:28, Froehlich, Dominik <dominik.froehl...@sap.com>:
Hi,

We have an issue with a client certificate in DER (binary) encoded PKCS7 format 
(.p7b).
The file contains the full certificate chain and the CA-file at HAproxy matches 
the root CA of the chain, so it should work.

However, the client connecting receives an “unknown CA” alert and HAproxy says 
“SSL client certificate not trusted”

My strong suspicion is that HAproxy only supports the PEM (text) encoded 
certificate format, but I haven’t found a definitive source in the 
documentation. There are only examples using PEM, so I assume this is the only 
supported format.

Can someone confirm / deny this or point me to a list of supported formats for 
certificates?

Thanks a lot,
Dominik


Supported certificate formats?

2021-08-02 Thread Froehlich, Dominik
Hi,

We have an issue with a client certificate in DER (binary) encoded PKCS7 format 
(.p7b).
The file contains the full certificate chain and the CA-file at HAproxy matches 
the root CA of the chain, so it should work.

However, the client connecting receives an “unknown CA” alert and HAproxy says 
“SSL client certificate not trusted”

My strong suspicion is that HAproxy only supports the PEM (text) encoded 
certificate format, but I haven’t found a definitive source in the 
documentation. There are only examples using PEM, so I assume this is the only 
supported format.

Can someone confirm / deny this or point me to a list of supported formats for 
certificates?

Thanks a lot,
Dominik


Re: SNI spoofing in HAproxy?

2021-07-05 Thread Froehlich, Dominik
Hi Tim,

I've played around with your solution a bit and I think I may have found two 
issues with it:

- It doesn't check if the client uses SNI at all and it will deny the request 
if no SNI is used
- It fails if the client adds a port to the host header

So to my understanding, it is perfectly fine for a client to not use SNI if 
there is only one certificate to be used.
HAproxy should deny requests only if SNI and host header don't match AND there 
is a value in SNI set.

As for the second part, I've come across some clients that set the port number 
in the host header, even if it is a well-known port.
HAproxy should accept that and compare SNI and host header based on name only, 
since SNI will never contain a port.

Here is my iteration of your solution:

  http-request set-var(txn.host) hdr(host),field(1,:)
  acl ssl_sni_http_host_match ssl_fc_sni,strcmp(txn.host) eq 0
  http-request deny deny_status 421 if !ssl_sni_http_host_match { ssl_fc_has_sni }

- I am using the field converter to strip away any ports from the host header
- I only deny requests that actually use SNI

What are your thoughts on this? I know that technically this would still allow 
clients to do this:

curl https://myhost.com:443 -H "host: myhost.com:1234"

this would then pass and not be denied. 
But I don't see any other choice: since SNI will never contain a port, I must 
ignore it in the comparison.

Best regards,
Dominik

On 25.06.21, 11:07, "Tim Düsterhus"  wrote:

    Dominik,

On 6/25/21 10:42 AM, Froehlich, Dominik wrote:
> Your code sends a 421 if the SNI and host header don't match.
> Is this the recommended behavior? The RFC is pretty thin here:
> 
> "   Since it is possible for a client to present a different server_name
> in the application protocol, application server implementations that
> rely upon these names being the same MUST check to make sure the
> client did not present a different name in the application protocol."
> (from https://datatracker.ietf.org/doc/html/rfc6066#section-11.1 )

Indeed it is. See the HTTP/2 RFC: 
https://datatracker.ietf.org/doc/html/rfc7540#section-9.1.1.

Additionally I would recommend using single-domain certificates (if 
possible), then a regular browser will never see a 421, because it will 
not perform connection reuse.

> My initial idea was to simply pave over any incoming headers with the SNI:
> 
>>  http-request set-header Host %[ssl_fc_sni] if { ssl_fc_has_sni }
> 
> This wouldn't abort the request with a 421 but I am not sure if I MUST 
abort it to be compliant.

This is equivalent to routing based off the SNI directly and it's as 
unsafe. If a browser reuses a TCP connection for an unrelated host you 
will send the request to the wrong backend.

> Regarding domain fronting, I thought I might be able to have the cake and 
eat it, too if I still allowed it but only prevented it for mTLS
> Requests. Maybe like this:
> 
>>  http-request set-header Host %[ssl_fc_sni] if { ssl_c_used }
> 
> 

Only performing checks when `ssl_c_used` is true should be safe. But I 
*strongly* recommend sending a 421 instead of attempting to fiddle 
around with the 'host' header. It's just too easy to accidentally 
perform incorrect routing. It would then look like this:

> http-request   set-var(txn.host)    hdr(host)
> http-request   deny deny_status 400 unless { req.hdr_cnt(host) eq 1 }
> # Verify that SNI and Host header match if a client certificate is sent
> http-request   deny deny_status 421 if { ssl_c_used } ! { ssl_fc_sni,strcmp(txn.host) eq 0 }

Best regards
Tim Düsterhus



Re: SNI spoofing in HAproxy?

2021-06-25 Thread Froehlich, Dominik
Tim,

Thank you for your reply.

Your code sends a 421 if the SNI and host header don't match.
Is this the recommended behavior? The RFC is pretty thin here:

"   Since it is possible for a client to present a different server_name
   in the application protocol, application server implementations that
   rely upon these names being the same MUST check to make sure the
   client did not present a different name in the application protocol."
(from https://datatracker.ietf.org/doc/html/rfc6066#section-11.1 )

My initial idea was to simply pave over any incoming headers with the SNI:

>   http-request set-header Host %[ssl_fc_sni] if { ssl_fc_has_sni }

This wouldn't abort the request with a 421 but I am not sure if I MUST abort it 
to be compliant.

Regarding domain fronting, I thought I might be able to have the cake and eat
it, too, if I still allowed it but only prevented it for mTLS
requests. Maybe like this:

>   http-request set-header Host %[ssl_fc_sni] if { ssl_c_used }


What are your thoughts?

Best regards,
Dom

On 24.06.21, 16:05, "Tim Düsterhus"  wrote:

    Dominik,

On 6/24/21 3:29 PM, Froehlich, Dominik wrote:
> Not sure if you would call this a security issue, hence I am asking this
> on the mailing list prior to opening a github issue:

This is also known as "Domain Fronting" 
(https://en.wikipedia.org/wiki/Domain_fronting). It's not necessarily a 
security issue and in fact might be the desired behavior in certain 
circumstances.

> I’ve noticed that it is really easy to bypass the check on client
> certificates of a domain when the client can present a valid certificate
> for another domain.

Indeed. I also use client certificates for authentication and I'm aware 
of this issue. That's why I validate that the SNI and Host match up for 
my set up. In fact I added the 'strcmp' converter to HAProxy just for 
this purpose. The documentation for 'strcmp' also gives an explicit 
example on how to use it to prevent domain fronting:

> http-request set-var(txn.host) hdr(host)
> # Check whether the client is attempting domain fronting.
> acl ssl_sni_http_host_match ssl_fc_sni,strcmp(txn.host) eq 0

https://cbonte.github.io/haproxy-dconv/2.4/configuration.html#7.3.1-strcmp

My full rules look like this:

>   # Verify that SNI and Host header match
>   http-request   set-var(txn.host) hdr(host)
>   http-request   deny deny_status 400 unless { req.hdr_cnt(host) eq 1 }
>   http-request   deny deny_status 421 unless { ssl_fc_sni,strcmp(txn.host) eq 0 }

-

> […]
> 
> My questions:
> 
> *   HAproxy does seem to treat SNI (L5) and HTTP Host Header (L7) as
> unrelated. Is this true?

Yes.

> *   Applications offloading TLS to HAproxy usually trust that mTLS
> requests coming in are validated correctly. They usually don’t revalidate
> the entire certificate again and only check for the subject’s identity.
> Is there a way to make SNI vs host header checking more strict?

Yes, see above.

> *   What’s the best practice to dispatch mTLS requests to backends?
> I’ve used a host header based approach here but it shows the above
> vulnerabilities.

You *must* use the 'Host' header for routing. Using the SNI value for 
routing is even more unsafe. But you also must validate that the SNI 
matches up or that the client presented a valid certificate.

Best regards
Tim Düsterhus



SNI spoofing in HAproxy?

2021-06-24 Thread Froehlich, Dominik
Hi,

Not sure if you would call this a security issue, hence I am asking this on the 
mailing list prior to opening a github issue:

I’ve noticed that it is really easy to bypass the check on client certificates 
of a domain when the client can present a valid certificate for another domain.

Consider this HAproxy config:

global
  log /dev/log len 4096 format rfc3164 syslog info

defaults
  log global
  mode http
  timeout connect 5s
  timeout client 1h
  timeout server 5s

frontend myfrontend
  bind :443 ssl crt /etc/cert/server.pem crt-list /crt-list
  log-format "%ci:%cp [%tr] (%ID) %ft %b/%s %TR/%Tw/%Tc/%Tr/%Tt %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r %sslc %sslv"

  capture request header Host len 256

  http-request set-header X-SSL-Client            %[ssl_c_used]            if { ssl_c_used }
  http-request set-header X-SSL-Client-Session-ID %[ssl_fc_session_id,hex] if { ssl_c_used }
  http-request set-header X-SSL-Client-Verify     %[ssl_c_verify]          if { ssl_c_used }
  http-request set-header X-SSL-Client-Subject-DN %{+Q}[ssl_c_s_dn]        if { ssl_c_used }
  http-request set-header X-SSL-Client-Subject-CN %{+Q}[ssl_c_s_dn(cn)]    if { ssl_c_used }
  http-request set-header X-SSL-Client-Issuer-DN  %{+Q}[ssl_c_i_dn]        if { ssl_c_used }
  http-request set-header X-SSL-Client-NotBefore  %{+Q}[ssl_c_notbefore]   if { ssl_c_used }
  http-request set-header X-SSL-Client-NotAfter   %{+Q}[ssl_c_notafter]    if { ssl_c_used }

  use_backend bob if { hdr(host) -m dom bob.com }
  use_backend alice if { hdr(host) -m dom alice.com }

  default_backend alice

backend alice
  server alice localhost:8080 check

backend bob
  server bob localhost:8081 check

---
So this HAproxy hosts two domains alice.com and bob.com. It uses the following 
crt-list to make TLS connections:

/etc/cert/server.pem [ca-file /alice.ca.pem verify required] *.alice.com
/etc/cert/server.pem [ca-file /bob.ca.pem verify required] *.bob.com

---
So any client connecting to alice.com must present a certificate signed by the 
Alice CA and any client connecting to bob.com must present a certificate signed 
by the Bob CA.


However, I noticed that HAproxy does allow me to “spoof” the host header to 
bob.com even though I did a TLS handshake with alice.com. The request will be 
forwarded to bob.com with the alice.com certificate:

curl -v -k --cert alice.com.crt --key alice.com.key \
  --resolve www.alice.com:9443:127.0.0.1 \
  https://www.alice.com:9443/headers -H "host: www.bob.com"

* Added www.alice.com:9443:127.0.0.1 to DNS cache
* Hostname www.alice.com was found in DNS cache
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to www.alice.com (127.0.0.1) port 9443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
(…)
> GET /headers HTTP/1.1
> Host: www.bob.com
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 200 OK
< date: Thu, 24 Jun 2021 13:07:17 GMT
< content-length: 578
< content-type: text/plain; charset=utf-8
<
Hello from Bob!

- Headers Received -
Accept : [*/*]
User-Agent : [curl/7.64.1]
X-Ssl-Client : [1]
X-Ssl-Client-Issuer-Dn : [/C=US/ST=California/L=AliceLand/O=Alice.com/CN=Alice.com Root CA/emailAddress=ad...@alice.com]
X-Ssl-Client-Notafter : [220624125634Z]
X-Ssl-Client-Notbefore : [210624125634Z]
X-Ssl-Client-Session-Id : [D941ECCAACAFEBC5CB3AE17794B54DC3DFC7549C401DB20D7EC5ADC48244D3D0]
X-Ssl-Client-Subject-Cn : [Alice]
X-Ssl-Client-Subject-Dn : [/C=US/ST=Michigan/L=Detroit/O=Alice.com/CN=Alice/emailAddress=al...@alice.com]
X-Ssl-Client-Verify : [0]

---
So basically anyone who can get a client certificate from Alice.com can use it 
to also connect to Bob.com without getting validated against Bob’s CA.

I’ve tested this with HAproxy 2.2.14.

My questions:

  *   HAproxy does seem to treat SNI (L5) and HTTP Host Header (L7) as 
unrelated. Is this true?
  *   Applications offloading TLS to HAproxy usually trust that mTLS requests 
coming in are validated correctly. They usually don’t revalidate the entire 
certificate again and only check for the subject’s identity. Is there a way to 
make SNI vs host header checking more strict?
  *   What’s the best practice to dispatch mTLS requests to backends? I’ve used 
a host header based approach here but it shows the above vulnerabilities.


Best regards,
Dom


HAproxy soft reload timeout?

2021-02-04 Thread Froehlich, Dominik
Hi,

I am currently experimenting with the HAproxy soft reload functionality (USR2 
to HAproxy master process).
From what I understood, a new worker is started and takes over the listening 
sockets while the established connections remain on the previous worker until 
they are finished.
The old worker then terminates itself once all work is done.

I’ve noticed there are some quite long-lived connections on my system (e.g. 
websockets or tcp-keepalive connections from slow servers). So when I am doing 
the reload, the previous instance
of HAproxy basically lives as long as the last connection is still going. Which 
potentially could go on forever.

So when I keep reloading HAproxy because the config has changed, I could end up 
with dozens, even hundreds of HAproxy instances running with old connections.

My question: Is there a way to tell the old haproxy instances how much time 
they have to get done with their work and leave?
I know it’s a tradeoff. I want my users to not be disrupted in their 
connections but I also need to protect the ingress machines from overloading.

Any best practices here?

Cheers,
D
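For readers hitting the same question in the archive: one directive that targets exactly this tradeoff (added in HAProxy 1.8, if memory serves) is the global `hard-stop-after`, which bounds how long an old worker may keep running after a soft reload or soft stop:

```
global
  # Force leftover workers from previous reloads to exit
  # (closing any remaining connections) after at most 30 minutes
  hard-stop-after 30m
```

With this in place, long-lived websocket or keep-alive connections can still drain gracefully, but old processes can no longer accumulate without bound.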


HAproxy 2.2.5 possible bug in ssl crt-list socket commands?

2020-12-11 Thread Froehlich, Dominik
Hi,

I am trying to implement a dynamic certificate updater for my crt-list in 
HAproxy 2.2.5.
I have noticed that somehow, when I update an existing certificate and add it 
to the crt-list twice, I can never remove it again.

Here is what I am doing at the moment:

Step 1: Add a new certificate

echo "new ssl cert mycert.pem " | nc -U haproxy.sock
echo -e "set ssl cert mycert.pem <<\n$(cat mycert.pem)\n" | nc -U haproxy.sock
echo "commit ssl cert mycert.pem " | nc -U haproxy.sock

Step 2: Reference the certificate in crt-list

echo -e "add ssl crt-list my-crt-list <<\nmycert.pem [verify none] myhost.mydomain.com\n" | nc -U haproxy.sock

Step 3: openssl check

openssl s_client -connect localhost:443 -servername myhost.mydomain.com
(…)
subject=… /CN=myhost.mydomain.com

So it works as expected.

Step 4: Update the certificate again

echo "new ssl cert mycert.pem " | nc -U haproxy.sock   (already exists)
echo -e "set ssl cert mycert.pem <<\n$(cat mycert.pem)\n" | nc -U haproxy.sock
echo "commit ssl cert mycert.pem " | nc -U haproxy.sock

Step 5: Add another reference in crt-list

echo -e "add ssl crt-list my-crt-list <<\nmycert.pem [verify none] myhost.mydomain.com\n" | nc -U haproxy.sock

There are now 2 references in the crt-list:
mycert.pem:7 [verify none] myhost.mydomain.com
mycert.pem:8 [verify none] myhost.mydomain.com

Step 6: Remove the old reference

echo "del ssl crt-list my-crt-list mycert.pem:7 " | nc -U haproxy.sock

This was my original idea of how to update certificates without a downtime.

However, I found an odd issue when trying to finally delete the crt-list entry 
and the certificate thereafter:

Step 7: Remove the last reference

echo "del ssl crt-list my-crt-list mycert.pem:8 " | nc -U haproxy.sock

The crt-list now contains no more references to the certificate.

Step 8: Delete certificate

echo "del ssl cert mycert.pem" | nc -U haproxy.sock
Can't remove the certificate: certificate 'mycert.pem' in use, can't be deleted!

Huh? Checking certificate status:

echo "show ssl cert mycert.pem" | nc -U haproxy.sock
Filename: mycert.pem
Status: Used

So I can’t delete the cert even though there is no reference to it.
But there seems to be still a “hidden” reference somehow, because I can still 
see it in TLS:

Step 9: openssl check again

openssl s_client -connect localhost:443 -servername myhost.mydomain.com
(…)
subject=… /CN=myhost.mydomain.com

This should not be. The crt-list no longer contains any SNI filter for this 
domain, so it should fall back to the default certificate.


Is this a bug or am I just using the socket API wrong somehow?
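As an aside, the intermediate add/del crt-list steps may not be needed for a pure rotation: as I read the 2.2 runtime API docs, `set ssl cert` followed by `commit ssl cert` swaps the certificate contents in place for every existing crt-list reference, so no second entry is required. A small helper for building the multi-line CLI payload (the socket path, certificate name and PEM file below are placeholders; `printf` is used instead of `echo -e`, which is not portable across shells):

```shell
#!/bin/sh
# Build the multi-line "set ssl cert" command for the HAProxy stats socket.
build_set_cert_cmd() {
  # $1 = certificate name as known to HAProxy, $2 = PEM file on disk
  printf 'set ssl cert %s <<\n%s\n' "$1" "$(cat "$2")"
}

# Against a live instance this would be piped into the socket, e.g.:
#   build_set_cert_cmd mycert.pem mycert.pem | nc -U haproxy.sock
#   echo "commit ssl cert mycert.pem" | nc -U haproxy.sock
```

This keeps a single crt-list entry alive for the certificate and avoids the stuck "Status: Used" state described above.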


Logging mTLS handshake errors

2020-11-18 Thread Froehlich, Dominik
Hi everyone,

Some of our customers are using mTLS to authenticate clients. There have been 
complaints that some certificates don’t work
but we don’t know why. To shed some light on the matter, I’ve tried to add more 
info to our log format regarding TLS validation:

log-format "%ci:%cp [%tr] (%ID) %ft %b/%s %TR/%Tw/%Tc/%Tr/%Tt %ST %B %CC %CS %tsc %ac/%fc/%bc/%sc/%rc %sq/%bq %hr %hs %{+Q}r %sslc %sslv %[ssl_fc_has_sni] %[ssl_c_used] %[ssl_fc_has_crt] %[ssl_c_err] %[ssl_c_ca_err]"


The new elements are

%[ssl_fc_has_sni] %[ssl_c_used] %[ssl_fc_has_crt] %[ssl_c_err] %[ssl_c_ca_err]

As I wanted to know if there is a validation error I added ssl_c_err so I would 
be able to look it up in openssl later.

However, whenever I try the config with a bad certificate (e.g. expired, not 
yet valid, etc.) I don’t see the log entry at all.
Instead I just get:

https-in/1: SSL client certificate not trusted

Only after I added

crt-ignore-err all

to the bind directive, I did see the actual error code in the log. But then, 
the certificate would always validate which is not what I want of course.

Any chance to get a meaningful log message on bad certificates?
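For readers on newer releases: if I remember the changelogs right, HAProxy 2.5 introduced `error-log-format` together with the `ssl_fc_err` and `ssl_fc_err_str` fetches, which log failed handshakes separately without having to loosen verification via `crt-ignore-err`. A hedged sketch (bind line and paths are placeholders):

```
frontend https-in
  bind :443 ssl crt /etc/cert/server.pem ca-file /client.ca.pem verify required

  # Used instead of log-format for connections that fail before a
  # complete request was received, e.g. TLS handshake failures
  error-log-format "%ci:%cp [%tr] %ft ssl_err=%[ssl_fc_err] %{+Q}[ssl_fc_err_str]"
```

On versions before 2.5, the "SSL client certificate not trusted" message appears to be the only signal available at the default verbosity.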


Best regards,
Dominik


Re: Several CVEs in Lua 5.4

2020-07-29 Thread Froehlich, Dominik
Hi Lukas,

Thanks for the reply. 
My query goes along the lines of which Lua version is compatible with HAproxy 
and contains fixes to those CVEs.
I could not find a specific instruction as to which Lua version can be used to 
build HAproxy / has been tested for production use.

We are consuming a bundled version (currently HAproxy 1.9.15 with Lua 5.3.5) 
but I don't know if it is safe to bump the Lua version only.

Thanks and regards,
D

On 29.07.20, 11:06, "Lukas Tribus"  wrote:

Hello,

On Wed, 29 Jul 2020 at 10:23, Froehlich, Dominik
 wrote:
>
> Hello everyone,
>
> Not sure if this is already addressed. Today I got a CVE report of
> several issues with Lua 5.3.5 up to 5.4.
>
> I believe Lua 5.4 is currently recommended to build with HAproxy 2.x?
>
> Before I open an issue on github I would like to ask if these are
> already known / addressed:

I don't understand, specifically what are you asking us to do here?
It's not like we ship LUA ...


Lukas



Several CVEs in Lua 5.4

2020-07-29 Thread Froehlich, Dominik
Hello everyone,

Not sure if this is already addressed. Today I got a CVE report of several 
issues with Lua 5.3.5 up to 5.4.
I believe Lua 5.4 is currently recommended to build with HAproxy 2.x?

Before I open an issue on github I would like to ask if these are already known 
/ addressed:

Lua 5.3.5 has a use-after-free in lua_upvaluejoin in lapi.c.
https://nvd.nist.gov/vuln/detail/CVE-2019-6706

Lua through 5.4.0 mishandles the interaction between stack resizes and garbage 
collection.
https://nvd.nist.gov/vuln/detail/CVE-2020-15888

Lua through 5.4.0 has a getobjname heap-based buffer over-read because 
youngcollection in lgc.c uses markold for an insufficient number of list 
members.
https://nvd.nist.gov/vuln/detail/CVE-2020-15889


Best regards,
D


MINOR: http: Fixed small typo in parse_http_return

2020-04-17 Thread Froehlich, Dominik
Hi,

While looking for the solution for another problem I found a couple of small 
typos in a warning.

Thanks for review/merge.

Regards,
Dominik Froehlich
dominik.fro...@gmail.com
dominik.froehl...@sap.com



0001-MINOR-http-Fixed-small-typo-in-parse_http_return.patch
Description: 0001-MINOR-http-Fixed-small-typo-in-parse_http_return.patch