Re: [ANNOUNCE] haproxy-2.2.18

2021-11-05 Thread Vincent Bernat
 ❦  5 November 2021 17:05 -06, Jim Freeman:

> Might this (or something 2.4-ish) be heading towards bullseye-backports ?
> https://packages.debian.org/search?keywords=haproxy
> https://packages.debian.org/bullseye-backports/

2.4 will be in bullseye-backports.
-- 
Don't patch bad code - rewrite it.
- The Elements of Programming Style (Kernighan & Plauger)



Re: [ANNOUNCE] haproxy-2.2.18

2021-11-05 Thread Jim Freeman
Might this (or something 2.4-ish) be heading towards bullseye-backports ?
https://packages.debian.org/search?keywords=haproxy
https://packages.debian.org/bullseye-backports/

Thanks,
...jfree

On Fri, Nov 5, 2021 at 8:51 AM Christopher Faulet 
wrote:

> Hi,
>
> HAProxy 2.2.18 was released on 2021/11/04. It added 66 new commits
> after version 2.2.17.
>
...


Re: [PATCH] CLEANUP: slz: Mark `reset_refs` as static

2021-11-05 Thread Tim Düsterhus

Willy,

On 11/5/21 4:43 PM, Willy Tarreau wrote:

Yes, sorry, but I will really deal with that stuff *after* we're done
with haproxy's release.


Yes, absolutely.

I just came across this point on my list while I was double-checking 
whether I had anything important / any loose ends for 2.5 and thought 
I'd send an email so that I don't forget about this myself :-)


Best regards
Tim Düsterhus



Re: [PATCH] CLEANUP: slz: Mark `reset_refs` as static

2021-11-05 Thread Willy Tarreau
On Fri, Nov 05, 2021 at 02:17:51PM +0100, Tim Düsterhus wrote:
> Willy,
> 
> On 10/11/21 5:15 PM, Tim Düsterhus wrote:
> > > > > > please also apply to https://github.com/wtarreau/libslz/.
> > > > > > [...]
> > > > > 
> > > > > Now applied, thanks!
> > > > 
> > > > Not seeing anything in the libslz repository yet. Did you forget to 
> > > > push?
> > > > :-)
> > > 
> > > No, I've applied to the haproxy copy only for now, will do slz later,
> > > and after double-checking that no legacy code still uses the function
> > > (because in the very first incarnation it had to be called from the
> > > main code, which was not very handy). But I guess that any such code
> > > I could find would just be test code.
> > 
> > Reminder in case you forgot about this rather than just not getting
> > around to it.
> > 
> 
> Another reminder, because you did not reply to my previous one.

Yes, sorry, but I will really deal with that stuff *after* we're done
with haproxy's release.

Thanks,
Willy



Re: [EXTERNAL] Re: [PATCH 1/2] MINOR: jwt: Make invalid static JWT algorithms an error in `jwt_verify` converter

2021-11-05 Thread Remi Tricot-Le Breton

Hello,

On 05/11/2021 08:48, Willy Tarreau wrote:

Hi Tim,

On Thu, Nov 04, 2021 at 07:12:04PM +0100, Tim Düsterhus wrote:

Your patch is already merged and the bug is fixed. However I'd like to
comment on the reasons behind why I refactored the whole function to use the
ist API:

I *strongly* dislike code that just works because of some implicit
assumptions that might or might not hold after future changes. This is C,
every messed up boundary check can easily result in some major issues.

In this specific case the implicit assumption is that all supported
algorithms are exactly 5 characters long. This is true with the current
implementation in HAProxy which just supports the HS, RS, ES, and PS
families. However the JOSE specification [1] also specifies other values for
the 'alg' field (though most of them for encrypted instead of just signed
tokens) and they have differing lengths.

If someone in the future wants to add support for a 6 character algorithm,
then they need to remember to add the length check to not run into the same
strncmp() issue like you did. My experience is that if a mistake is made
once, it is going to be made again.

The ist API already contains a large number of functions for *safe* handling
of strings that are not NUL terminated. It is very hard to misuse them,
because they all include the appropriate checks. If you see isteq(dynamic,
ist("static")) then it is very clear that this will match the string
"statis" exactly. For strncmp you will need to perform the length check
yourself. You also need to remember to manually subtract 1 for the trailing
NUL when using sizeof, whereas this is done automatically by the ist()
function.

While I am not an expert in optimization and getting the last percent of
performance out of a function, I don't think the ist API is going to be
measurably slower than strncmp. It is used everywhere within HTX after all!
For the JWT the OpenSSL crypto is going to take up the majority of the time
anyway.
We are definitely not yet at a point where performance is the biggest 
issue and I did not code it with this as my main focus. I simply use 
standard APIs because I know them better than ist.

Hope this helps to clarify why I'm strongly pushing the use of the ist API
and why I've contributed a fair number of additional functions in the past
when I encountered situations where the current functions were not ergonomic
to use. One example is the istsplit() function which I added for
uri_normalizer.c. It makes tokenization of strings very easy and safe.
I get why you like this API, it is indeed there to avoid silly mistakes 
but in this case replacing all the standard calls by ist ones to fix a 
simple bug (fixed by a single line patch) seemed a bit overkill.



I overall generally agree with your arguments above, as I created ist exactly
to address the kind of trouble that always ends up with someone spending 3
days on a bug in someone else's code just to discover a logic mistake such as
a string function called on a string that's not always nul-terminated, or
reverse scanning that can occasionally continue before the beginning of
the string, etc.

However, there is an important aspect to keep in mind, which is that Rémi
is the maintainer of this part and he needs to use what he feels at ease
with. I can easily understand that he's not much familiar with the usage
of ist.
That's exactly it. I'm not refusing to use this API, I just don't know 
it enough yet to use it spontaneously (especially when I don't interact 
with other parts of the code which might encourage me to use internal 
types).

While the functions are correctly documented and generally useful,
their coverage is not always complete (and you had a few times to add new
ones yourself), and we don't have a simple index of what's available with
a one-liner description, which doesn't help to get familiar with them.
Massive +1 for this one. Scrolling through the entire ist.h file to look 
for the function that answers your need (if there is even one) is 
quite painful.



In addition there are places where the input text is still made of nul
terminated strings, that require an extra conversion that can also be
error-prone, or just would add some code whose benefit is not immediately
perceptible.

I, too, hope that over time we make most of the code that deals with
strings coming from user data evolve towards a more generalized use of
ist for the reasons you mention. Safer, easier, friendlier API when
you're in a code area that already makes use of them. I will just not
force anyone to do that, though I can encourage by showing how to use
it. However you can be sure that I'll complain very loudly if I have
to debug something that was caused by unsafe code that could have been
avoided this way.

The performance aspect is indeed something that ought not be neglected
in two domains:
   - CPU: the functions are made to be inlined, and as their end is
 known, usually this results in trivial operations that can be
   

[ANNOUNCE] haproxy-2.2.18

2021-11-05 Thread Christopher Faulet

Hi,

HAProxy 2.2.18 was released on 2021/11/04. It added 66 new commits
after version 2.2.17.

After the 2.3, it is the 2.2's turn. This one contains almost the same fixes,
stacked over the last 2 months. Willy already did the hard part of describing
them. Thus, I'm shamelessly stealing everything from the 2.3.15
announcement:

  - if an HTTP/1 message was blocked for analysis waiting for some more
room, sometimes it could remain stuck indefinitely, leaving a few
never-expiring entries in "show sess".

  - A very old bug was fixed in the Lua part. The wrong function was used
to start Lua tasks, leading to a process freeze if the call was
performed when the time was wrapping, one millisecond every 49.7
days. On this exact millisecond, a Lua task could be queued
with no expiration date, preventing all subsequent timers from being
seen as expired.

  - Some bugs were fixed on the filters management to properly handle
client aborts and to be sure to always release allocated filters when
a stream is released.

  - The LDAP health-check was fixed to make it compatible with Active
Directory servers. The response parsing was improved to also support
servers using multi-byte length-encoding. Active Directory servers
seem to systematically encode message or element lengths on 4 bytes
while others use 1-byte length-encoding when possible. Now, 1-, 2-
and 4-byte length-encodings are supported. It should be good
enough to enable the LDAP health-check on Active Directory servers.
  - The build system was improved in many ways. Several -Wundef warnings
were fixed.

  - The HTTP "TE" header is now sanitized when a request is sent to a server.
Only the "trailers" token is sent. This is mandatory because HAProxy only
understands chunked encoding; other transfer encodings are not
supported.

  - A bug in the health-checks was fixed when a sample fetch depending on
the execution context was used in tcpcheck rulesets defined in a
defaults section.

  - tcp-request and tcp-response content rules evaluation is now
interrupted if a read error or the end of input is detected on the
corresponding channel. This change fixes a known bug in HAProxy 2.3
and prior. However, it does not seem to affect 2.4.

  - resolvers: there were a large number of structural issues in the
code, and quite frankly we're not proud of the solutions but it's
impossible to do something more elegant in the current state without
a major rewrite. So what matters here is that all race conditions are
addressed and that the code works reliably. While the 2.5 fixes add a
lookup tree to perform significant CPU savings on SRV records, that
code was not backported because it adds further changes that do not
seem necessary in the current situation. We got the confirmation from
one of the reporters that the issue is now fixed.

  - an interesting bug in the ring API caused boundary checks for the
wrapping at the end of the buffer to be shifted by one both in the
producer and the consumer, thus they both cancel each other and are
not observable... until the byte after the buffer is not mapped or
belongs to another area. One crash was met on boot (since startup
messages are duplicated into a ring for later retrieval), and it is
possible that those sending logs over TCP might have faced it as
well, otherwise it's extremely unlikely to be observed outside of
these use cases.

  - using the tarpit could lead to head-of-line blocking of an H2
connection as the pending data were not drained. And in other
protocols, the presence of these pending data could cause a wakeup
loop between the mux and the stream, which usually ended in the
process being detected as faulty and being killed by the safety
checks.

  - the h2spec tests in the CI were regularly failing on a few tests
expecting HTTP/2 GOAWAY frames that were sent (even seen in strace).
The problem was that we didn't perform a graceful shutdown and that
this copes badly with bidirectional communications as unread pending
data cause the connection to be reset and the frame to be lost. This
was addressed by performing a clean shutdown. It's unlikely that
anyone ever noticed this given that this essentially happens when
communication errors are reported (i.e. when the client has little
reason to complain).

  - some users complained that TLS handshakes were renewed too often in
some cases. Emeric found that with the migration to the muxes in
1.9-2.0 we've lost the clean shutdown on end of connection that's
also used to commit the TLS session cache entry. For HTTP/2 this was
addressed as a side effect of the fix above, and for HTTP/1, a fix
was produced to also perform a clean shutdown on keep-alive
connections (it used to work fine only for close ones).

  - the validity checks for sample fetch functions were 

Deleting only one of multiple headers based on value

2021-11-05 Thread Nick Muerdter
Hi,

I have a use-case where I'm trying to delete only specific "Set-Cookie" headers 
from a response. Since the "Set-Cookie" header can be repeated multiple times, 
the response might look like:

Set-Cookie: foo1=value1
Set-Cookie: foo2=value2
Set-Cookie: bar=value3

In my case, I'd only like to delete the "Set-Cookie: foo\d+" lines, but keep 
the "Set-Cookie: bar" line. Is this something that's possible in HAProxy 2?

I think this could previously be accomplished with the old "rspidel" option 
since it operated on matching full header lines. However, I can't figure out 
any way to accomplish this with the newer "http-response del-header" option 
since it deletes all headers matching a name without a way to target specific 
lines based on the value (as far as I can tell).

The "http-response replace-header" option can sort of be used to replace the 
value of these specific header lines based on values, but you can't actually 
delete the lines. For example, this is basically the same issue that's 
described here: 
https://stackoverflow.com/questions/67203673/cant-remove-just-one-set-cookie-header-in-haproxy-2-2
 Where a suggestion is to do something like "http-response replace-header 
Set-Cookie foo\d+ \2" which would result in an empty "Set-Cookie: " header, or 
"http-response replace-header Set-Cookie foo\d+ bogus-cookie=bogus-value". But 
those "Set-Cookie" header lines still remain, instead of being deleted entirely.
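For reference, the replace-header workaround discussed above can be written roughly as follows. This is a sketch only: the regex is illustrative, replace-header matches against the entire header value, and (as noted) the rewritten "Set-Cookie" line still remains in the response rather than being deleted:

```
# Rewrite only the Set-Cookie lines whose cookie name matches foo<digits>;
# "Set-Cookie: bar=value3" is left untouched.
http-response replace-header Set-Cookie foo\d+=.* bogus-cookie=bogus-value
```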

So is this deletion of specific header lines with a specific name and value 
(while keeping other header lines of the same name) something that's possible? 
Or is that no longer possible without something like rspidel that could match 
both the header name and value? This might be an uncommon use-case, but I was 
just trying to figure out if I was missing some other way to accomplish this.

Thank you!
Nick



Re: [PATCH] halog stuff

2021-11-05 Thread Tim Düsterhus , WoltLab GmbH

Willy,

On 10/28/21 6:50 PM, Willy Tarreau wrote:

Just one thing (but do not worry, I'll rearrange it later), I initially
took great care of limiting the output to 80 columns for having suffered
a few times from it, and I think it broke when time ranges were added.


This did not happen, yet. So here's a reminder before 2.5.

When you do, please also add the missing ']' in front of '< log'  in the 
help text. I accidentally removed it in 
66255f7bbf9dfa18545d96f87d7a0f6fb8684d1c.


Best regards
Tim Düsterhus
Developer WoltLab GmbH

--

WoltLab GmbH
Nedlitzer Str. 27B
14469 Potsdam

Tel.: +49 331 96784338

duester...@woltlab.com
www.woltlab.com

Managing director:
Marcel Werk

AG Potsdam HRB 26795 P



Re: [PATCH] CLEANUP: slz: Mark `reset_refs` as static

2021-11-05 Thread Tim Düsterhus

Willy,

On 10/11/21 5:15 PM, Tim Düsterhus wrote:

please also apply to https://github.com/wtarreau/libslz/.
[...]


Now applied, thanks!


Not seeing anything in the libslz repository yet. Did you forget to push?
:-)


No, I've applied to the haproxy copy only for now, will do slz later,
and after double-checking that no legacy code still uses the function
(because in the very first incarnation it had to be called from the
main code, which was not very handy). But I guess that any such code
I could find would just be test code.


Reminder in case you forgot about this rather than just not getting
around to it.



Another reminder, because you did not reply to my previous one.

Best regards
Tim Düsterhus



Re: OCSP with dynamic SSL storage

2021-11-05 Thread Marco Corte

On 2021-11-05 13:11, Marco Corte wrote:

Hi all.

I have a bind section that contains
... ssl crt ZZZ.pem ...

where ZZZ.pem is actually a full path.

If I upload a new certificate/key to ZZZ.pem and a corresponding OCSP
response to ZZZ.pem.ocsp and do a

# systemctl reload haproxy.service


then the certificate and the OCSP stapling are correct.
Moreover I can update the OCSP, when needed

# printf "set ssl ocsp-response <<\n$(base64 ZZZ.pem.ocsp)\n\n" |
socat /run/haproxy/admin.sock stdio
OCSP Response updated!



If, after updating the files, I use the following procedure, I am not
able to update the OCSP response

# printf "set ssl cert ZZZ.pem <<\n$(cat ZZZ.pem)\n\ncommit ssl cert
ZZZ.pem\n" | socat /run/haproxy/admin.sock stdio
Transaction created for certificate ZZZ.pem!

Committing ZZZ.pem..
Success!

# printf "set ssl ocsp-response <<\n$(base64 ZZZ.pem.ocsp)\n\n" |
socat /run/haproxy/admin.sock stdio
OCSP single response: Certificate ID does not match any certificate or 
issuer.



Since the two files ZZZ.pem and ZZZ.pem.ocsp are always the same, I
suspect that I am doing something wrong.
Am I skipping any step?

Thank you
Ciao!

.marcoc

Please note that I may have messed up with some commands while
anonymizing them in this email.


I forgot to mention the version: haproxy v2.4.8 on Ubuntu 18.04



OCSP with dynamic SSL storage

2021-11-05 Thread Marco Corte

Hi all.

I have a bind section that contains
... ssl crt ZZZ.pem ...

where ZZZ.pem is actually a full path.

If I upload a new certificate/key to ZZZ.pem and a corresponding OCSP 
response to ZZZ.pem.ocsp and do a


# systemctl reload haproxy.service


then the certificate and the OCSP stapling are correct.
Moreover I can update the OCSP, when needed

# printf "set ssl ocsp-response <<\n$(base64 ZZZ.pem.ocsp)\n\n" | socat 
/run/haproxy/admin.sock stdio

OCSP Response updated!



If, after updating the files, I use the following procedure, I am not 
able to update the OCSP response


# printf "set ssl cert ZZZ.pem <<\n$(cat ZZZ.pem)\n\ncommit ssl cert 
ZZZ.pem\n" | socat /run/haproxy/admin.sock stdio

Transaction created for certificate ZZZ.pem!

Committing ZZZ.pem..
Success!

# printf "set ssl ocsp-response <<\n$(base64 ZZZ.pem.ocsp)\n\n" | socat 
/run/haproxy/admin.sock stdio
OCSP single response: Certificate ID does not match any certificate or 
issuer.



Since the two files ZZZ.pem and ZZZ.pem.ocsp are always the same, I 
suspect that I am doing something wrong.

Am I skipping any step?

Thank you
Ciao!

.marcoc

Please note that I may have messed up with some commands while 
anonymizing them in this email.
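As an editorial aside: the "set ssl cert" command as transcribed appears to have lost the closing parenthesis of the `$(cat ZZZ.pem)` substitution during anonymization. A minimal sketch of building both CLI payloads with correct quoting follows (ZZZ.pem is the placeholder from the mail; stand-in file contents are used so the sketch runs without a live HAProxy, and the socat step is shown as a comment since it needs a real admin socket):

```shell
#!/bin/sh
set -e
# Stand-in files for the demo; in reality these are the uploaded
# certificate/key bundle and its OCSP response.
printf 'dummy cert\n' > ZZZ.pem
printf 'dummy ocsp\n' > ZZZ.pem.ocsp

# Note the closing ')' after $(cat ZZZ.pem), which the transcript lost.
cert_cmd=$(printf 'set ssl cert ZZZ.pem <<\n%s\n\ncommit ssl cert ZZZ.pem\n' \
           "$(cat ZZZ.pem)")
ocsp_cmd=$(printf 'set ssl ocsp-response <<\n%s\n\n' "$(base64 ZZZ.pem.ocsp)")

# Against a live instance these would be sent to the admin socket:
#   echo "$cert_cmd" | socat /run/haproxy/admin.sock stdio
#   echo "$ocsp_cmd" | socat /run/haproxy/admin.sock stdio
echo "$cert_cmd"
echo "$ocsp_cmd"
```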




Re: [EXTERNAL] Re: [PATCH 1/2] MINOR: jwt: Make invalid static JWT algorithms an error in `jwt_verify` converter

2021-11-05 Thread Willy Tarreau
Hi Tim,

On Thu, Nov 04, 2021 at 07:12:04PM +0100, Tim Düsterhus wrote:
> Your patch is already merged and the bug is fixed. However I'd like to
> comment on the reasons behind why I refactored the whole function to use the
> ist API:
> 
> I *strongly* dislike code that just works because of some implicit
> assumptions that might or might not hold after future changes. This is C,
> every messed up boundary check can easily result in some major issues.
> 
> In this specific case the implicit assumption is that all supported
> algorithms are exactly 5 characters long. This is true with the current
> implementation in HAProxy which just supports the HS, RS, ES, and PS
> families. However the JOSE specification [1] also specifies other values for
> the 'alg' field (though most of them for encrypted instead of just signed
> tokens) and they have differing lengths.
> 
> If someone in the future wants to add support for a 6 character algorithm,
> then they need to remember to add the length check to not run into the same
> strncmp() issue like you did. My experience is that if a mistake is made
> once, it is going to be made again.
> 
> The ist API already contains a large number of functions for *safe* handling
> of strings that are not NUL terminated. It is very hard to misuse them,
> because they all include the appropriate checks. If you see isteq(dynamic,
> ist("static")) then it is very clear that this will match the string
> "static" exactly. For strncmp you will need to perform the length check
> yourself. You also need to remember to manually subtract 1 for the trailing
> NUL when using sizeof, whereas this is done automatically by the ist()
> function.
> 
> While I am not an expert in optimization and getting the last percent of
> performance out of a function, I don't think the ist API is going to be
> measurably slower than strncmp. It is used everywhere within HTX after all!
> For the JWT the OpenSSL crypto is going to take up the majority of the time
> anyway.
> 
> Hope this helps to clarify why I'm strongly pushing the use of the ist API
> and why I've contributed a fair number of additional functions in the past
> when I encountered situations where the current functions were not ergonomic
> to use. One example is the istsplit() function which I added for
> uri_normalizer.c. It makes tokenization of strings very easy and safe.

I overall generally agree with your arguments above, as I created ist exactly
to address the kind of trouble that always ends up with someone spending 3
days on a bug in someone else's code just to discover a logic mistake such as
a string function called on a string that's not always nul-terminated, or
reverse scanning that can occasionally continue before the beginning of
the string, etc.

However, there is an important aspect to keep in mind, which is that Rémi
is the maintainer of this part and he needs to use what he feels at ease
with. I can easily understand that he's not much familiar with the usage
of ist. While the functions are correctly documented and generally useful,
their coverage is not always complete (and you had a few times to add new
ones yourself), and we don't have a simple index of what's available with
a one-liner description, which doesn't help to get familiar with them.
In addition there are places where the input text is still made of nul
terminated strings, that require an extra conversion that can also be
error-prone, or just would add some code whose benefit is not immediately
perceptible.

I, too, hope that over time we make most of the code that deals with
strings coming from user data evolve towards a more generalized use of
ist for the reasons you mention. Safer, easier, friendlier API when
you're in a code area that already makes use of them. I will just not
force anyone to do that, though I can encourage by showing how to use
it. However you can be sure that I'll complain very loudly if I have
to debug something that was caused by unsafe code that could have been
avoided this way.

The performance aspect is indeed something that ought not be neglected
in two domains:
  - CPU: the functions are made to be inlined, and as their end is
known, usually this results in trivial operations that can be
much faster than the usual ones. However the functions are designed
to deal optimally with short strings (like keywords, cookies etc)
and to result in optimal code for small loops (i.e. no function
call and the smallest possible setup code). Using them on large
blocks will result in lower performance than regular functions.
Looking up a string inside a body for example ought not be done
with them ;

  - size: the strings use two parts and they are thus larger than a
single pointer. In some structures the space could be extremely
tight and the cost of storing the length can sometimes cancel some
of the gains. But this is quite rare as almost all of our strings
are accompanied with a length nowadays.

My general rule 

Re: [PATCH] REGTESTS: Use `feature cmd` for 2.5+ tests (2)

2021-11-05 Thread Christopher Faulet

On 11/4/21 at 21:12, Tim Duesterhus wrote:

This patch effectively is identical to 7ba98480cc5b2ede0fd4cca162959f66beb82c82.


Merged, thanks Tim!

--
Christopher Faulet