Re: [PATCH] BUILD: ssl: Build with new cryptographic library AWS-LC

2023-08-09 Thread Ilya Shipitsin
Shall we unfreeze this activity?

On Tue, 18 Jul 2023 at 10:46, William Lallemand wrote:

> On Tue, Jul 18, 2023 at 09:11:33AM +0200, Willy Tarreau wrote:
> > I'll let the SSL maintainers check all this, but my sentiment is that in
> > general if there are differences between the libs, it would be better if
> > we have a special define for this one as well. It's easier to write and
> > maintain "#if defined(OPENSSL_IS_BORINGSSL) || defined(OPENSSL_IS_AWSLC)"
> > than making it appear sometimes as one of them, sometimes as the other.
> > That's what we had a long time ago and it was a real pain, every single
> > move in any lib would cause breakage somewhere. Being able to reliably
> > identify a library and handle its special cases is much better.
>
> I agree, we could even add a build option OPENSSL_AWSLC=1 like we've
> done with wolfssl, since this is a variant of the OpenSSL API. Then
> every supported feature could be activated with the HAVE_SSL_* defines
> in openssl-compat.h. Discovering the features with LibreSSL and
> BoringSSL version defines was a real mess; we are probably going to end
> up with a matrix of features supported by the different libraries.
>
> I'm seeing multiple defines that can be useful in haproxy:
>
> - OPENSSL_IS_AWSLC could be used as Willy said; that could be enough,
>   and we maybe won't need the build option.
>
> - OPENSSL_VERSION_NUMBER seems to be set to 0x1010107f, but is this
>   100% compatible with the OpenSSL 1.1.1 API?
>
> - AWSLC_VERSION_NUMBER_STRING seems to be the OPENSSL_VERSION_TEXT
>   counterpart, but I don't see an equivalent as a number. In
>   OpenSSL there is OPENSSL_VERSION_NUMBER, which is used for doing #if
>   (OPENSSL_VERSION_NUMBER >= 0x1010107f) in the code for example; this
>   is really important for maintenance if we want to support multiple
>   versions of AWS-LC.
>
> - AWSLC_API_VERSION: maybe this would be enough instead of the
>   VERSION_NUMBER. We could activate the HAVE_SSL_* defines using
>   OPENSSL_VERSION_NUMBER and this, as sketched below.
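
For illustration, a minimal sketch (not taken from any patch) of how such
gating could look in openssl-compat.h; the API version threshold and the
HAVE_SSL_KEYLOG toggle are assumptions for the example, not confirmed
values:

```
/* Hypothetical feature gating for AWS-LC in openssl-compat.h.
 * The threshold below is an assumed minimum AWSLC_API_VERSION,
 * not a value taken from a real tree. */
#if defined(OPENSSL_IS_AWSLC)
# if AWSLC_API_VERSION >= 16
#  define HAVE_SSL_KEYLOG 1
# endif
#endif
```
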
>
> > > To Alex's concern on API compatibility: yes AWS-LC is aiming to provide a
> > > more stable API. We already run integration tests with 6 other projects [2]
> > > including HAProxy. This will help ensure API compatibility going forward.
> > > What is your specific concern with ABI compatibility? Are you looking to take
> > > the haproxy executable built with OpenSSL libcrypto/libssl and drop in AWS-LC
> > > without recompiling haproxy? Or do that between AWS-LC libcrypto/libssl
> > > versions?
> >
> > I personally have no interest in cross-libs ABI compatibility because
> > that does not make much sense, particularly when considering that Openssl
> > does not support QUIC so by definition there will be many symbol-level
> > differences. Regarding aws-lc's libs over time, yes for the users it
> > would be desirable that within a stable branch it's possible to update
> > the library or the application in any order without having to rebuild
> > the application. We all know that it's something that only becomes
> > possible once the lib stabilizes enough to avoid invasive backports in
> > stable branches. I don't know what the current status is for aws-lc's
> > stable branches at the moment.
> >
>
> Agreed, cross-libs ABI is not useful, but the ABI should remain stable
> between minor releases so the library package can be updated without
> rebuilding every piece of software that depends on it.
>
> Regards,
>
>
> --
> William Lallemand
>
>


Re: [PATCH 0/2] CI changes

2023-08-09 Thread Willy Tarreau
On Sun, Aug 06, 2023 at 12:07:37AM +0200, Ilya Shipitsin wrote:
> fixed  'Unknown argument "groupinstall" for command "dnf5"'
> coverity scan CI rewritten without travis-ci wrapper

Both patches merged with the typo fixed. Thanks Ilya!
Willy



[ANNOUNCE] haproxy-2.6.15

2023-08-09 Thread Willy Tarreau
Hi,

HAProxy 2.6.15 was released on 2023/08/09. It added 73 new commits
after version 2.6.14.

As mentioned in the 2.8.2 announce, some moderate security issues were
addressed.

The high severity issues addressed in this version are the following:

  - performing multiple large-header replacements at once can sometimes
overwrite parts of the contents of the headers if the header size is
increased. This may happen with the "replace-header" action: when the
buffer gets too fragmented, a temporary one is needed to realign it,
then the two are permuted. But if this happens more than once, the
allocated temporary buffer could be the one that had just been used,
where live data will be overwritten by the new ones. This can cause
garbage to appear in headers, and might possibly trigger some asserts
depending on the damage and where this passes. This issue was reported
by Christian Ruppert.

  - the H3 decoder used to properly reject malformed header names, but
forgot to do so for header values, as was already done for H2. This
could theoretically be used to attack servers behind it, though for this
to happen, one would need to have a QUIC listener and a tool able to
send such malformed bytes (not a given).

  - the check for invalid characters in content-length header values doesn't
reject empty headers, which can pass through. And since they don't have
a value, they're not merged with the next ones, so it is possible to pass
a request that has both an empty content-length and a populated one.
Such requests are invalid and the vast majority of servers will reject
them. But there are certainly still a few non-compliant servers that
will only look at one of them, consider the empty value to equal zero,
and be fooled by this. Thus the problem is not so much for mainstream
users as for those who develop their own HTTP stack or who purposely use
haproxy to protect a known-vulnerable server, because those may be at
risk. This issue was reported by Ben Kallus of Dartmouth College and
Narf Industries. A CVE was filed for this one. There is a work-around,
though: simply rejecting requests containing an empty content-length
header will do the job:

 http-request deny if { hdr_len(content-length) 0 }

Then there are a bunch of lower severity ones, particularly:

  - URL fragments (the part that follows '#') are not allowed to be
sent on the wire, and their handling on the server side has long been
ambiguous. Historically most servers would trim them; nowadays, with
stronger specification requirements, most of them tend to simply reject
the request as invalid. Till now we did neither, so fragments could
appear at the end of the "path" sample fetch contents. This can be
problematic when path_end is used to route requests. For example, a
rule routing "{ path_end .png .jpg }" requests to a static server could
very well match "index.html#.png". The question of how best to proceed
in this case was put to other HTTP implementers and the consensus was
clearly that such requests should be actively rejected, which is even
specifically mandated in certain side-protocol specs. A measurement on
haproxy.org shows that such requests appear at a rate of roughly 1 per
million, and are either emitted by poorly written crawlers that
copy-paste blocks of text, or are sent by vulnerability scanners. Thus
a check was added for this corner case, which is now blocked by
default. Should anyone discover that they're hosting a bogus
application relying on this, the check can be reverted using "option
accept-invalid-http-request". This issue was reported by Seth Manesse
and Paul Plasil.

  - in H3, the FIN bit could be handled before the last frame was processed,
triggering an internal error.

  - H3: the presence of a content-length header was not reported internally,
causing the FCGI mux on the backend to stall during uploads from QUIC to
FCGI.

  - listener: the proxy's lock is needed in relax_listener(), otherwise we
risk a deadlock through an ABBA pattern that could happen when a listener
gets desaturated.

  - logging overly large messages to a ring can cause their loss, due to
the maxlen parameter not being accurately calculated.

  - quic: a few issues affect the retry tokens (used when a listener is
under flood): a check was missing on the dcid, which could probably
be used to try to create more than one connection per token; the
internal tick was used for the timestamp used in tokens instead of
the wall-clock time, causing a risk that a token will fail to
validate against another node from the same cluster; finally the
initial vector used for random token generation was not strong
enough. Missing parentheses in the PTO calculation formula could
possibly result in obscure bugs such as a connection probing
 

Re: sc-set-gpt with expression: internal error, unexpected rule->from=0, please report this bug!

2023-08-09 Thread Aurelien DARRAGON
>> I have no idea what causes it at the moment. A few things you could try,
>> in any order, to help locate the bug:
>>
>>   - check if it accepts it using "http-request sc-set-gpt" instead of
>> "tcp-request connection" so that we know if it's related to the ruleset
>> or something else ;
>>
> 
> Thanks, that seems to narrow the problem down.
> 
> "http-request sc-set-gpt" does work, so does "tcp-request session". I.e.
> the bug seems to depend on "tcp-request connection".
> 
> "session" works for me, for setting session variables it might even be
> necessary, but those might be avoidable by setting the conditional
> directly.
> (But not trivially since "sub()" only takes values or variables
> but not fetches and "-m int gt " only seem to takes direct
> values).

Indeed, according to both doc and code, sc-set-gpt and sc-set-gpt0 are
available from:

- tcp-request session
- tcp-request content
- tcp-response content
- http-request
- http-response
- http-after-response

According to the doc, they are also available from:
- tcp-request connection

However, the switch-cases in parse_set_gpt(), action_set_gpt(), and
action_set_gpt0() in stick_table.c don't allow this case, so it looks
like it was indeed forgotten when expr support was added for
sc-set-gpt0 in 0d7712dff0 ("MINOR: stick-table: allow sc-set-gpt0 to set
value from an expression").

We have the same issue with the sc-add-gpc action, which was largely
inspired by set-gpt: the switch-cases defined in parse_add_gpc() and
action_add_gpc() in stick_table.c don't allow tcp-request connection as
an origin either. Once both are fixed, rules like the sketch below
should parse.
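
For illustration, a minimal config sketch of what both fixes are meant to
allow at the connection level (the table name, indexes and values are
arbitrary):

```
frontend foo
    bind :8080
    tcp-request connection track-sc0 src table stick1
    # patch 1: set a gpt tag from an expression at connection time
    tcp-request connection sc-set-gpt(0,0) int(1)
    # patch 2: add to a gpc counter at connection time
    tcp-request connection sc-add-gpc(0,0) 1

backend stick1
    stick-table type ipv6 size 1m expire 1h store gpc(1),gpt(1)
```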

Please find the attached patches that should help solve the above issues.

Aurelien

From b66b401ddb36a4c686fa0df965492da204ba66a8 Mon Sep 17 00:00:00 2001
From: Aurelien DARRAGON 
Date: Wed, 9 Aug 2023 17:39:29 +0200
Subject: [PATCH 2/2] BUG/MINOR: stktable: allow sc-add-gpc from tcp-request
 connection

Following the previous commit's logic, we enable the use of sc-add-gpc
from tcp-request connection: support was probably forgotten in the first
place for sc-set-gpt0, and since sc-add-gpc was inspired by it, it
inherited the same omission.

As sc-add-gpc was implemented in 5a72d03a58 ("MINOR: stick-table: implement
the sc-add-gpc() action"), this should only be backported to 2.8.
---
 src/stick_table.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/src/stick_table.c b/src/stick_table.c
index 363269f01..b11e94961 100644
--- a/src/stick_table.c
+++ b/src/stick_table.c
@@ -2913,6 +2913,7 @@ static enum act_return action_add_gpc(struct act_rule *rule, struct proxy *px,
 			value = (unsigned int)(rule->arg.gpc.value);
 		else {
 			switch (rule->from) {
+			case ACT_F_TCP_REQ_CON: smp_opt_dir = SMP_OPT_DIR_REQ; break;
 			case ACT_F_TCP_REQ_SES: smp_opt_dir = SMP_OPT_DIR_REQ; break;
 			case ACT_F_TCP_REQ_CNT: smp_opt_dir = SMP_OPT_DIR_REQ; break;
 			case ACT_F_TCP_RES_CNT: smp_opt_dir = SMP_OPT_DIR_RES; break;
@@ -3013,6 +3014,7 @@ static enum act_parse_ret parse_add_gpc(const char **args, int *arg, struct prox
 			return ACT_RET_PRS_ERR;
 
 		switch (rule->from) {
+		case ACT_F_TCP_REQ_CON: smp_val = SMP_VAL_FE_CON_ACC; break;
 		case ACT_F_TCP_REQ_SES: smp_val = SMP_VAL_FE_SES_ACC; break;
 		case ACT_F_TCP_REQ_CNT: smp_val = SMP_VAL_FE_REQ_CNT; break;
 		case ACT_F_TCP_RES_CNT: smp_val = SMP_VAL_BE_RES_CNT; break;
-- 
2.34.1

From 0b3586a30a8181316477140daf56dd3309b1f6f1 Mon Sep 17 00:00:00 2001
From: Aurelien DARRAGON 
Date: Wed, 9 Aug 2023 17:23:32 +0200
Subject: [PATCH 1/2] BUG/MINOR: stktable: allow sc-set-gpt(0) from tcp-request
 connection

Both the documentation and the original developer's intent seem to suggest
that the sc-set-gpt/sc-set-gpt0 actions should be available from
tcp-request connection.

Yet because it was probably forgotten when expr support was added to
sc-set-gpt0 in 0d7712dff0 ("MINOR: stick-table: allow sc-set-gpt0 to
set value from an expression"), it doesn't work and reports this
kind of error:
 "internal error, unexpected rule->from=0, please report this bug!"

Fix the code to comply with the documentation and the expected
behavior.

This must be backported to every stable version.

[for < 2.5, as only sc-set-gpt0 existed back then, the patch must be
manually applied to skip irrelevant parts]
---
 src/stick_table.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/stick_table.c b/src/stick_table.c
index a2aa9c451..363269f01 100644
--- a/src/stick_table.c
+++ b/src/stick_table.c
@@ -2656,6 +2656,7 @@ static enum act_return action_set_gpt(struct act_rule *rule, struct proxy *px,
 			value = (unsigned int)(rule->arg.gpt.value);
 		else {
 			switch (rule->from) {
+			case ACT_F_TCP_REQ_CON: smp_opt_dir = SMP_OPT_DIR_REQ; break;
 			case ACT_F_TCP_REQ_SES: smp_opt_dir = SMP_OPT_DIR_REQ; break;
 			case ACT_F_TCP_REQ_CNT: smp_opt_dir = SMP_OPT_DIR_REQ; break;
 			case ACT_F_TCP_RES_CNT: smp_opt_dir = SMP_OPT_DIR_RES; break;
@@ -2724,6 +2725,7 @@ static enum act_return action_set_gpt0(struct act_rule *rule, st

[ANNOUNCE] haproxy-2.7.10

2023-08-09 Thread Willy Tarreau
Hi,

HAProxy 2.7.10 was released on 2023/08/09. It added 84 new commits
after version 2.7.9.

As mentioned in the 2.8.2 announce, some moderate security issues were
addressed.

The high severity issues addressed in this version are the following:

  - performing multiple large-header replacements at once can sometimes
overwrite parts of the contents of the headers if the header size is
increased. This may happen with the "replace-header" action: when the
buffer gets too fragmented, a temporary one is needed to realign it,
then the two are permuted. But if this happens more than once, the
allocated temporary buffer could be the one that had just been used,
where live data will be overwritten by the new ones. This can cause
garbage to appear in headers, and might possibly trigger some asserts
depending on the damage and where this passes. This issue was reported
by Christian Ruppert.

  - the H3 decoder used to properly reject malformed header names, but
forgot to do so for header values, as was already done for H2. This
could theoretically be used to attack servers behind it, though for this
to happen, one would need to have a QUIC listener and a tool able to
send such malformed bytes (not a given).

  - the check for invalid characters in content-length header values doesn't
reject empty headers, which can pass through. And since they don't have
a value, they're not merged with the next ones, so it is possible to pass
a request that has both an empty content-length and a populated one.
Such requests are invalid and the vast majority of servers will reject
them. But there are certainly still a few non-compliant servers that
will only look at one of them, consider the empty value to equal zero,
and be fooled by this. Thus the problem is not so much for mainstream
users as for those who develop their own HTTP stack or who purposely use
haproxy to protect a known-vulnerable server, because those may be at
risk. This issue was reported by Ben Kallus of Dartmouth College and
Narf Industries. A CVE was filed for this one. There is a work-around,
though: simply rejecting requests containing an empty content-length
header will do the job:

 http-request deny if { hdr_len(content-length) 0 }

Then there are a bunch of lower severity ones, particularly:

  - URL fragments (the part that follows '#') are not allowed to be
sent on the wire, and their handling on the server side has long been
ambiguous. Historically most servers would trim them; nowadays, with
stronger specification requirements, most of them tend to simply reject
the request as invalid. Till now we did neither, so fragments could
appear at the end of the "path" sample fetch contents. This can be
problematic when path_end is used to route requests. For example, a
rule routing "{ path_end .png .jpg }" requests to a static server could
very well match "index.html#.png". The question of how best to proceed
in this case was put to other HTTP implementers and the consensus was
clearly that such requests should be actively rejected, which is even
specifically mandated in certain side-protocol specs. A measurement on
haproxy.org shows that such requests appear at a rate of roughly 1 per
million, and are either emitted by poorly written crawlers that
copy-paste blocks of text, or are sent by vulnerability scanners. Thus
a check was added for this corner case, which is now blocked by
default. Should anyone discover that they're hosting a bogus
application relying on this, the check can be reverted using "option
accept-invalid-http-request". This issue was reported by Seth Manesse
and Paul Plasil.

  - the bwlim filter could cause a spinning loop in process_stream() due
to an expiration timer that was not reset.

  - in H3, the FIN bit could be handled before the last frame was processed,
triggering an internal error.

  - H3: the presence of a content-length header was not reported internally,
causing the FCGI mux on the backend to stall during uploads from QUIC to
FCGI.

  - listener: the proxy's lock is needed in relax_listener(), otherwise we
risk a deadlock through an ABBA pattern that could happen when a listener
gets desaturated.

  - logging overly large messages to a ring can cause their loss, due to
the maxlen parameter not being accurately calculated.

  - quic: when the free space in the buffer used to redispatch datagrams
wraps at the end, new datagrams may be dropped until it empties, due to
the buffer appearing full. This causes excess retransmits when multiple
connections come from the same IP:port.

  - quic: a few issues affect the retry tokens (used when a listener is
under flood): a check was missing on the dcid, which could probably
be used to try to create more than one connection per token; the
internal tick 

[ANNOUNCE] haproxy-2.8.2

2023-08-09 Thread Willy Tarreau
Hi,

HAProxy 2.8.2 was released on 2023/08/09. It added 73 new commits
after version 2.8.1.

It's one of those rare moments when I'm happy that we're a bit late on
releases, because it allowed us to include a backport for a vulnerability
reported this morning, saving all of us an extra release!

The high severity issues addressed in this version are the following:

  - performing multiple large-header replacements at once can sometimes
overwrite parts of the contents of the headers if the header size is
increased. This may happen with the "replace-header" action: when the
buffer gets too fragmented, a temporary one is needed to realign it,
then the two are permuted. But if this happens more than once, the
allocated temporary buffer could be the one that had just been used,
where live data will be overwritten by the new ones. This can cause
garbage to appear in headers, and might possibly trigger some asserts
depending on the damage and where this passes. This issue was reported
by Christian Ruppert.

  - the H3 decoder used to properly reject malformed header names, but
forgot to do so for header values, as was already done for H2. This
could theoretically be used to attack servers behind it, though for this
to happen, one would need to have a QUIC listener and a tool able to
send such malformed bytes (not a given).

  - the check for invalid characters in content-length header values doesn't
reject empty headers, which can pass through. And since they don't have
a value, they're not merged with the next ones, so it is possible to pass
a request that has both an empty content-length and a populated one.
Such requests are invalid and the vast majority of servers will reject
them. But there are certainly still a few non-compliant servers that
will only look at one of them, consider the empty value to equal zero,
and be fooled by this. Thus the problem is not so much for mainstream
users as for those who develop their own HTTP stack or who purposely use
haproxy to protect a known-vulnerable server, because those may be at
risk. This issue was reported by Ben Kallus of Dartmouth College and
Narf Industries. A CVE was filed for this one. There is a work-around,
though: simply rejecting requests containing an empty content-length
header will do the job:

 http-request deny if { hdr_len(content-length) 0 }

Then there are a bunch of lower severity ones, particularly:

  - URL fragments (the part that follows '#') are not allowed to be
sent on the wire, and their handling on the server side has long been
ambiguous. Historically most servers would trim them; nowadays, with
stronger specification requirements, most of them tend to simply reject
the request as invalid. Till now we did neither, so fragments could
appear at the end of the "path" sample fetch contents. This can be
problematic when path_end is used to route requests. For example, a
rule routing "{ path_end .png .jpg }" requests to a static server could
very well match "index.html#.png". The question of how best to proceed
in this case was put to other HTTP implementers and the consensus was
clearly that such requests should be actively rejected, which is even
specifically mandated in certain side-protocol specs. A measurement on
haproxy.org shows that such requests appear at a rate of roughly 1 per
million, and are either emitted by poorly written crawlers that
copy-paste blocks of text, or are sent by vulnerability scanners. Thus
a check was added for this corner case, which is now blocked by
default. Should anyone discover that they're hosting a bogus
application relying on this, the check can be reverted using "option
accept-invalid-http-request". This issue was reported by Seth Manesse
and Paul Plasil.

  - the bwlim filter could cause a spinning loop in process_stream() due
to an expiration timer that was not reset.

  - in H3, the FIN bit could be handled before the last frame was processed,
triggering an internal error.

  - H3: the presence of a content-length header was not reported internally,
causing the FCGI mux on the backend to stall during uploads from QUIC to
FCGI.

  - Lua/queue: some queued items could leak and progressively cause a slowdown
of queue:push().

  - listener: the proxy's lock is needed in relax_listener(), otherwise we
risk a deadlock through an ABBA pattern that could happen when a listener
gets desaturated.

  - logging overly large messages to a ring can cause their loss, due to
the maxlen parameter not being accurately calculated.

  - quic: when the free space in the buffer used to redispatch datagrams
wraps at the end, new datagrams may be dropped until it empties, due to
the buffer appearing full. This causes excess retransmits when multiple
connections come from the same IP:port.

  - quic:

Re: sc-set-gpt with expression: internal error, unexpected rule->from=0, please report this bug!

2023-08-09 Thread Johannes Naab
Hi Willy,

On 8/9/23 13:48, Willy Tarreau wrote:
> Hi Johannes,
> 
> On Wed, Aug 09, 2023 at 01:02:29PM +0200, Johannes Naab wrote:
>> Hi,
>>
>> I'm trying to use a stick table with general purpose tags (gpt) to do longer
>> term (beyond the window itself) maximum connection rate tracking:
>> - stick table with conn_rate and one gpt
>> - update/set gpt0 if the current conn_rate is greater than what is stored in 
>> the gpt.
>>
>> But I have trouble setting the gpt even from a trivial sample expression,
>> erroring during config parsing with `internal error, unexpected rule->from=0,
>> please report this bug!`.
> 
> At first glance I can't find a reason why your config would not work,
> so you've definitely discovered a bug.
> 
> I have no idea what causes it at the moment. A few things you could try,
> in any order, to help locate the bug:
> 
>   - check if it accepts it using "http-request sc-set-gpt" instead of
> "tcp-request connection" so that we know if it's related to the ruleset
> or something else ;
> 

Thanks, that seems to narrow the problem down.

"http-request sc-set-gpt" does work, so does "tcp-request session". I.e.
the bug seems to depend on "tcp-request connection".

"session" works for me, for setting session variables it might even be
necessary, but those might be avoidable by setting the conditional
directly.
(But not trivially since "sub()" only takes values or variables
but not fetches and "-m int gt " only seem to takes direct
values).

"tcp-request connection" state could be helpful to avoid TLS handshakes.


>   - please also try sc0-set-gpt(0) instead of sc-set-gpt(0,0), maybe there
> is something wrong in the latter's parser.
> 

That does not seem to make any difference.

>   - does your other test with "int(1)" as the expression also fail or did
> it work ? If it did work, maybe forcing a cast to integer on the variable
> using "var(proc.baz),add(0)" could work.
> 

Any expression fails in "tcp-request connection": even the trivial
"int(1)" fails, and "var(proc.baz),add(0)" fails as well.

> In any case some feedback on these points could be useful. The last two
> would be safe workarounds if they work.
> 
> 

For completeness, here is a running/working config for tracking the max
conn_rate (https://xkcd.com/979/):

```
frontend foo
    bind :::8080 v4v6
    default_backend bar
    tcp-request connection track-sc0 src table stick1

    ## track max conn_rate
    tcp-request session set-var(sess.prev_conn_rate) sc_get_gpt(0,0,stick1)
    tcp-request session set-var(sess.cur_conn_rate) sc_conn_rate(0,stick1)
    tcp-request session sc-set-gpt(0,0) var(sess.cur_conn_rate) if { var(sess.cur_conn_rate),sub(sess.prev_conn_rate) -m int gt 0 }

    http-response set-header cur-conn-rate %[var(sess.cur_conn_rate)]
    http-response set-header prev-conn-rate %[var(sess.prev_conn_rate)]

backend stick1
    stick-table type ipv6 size 1m expire 1h store conn_rate(10s),gpt(1)
```

Thanks!
Johannes


>> Config, output, and haproxy -vv below.
>>
>> Should this work, or do I misunderstand what sc-set-gpt can achieve?
> 
> For me it should work, and if there's a corner case that makes it
> impossible with your config, I'm not seeing it and we should report it
> in a much more user-friendly way!
> 
> Thanks!
> Willy
> 




Re: sc-set-gpt with expression: internal error, unexpected rule->from=0, please report this bug!

2023-08-09 Thread Willy Tarreau
Hi Johannes,

On Wed, Aug 09, 2023 at 01:02:29PM +0200, Johannes Naab wrote:
> Hi,
> 
> I'm trying to use a stick table with general purpose tags (gpt) to do longer
> term (beyond the window itself) maximum connection rate tracking:
> - stick table with conn_rate and one gpt
> - update/set gpt0 if the current conn_rate is greater than what is stored in 
> the gpt.
> 
> But I have trouble setting the gpt even from a trivial sample expression,
> erroring during config parsing with `internal error, unexpected rule->from=0,
> please report this bug!`.

At first glance I can't find a reason why your config would not work,
so you've definitely discovered a bug.

I have no idea what causes it at the moment. A few things you could try,
in any order, to help locate the bug:

  - check if it accepts it using "http-request sc-set-gpt" instead of
"tcp-request connection" so that we know if it's related to the ruleset
or something else ;

  - please also try sc0-set-gpt(0) instead of sc-set-gpt(0,0), maybe there
is something wrong in the latter's parser.

  - does your other test with "int(1)" as the expression also fail or did
it work ? If it did work, maybe forcing a cast to integer on the variable
using "var(proc.baz),add(0)" could work.

In any case some feedback on these points could be useful. The last two
would be safe workarounds if they work; a sketch of both follows.
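
As a sketch only (untested, reusing the proc.baz variable from the report
and echoing the forms suggested above), the two candidate workarounds
would look like this:

```
# the legacy sc0- form instead of sc-set-gpt(0,0)
tcp-request connection sc0-set-gpt(0) var(proc.baz)

# forcing a cast to integer on the variable with add(0)
tcp-request connection sc-set-gpt(0,0) var(proc.baz),add(0)
```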


> Config, output, and haproxy -vv below.
> 
> Should this work, or do I misunderstand what sc-set-gpt can achieve?

For me it should work, and if there's a corner case that makes it
impossible with your config, I'm not seeing it and we should report it
in a much more user-friendly way!

Thanks!
Willy



sc-set-gpt with expression: internal error, unexpected rule->from=0, please report this bug!

2023-08-09 Thread Johannes Naab
Hi,

I'm trying to use a stick table with general purpose tags (gpt) to do longer 
term (beyond the window itself) maximum connection rate tracking:
- stick table with conn_rate and one gpt
- update/set gpt0 if the current conn_rate is greater than what is stored in 
the gpt.

But I have trouble setting the gpt even from a trivial sample expression, 
erroring during config parsing with `internal error, unexpected rule->from=0, 
please report this bug!`.

Config, output, and haproxy -vv below.

Should this work, or do I misunderstand what sc-set-gpt can achieve?

Best regards,
Johannes


config
```
global
    log stdout format raw local0
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s

    set-var proc.baz int(3)

defaults
    log global
    mode http
    timeout connect 5000
    timeout client  5
    timeout server  5

frontend foo
    bind :::8080 v4v6
    default_backend bar
    tcp-request connection track-sc0 src table stick1
    tcp-request connection sc-set-gpt(0,0) var(proc.baz)
    # tcp-request connection sc-set-gpt(0,0) int(1)
    http-response set-header conn-rate %[sc_get_gpt(0,0,stick1)]

    ## track max conn_rate
    #tcp-request connection set-var(sess.prev_conn_rate) sc_get_gpt(0,0,stick1)
    #tcp-request connection set-var(sess.cur_conn_rate) sc_conn_rate(0,stick1)
    #tcp-request connection sc-set-gpt(0,0) var(sess.cur_conn_rate) if { var(sess.cur_conn_rate),sub(sess.prev_conn_rate) -m int gt 0 }

backend bar
    server localhost 127.0.0.1:80

backend stick1
    stick-table type ipv6 size 1m expire 1h store conn_rate(10s),gpt(1)
```

error
```
# ./haproxy -f ~/haproxy.cfg
[NOTICE]   (139304) : haproxy version is 2.9-dev2-227317-63
[NOTICE]   (139304) : path to executable is ./haproxy
[ALERT]    (139304) : config : parsing [/root/haproxy.cfg:19] : internal error, unexpected rule->from=0, please report this bug!
[ALERT]    (139304) : config : Error(s) found in configuration file : /root/haproxy.cfg
[ALERT]    (139304) : config : Fatal errors found in configuration.
```

`haproxy -vv` (initially on 2.6, but it still occurs in recent git)
```
HAProxy version 2.9-dev2-227317-63 2023/08/09 - https://haproxy.org/
Status: development branch - not safe for use in production.
Known bugs: https://github.com/haproxy/haproxy/issues?q=is:issue+is:open
Running on: Linux 5.15.0-73-generic #80-Ubuntu SMP Mon May 15 15:18:26 UTC 2023 x86_64
Build options :
  TARGET  = linux-glibc
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -g -Wall -Wextra -Wundef -Wdeclaration-after-statement 
-Wfatal-errors -Wtype-limits -Wshift-negative-value -Wshift-overflow=2 
-Wduplicated-cond -Wnull-dereference -fwrapv -Wno-address-of-packed-member 
-Wno-unused-label -Wno-sign-compare -Wno-unused-parameter -Wno-clobbered 
-Wno-missing-field-initializers -Wno-cast-function-type -Wno-string-plus-int 
-Wno-atomic-alignment
  OPTIONS = USE_OPENSSL=1 USE_SYSTEMD=1 USE_PCRE=1
  DEBUG   = -DDEBUG_STRICT -DDEBUG_MEMORY_POOLS

Feature list : -51DEGREES +ACCEPT4 +BACKTRACE -CLOSEFROM +CPU_AFFINITY +CRYPT_H 
-DEVICEATLAS +DL -ENGINE +EPOLL -EVPORTS +GETADDRINFO -KQUEUE -LIBATOMIC 
+LIBCRYPT +LINUX_SPLICE +LINUX_TPROXY -LUA -MATH -MEMORY_PROFILING +NETFILTER 
+NS -OBSOLETE_LINKER +OPENSSL -OPENSSL_WOLFSSL -OT +PCRE -PCRE2 -PCRE2_JIT 
-PCRE_JIT +POLL +PRCTL -PROCCTL -PROMEX -PTHREAD_EMULATION -QUIC 
-QUIC_OPENSSL_COMPAT +RT +SHM_OPEN +SLZ +SSL -STATIC_PCRE -STATIC_PCRE2 
+SYSTEMD +TFO +THREAD +THREAD_DUMP +TPROXY -WURFL -ZLIB

Default settings :
  bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with multi-threading support (MAX_TGROUPS=16, MAX_THREADS=256, default=2).
Built with OpenSSL version : OpenSSL 3.0.2 15 Mar 2022
Running on OpenSSL version : OpenSSL 3.0.2 15 Mar 2022
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2 TLSv1.3
OpenSSL providers loaded : default
Built with network namespace support.
Built with libslz for stateless compression.
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT 
IP_FREEBIND
Built with PCRE version : 8.39 2016-06-14
Running on PCRE version : 8.39 2016-06-14
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Encrypted password support via crypt(3): yes
Built with gcc compiler version 11.4.0

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
 h2 : mode=HTTP  side=FE|BE  mux=H2    flags=HTX|HOL_RISK|NO_UPG
   fcgi : mode=HTTP  side=BE mux=FCGI  flags=HTX|HOL_RISK|NO_UPG
   <default> : mode=HTT