Re: [PATCH] BUG/MINOR: tools: fix parsing "us" unit for timers

2021-04-02 Thread Thayne McCombs
Oh, and this should be backported to all supported versions.



[PATCH] BUG/MINOR: tools: fix parsing "us" unit for timers

2021-04-02 Thread Thayne McCombs
Commit c20ad0d8dbd1bb5707bbfe23632415c3062e046c (BUG/MINOR: tools: make
parse_time_err() more strict on the timer validity) broke parsing the "us"
unit in timers. It caused `parse_time_err()` to return the string "s",
which indicates an error.

Now if the "u" is followed by an "s" we properly continue processing the
time instead of immediately failing.

This fixes https://github.com/haproxy/haproxy/issues/1209
---
 src/tools.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/tools.c b/src/tools.c
index 546ad4416..4924ad1a0 100644
--- a/src/tools.c
+++ b/src/tools.c
@@ -2326,6 +2326,7 @@ const char *parse_time_err(const char *text, unsigned *ret, unsigned unit_flags)
 		if (text[1] == 's') {
 			idiv = 1000000;
 			text++;
+			break;
 		}
 		return text;
 	case 'm': /* millisecond : "ms" or minute: "m" */
--
2.31.1
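
For context, timer values in the configuration take an optional unit suffix
parsed by parse_time_err(); a minimal sketch (values are arbitrary) showing
where the "us" suffix matters:

    defaults
        timeout connect 5s
        timeout client  30s
        timeout server  30s
        # before this fix, the "us" suffix was rejected at parsing time
        timeout http-request 5000us   # 5000 microseconds = 5 ms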




[ANNOUNCE] haproxy-2.4-dev15

2021-04-02 Thread Willy Tarreau
Hi,

HAProxy 2.4-dev15 was released on 2021/04/02. It added 69 new commits
after version 2.4-dev14.

I feel like we haven't done much this week due to the time spent dealing
with the recent regressions in 2.3 and 2.2 :-/

With this said, we could still merge some long-pending stuff and continue
the cleanups:

  - Christopher finally merged his long-term stabilization updates for
the "tcp-request content" rule sets. The problem with this ruleset
nowadays is that when used with HTTP, the L6 matches (those relying
on req.len, req.payload) mean nothing as they just see the internal
HTX contents. There is an emulation layer in place to decode HTTP on
the fly, but at the TCP level it is meaningless. But these were sometimes
needed in setups where a TCP frontend branches to an HTTP backend,
leading to an implicit TCP->HTTP upgrade, in which case the rules
would apply to TCP for the first request, or to HTTP for the next
ones. And to add to the fun, I don't even remember what happens if
a TCP->HTTP upgrade is done during a frontend-to-backend transition
and an H2 upgrade is required, since all requests will have to pass
in turn through the frontend again. Well, no need to enter into the
long list of details, it's become a complete mess. We figured that
the root cause of the problem was that users have valid reasons to
use tcp-request rules in a TCP frontend and to switch to HTTP backends,
given that it was not possible to use http-request rules in the frontend.
What was done was the addition of a new "switch-mode" action to the
tcp-request ruleset, which ends the TCP analysis and switches to HTTP,
where HTTP rules can be used. This will result in the ability to write
cleaner configs in the future, where TCP is used only for TCP and HTTP
is used everywhere else. Of course current working configs continue to
work, but we can hope that over the course of a few years the tricky
and unreliable ones will fade away (I think most users already noticed
that TCP rules don't work exactly the same with H1 and H2 and tried to
achieve something better).

  - Amaury added a long-awaited feature which is a diagnostic mode for the
config: certain constructions are valid but suspicious, and we've often
hesitated about whether to add a warning for them. For me the rule has
always been quite simple: there must always be a way to express a valid
config without any warning, to encourage users to fix them. But because
of this, certain mistakes are hard to spot and can cause trouble. This
was the goal of the diag-mode: start haproxy with -dD and watch the
suggestions. It may report things that are totally valid for you but
uncommon, or others that are the cause of your trouble. Since the
addition is new, only a few checks were added (servers with weight 0
which sometimes result from a transient bug in a config generator,
servers with the same cookie value, nbthread being specified more than
once, out-of-order global sections). But the goal is to add more over
time now that the infrastructure is in place, and these are things we
can easily decide to backport later if they help users troubleshoot
their setups.

  - I cleaned up the tests/ and contrib/ directories. The tests/ directory
is now split into conf (test configs), exp (experimental stuff for
developers), unit (unit tests for certain code areas). I expect it to
become dirty again over time; it's not a big deal. The contrib/
directory however was a bit more challenging. I managed to establish a
classification between the following groups:
  - development tools (code generators, debugging aids, etc). These
were moved to dev/. Those depending on any include file are now
built from the main makefile with automatic compiler options so
that we don't take a shower of warnings anymore. In addition this
will ensure that certain flags match what is used elsewhere.

  - admin tools (halog, systemd unit, selinux configs etc) were moved
to admin/. Again those which need some includes are now built from
the main makefile (e.g. halog).

  - optional addons which depend on 3rd-party products or popular tools
(device detection, promex, opentracing) were moved to addons/. Some
were slightly renamed (51d->51degrees, prometheus-exporter->promex,
opentracing->ot) so that they all have a USE_xxx equivalent that
matches the same name. Now using USE_PROMEX=1 is enough to build
the prometheus exporter, no need for EXTRA_OBJS=... anymore. Some
parts of the makefile could be moved there, as was done for opentracing.
Note, I think that some of the doc for the device detection addons
could be moved to their respective directories, which would further
simplify their discovery by users and even possibly their

Re: Patch: DOC: clarify that compression works for HTTP/2

2021-04-02 Thread Willy Tarreau
On Mon, Mar 29, 2021 at 12:47:06PM +0200, Julien Pivotto wrote:
> Dear,
> 
> Please find a patch attached with a small fix for the documentation.

Thank you Julien, now applied!
Willy



Re: 'show errors' - logging & reasons

2021-04-02 Thread Willy Tarreau
Hi Robin,

On Thu, Apr 01, 2021 at 06:03:39PM +, Robin H. Johnson wrote:
> Hi,
> 
> I'm wondering if there is any ongoing development or improvement plans
> around the 'show errors' functionality?

We should improve it. There has long been a wish to keep a history in a
rotating buffer, but in the meantime H2 arrived with muxes, and errors
are now produced at a level where it's a bit more complicated to pass
contextual info. But at least they're still there.
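
For the record, the capture is read from the CLI on the stats socket, e.g.
(the socket path below is only an example):

    echo "show errors" | socat stdio /var/run/haproxy.sock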

> This has come out of cases where we upgraded HAProxy 1.8 -> 2.2, and
> $work customers started reporting requests that previously worked fine
> now return 400 Invalid Request errors.

That's never good. Often it indicates that they've long been doing
something wrong without ever noticing and that for various reasons
it's not possible anymore.

> Two things stand out:
> - way to reliably capture that output, not being limited to the last
>   error

Ideally I'd like to have a global ring buffer to replace all per-proxy
ones (which also consume a lot of RAM). We could imagine keeping the
ability to have a per-proxy copy based on a config option.

> - messaging about WHY a given position is an error
>   - partial list of reasons I've seen so far included below

In a capture, the indicated position is *always* an invalid character.
It may require having a look at the standards to know why, but it
seems particularly difficult to me to start emitting the list of all
permitted characters whenever a forbidden one is met. I remember that
we emit the parser's state; maybe this is what should be turned into a
more human-readable form to indicate "in header field name" or "in
header field value" or something like this, which can help the reader
figure out why the char is not welcome (maybe because they expected that
a previous one had switched the state).

> Partial list of low-level invalid request reasons
> - path/queryparams has character that was supposed to be encoded
> - header key invalid character for given position
> - header line malformed (no colon!)
> - header value invalid relative to prior pieces of request**

For the last one we will not have a position because the request is
*semantically* invalid, it's not a parsing issue.

> ** This one is bugging me: user requests with an absolute URI as the
> urlpath, but the hostname in that URI does not match the hostname in the
> Host header.

This is mandated by the standard:

   https://tools.ietf.org/html/rfc7230#section-5.4

   If the target URI includes an authority component, then a
   client MUST send a field-value for Host that is identical to that
   authority component, excluding any userinfo subcomponent and its "@"
   delimiter (Section 2.7.1).
   ...
   A server MUST respond with a 400 (Bad Request) status code to any
   HTTP/1.1 request message that lacks a Host header field and to any
   request message that contains more than one Host header field or a
   Host header field with an invalid field-value.
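
For illustration (host names made up), a request of this shape trips that
rule, because the authority in the request line and the Host field disagree:

   GET http://app.example.com/ HTTP/1.1
   Host: api.example.org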

Do you regularly encounter this case ? If so maybe we could have an
option to relax the rule in certain cases. The standard allows proxies
to ignore the provided Host field and rewrite it from the authority.
Note that we're not a proxy but a gateway, and it's often the case that
a number of other gateways (and possibly proxies) have already been
traversed since the client, so we wouldn't do that by default, but with
a compelling case I wouldn't find this problematic.

Regards,
Willy



Re: [PATCH] BUG/MINOR: opentracing: initialization after establishing daemon mode

2021-04-02 Thread Willy Tarreau
Hi Miroslav,

On Fri, Apr 02, 2021 at 01:44:31PM +0200, Miroslav Zagorac wrote:
> Hello,
> 
> This patch solves the problem reported in github issue #1204, where the
> OpenTracing filter cannot communicate with the selected tracer if HAProxy is
> run in daemon mode.
> 
> This patch should be backported to all branches where the OpenTracing filter
> is located.

Now merged, thank you!

Willy



Re: Clients occasionally see truncated responses

2021-04-02 Thread Willy Tarreau
On Thu, Apr 01, 2021 at 06:48:59PM -0700, Nathan Konopinski wrote:
> Sorry about that. Here's a capture of the problem. Clients report errors
> when they get two TLS app data packet responses. Nginx always sends a
> single and haproxy usually but not always sends a single. The content
> length of the response is always the same, just under 13.9kb. [image: mailing_list.jpg]

Well, it should be irrelevant, as the TLS layer is free to segment its
records as it desires. It's even valid to send everything in 1-byte
records. So either these two records appear when something else is going
wrong and are just the visible symptom of another problem, or the client
doesn't process records correctly.

At this point I have no idea, and TLS tends to leave me speechless as
usual since it's designed to prevent debugging :-(

Willy



[ANNOUNCE] haproxy-2.2.13

2021-04-02 Thread William Lallemand
Hi,

HAProxy 2.2.13 was released on 2021/04/02. It added 2 new commits
after version 2.2.12.

This version is released shortly after 2.2.12 because a regression was
found: people using multi-certificates bundles are not able to start
haproxy anymore. This does not happen if you use separate certificates
in your configuration. This regression affects 2.2 only.

The multi-certificates bundle is an old feature which was implemented for old
versions of OpenSSL to achieve ECDSA + RSA on the same bind line; with new
versions of OpenSSL it is not required anymore.

A lot of people are still using this feature, but if you have at least
OpenSSL 1.1.0 it is recommended to migrate to separate crt keywords in your
configuration. You can even have per-certificate configuration using a
crt-list, which was not possible with a multi-certificates bundle.
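
As an illustration (paths and file names are only examples), a bundle-based
bind line such as:

    bind :443 ssl crt /etc/haproxy/certs/site.pem

which relies on site.pem.rsa and site.pem.ecdsa being picked up
automatically, can be replaced either by listing the certificates explicitly:

    bind :443 ssl crt /etc/haproxy/certs/site.pem.rsa crt /etc/haproxy/certs/site.pem.ecdsa

or by a crt-list, which also allows per-certificate settings:

    bind :443 ssl crt-list /etc/haproxy/certs/site.list

    # site.list
    /etc/haproxy/certs/site.pem.rsa
    /etc/haproxy/certs/site.pem.ecdsa [alpn h2,http/1.1]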

As usual, it is strongly encouraged to update to this version.

#
Please find the usual URLs below :
   Site index   : http://www.haproxy.org/
   Discourse: http://discourse.haproxy.org/
   Slack channel: https://slack.haproxy.org/
   Issue tracker: https://github.com/haproxy/haproxy/issues
   Wiki : https://github.com/haproxy/wiki/wiki
   Sources  : http://www.haproxy.org/download/2.2/src/
   Git repository   : http://git.haproxy.org/git/haproxy-2.2.git/
   Git Web browsing : http://git.haproxy.org/?p=haproxy-2.2.git
   Changelog: http://www.haproxy.org/download/2.2/src/CHANGELOG
   Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/


---
Complete changelog :
William Lallemand (2):
  BUG/MEDIUM: ssl: ckch_inst->ctx not assigned with multi-bundle certificates
  REGTESTS: ssl: "set ssl cert" and multi-certificates bundle

---

-- 
William Lallemand



Minnesota Nonprofit Workers Form Local Union

2021-04-02 Thread Inside Charity
Minnesota Nonprofit Workers Form Local Union

         

(Because of your important work with CHARITABLE ORGANIZATIONS)

VISIT HERE or press Handshake below

 

https://jimmylarose.lt.acemlnc.com/Prod/link-tracker?redirectUrl=aHR0cHMlM0ElMkYlMkZpbnNpZGVjaGFyaXR5Lm9yZyUyRjIwMjElMkYwNCUyRjAxJTJGbm9ucHJvZml0LXdvcmtlcnMtdW5pb25pemUtZ29vZC1vci1iYWQtZm9yLWNoYXJpdHklMkY==26605550=jimmylarose%2Eactivehosted%2Ecom=vbeBh1HJZ6KtT6pdCg2Rk3b7ImIKGlCdLGgtNBedJm0%3D=b4e69d94bf077ab3e7944b1f14cd3206=77A195A29A425

 

Dear Nonprofit Execs,

Workers at the Minnesota Council of Nonprofits (MCN) announced in March
they are unionizing. The announcement came two days after they asked
management of the organization to grant voluntary recognition during a
biweekly staff meeting. Minnesota Council of Nonprofits' Executive
Director Jon Pratt has not yet voluntarily recognized the union.

The Minnesota StarTribune reported that "worker-level employees" of the Minnesota
Council of Nonprofits wanted to join the Minnesota Newspaper and
Communications Guild.  As standard procedure, Jon Pratt could have
recognized workers' right to join a union but did not.  So, the union
filed for an election to take place in April to formally establish the
local unit and begin contract negotiations in good faith. The first
hearing by MCN's board of directors was to be held on 16 March 2021.

The Minnesota Newspaper and Communication Guild issued a formal
statement on 10 March 2021 about nonprofit workers who wish to join
their union. In the official Guild release it stated workers at MCN
wanted to join a union because, "frustration built up over time when
policy changes, decisions about workload, and new programming ideas
happened without the input from staff who are responsible for the
implementation of the activities. The lack of collaboration has led to
staff burnout, especially among staff whose job it is to make the
day-to-day activities of MCN flourish and thrive."  While racial
justice has been a part of the MCN's strategic plan for years, some
staff felt MCN leadership gave only "tepid gestures" towards racial
equity strategic plans when there needed to be a "bold and unapologetic
effort".  In addition, while MCN provides its employees some benefits,
workers believed more effort needed to be placed on improving working
conditions, particularly focused on racial justice in the nonprofit
sector and through their strategic efforts as a state association. 
They want to implement "a model of collective decision making" which
values and respects workers' time and labor, justly compensates them,
invests in their professional growth, and increases worker satisfaction
with MCN as a workplace.  As one staff member said, "As a union what we
fight for isn't just a suggestion. It's a negotiation; it is real
power. It's the best way to center the most marginal in our workplace
and to give them actual power to give management and directors guidance
and direction."

On 12 March MCN issued the following statement on their website: "On
Tuesday, March 9, 2021, the Minnesota Council of Nonprofits (MCN)
received a request from several MCN employees to be recognized as a
union affiliated with the Minnesota Newspaper & Communications Guild. 
As a state association with over 2,200 nonprofit member organizations
from across Minnesota and the region, a democratically elected and
representative board of directors, and a sincere belief in the
importance and vitality of the nonprofit sector and the people and
communities that make it go, MCN is energized to explore the proposal in
a way that is respectful, inclusive, and holistically considerate. 
Before responding to the proposal, MCN is committed to hearing from all
interested parties, including staff members and the organization's
board of directors, and thoughtfully considering full implications for
the organization and its far-reaching work. MCN's board of directors
will hear directly from staff representatives at its board meeting on
Tuesday, March 16, 2021. We look forward to the opportunity to share
further information following the meeting."

Nonprofit employees seeking to unionize is not a new phenomenon.
As the sector has exploded, the movement has gained steam. From 1995 to
the present, increasingly nonprofit employees are joining a union to
address nonprofit sector workplace issues that management has been
unwilling or unable to resolve or acknowledge...VISIT HERE FOR ENTIRE
ARTICLE



__

Inside Charity is pleased to report that over 400,000 executives, staff,
board members, ministry leaders and volunteers rely on Inside Charity
for their nonprofit news. We're grateful to everyone who has
participated and want to encourage as many people as possible to
continue to share their experiences. Your written comments and
contributions have helped your colleagues navigate this process.

We are all in the 

Re: [PATCH] JWT payloads break b64dec convertor

2021-04-02 Thread Tim Düsterhus

Moemen,

On 4/2/21 1:38 AM, Moemen MHEDHBI wrote:

Subject: [PATCH 1/2] MINOR: sample: add ub64dec and ubase64 converters

ub64dec and ubase64 are the base64url equivalent of b64dec and base64
converters. base64url encoding is the "URL and Filename Safe Alphabet"
variant of base64 encoding. It is also used in the JWT (JSON Web Token)
standard.
---
 doc/configuration.txt| 11 
 include/haproxy/base64.h |  2 ++
 src/base64.c | 54 +++-
 src/sample.c | 38 
 4 files changed, 104 insertions(+), 1 deletion(-)


Consider adding a reg-test.
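
A minimal sketch of such a reg-test (untested, file layout and details to be
adapted; "aGFwcm94eQ" is the unpadded base64url form of "haproxy"):

varnishtest "ub64dec converter"
feature ignore_unknown_macro

haproxy h1 -conf {
    defaults
        mode http
        timeout connect 1s
        timeout client  1s
        timeout server  1s

    frontend fe
        bind "fd@${fe}"
        http-request return status 200 content-type text/plain lf-string %[req.hdr(x-payload),ub64dec]
} -start

client c1 -connect ${h1_fe_sock} {
    txreq -hdr "x-payload: aGFwcm94eQ"
    rxresp
    expect resp.status == 200
    expect resp.body == "haproxy"
} -run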


diff --git a/doc/configuration.txt b/doc/configuration.txt
index 7048fb63e..10098adef 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -15494,6 +15494,17 @@ base64
   transfer binary content in a way that can be reliably transferred (e.g.
   an SSL ID can be copied in a header).
 
+ub64dec

+  This converter is the base64url variant of b64dec converter. base64url
+   encoding is the "URL and Filename Safe Alphabet" variant of base64 encoding.
+   It is also the encoding used in JWT (JSON Web Token) standard.
+
+   Example:
+ http-request set-var(txn.token_payload) req.hdr(Authorization),word(2,.),ub64dec
+
+ubase64
+  This converter is the base64url variant of base64 converter.


This is not alphabetically sorted.


 bool
   Returns a boolean TRUE if the input value of type signed integer is
   non-null, otherwise returns FALSE. Used in conjunction with and(), it can be
diff --git a/src/base64.c b/src/base64.c
index 53e4d65b2..38902523f 100644
--- a/src/base64.c
+++ b/src/base64.c
@@ -138,6 +138,58 @@ int base64dec(const char *in, size_t ilen, char *out, size_t olen) {
return convlen;
 }
 
+/* url variant of a2base64 */

+int a2base64url(char *in, int ilen, char *out, int olen){


I know that the existing functions also use 'int', but I'm pretty 
annoyed by not using 'size_t' for buffer sizes :-)



[...]
+int base64urldec(const char *in, size_t ilen, char *out, size_t olen) {
+   char conv[ilen+2];


This looks like a remotely triggerable stack overflow.
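
Just to sketch the idea (not taken from the patch): decoding through a
bounded temporary buffer, and refusing inputs that don't fit, avoids sizing
a stack array from attacker-controlled data:

#include <stddef.h>

/* Copy a base64url input into a caller-provided, bounded buffer, translating
 * '-'/'_' back to '+'/'/' and re-adding the '=' padding, so that the regular
 * base64 decoder can then run on <buf>. Returns the padded length, or -1 if
 * the input is invalid or does not fit.
 */
static int b64url_normalize(const char *in, size_t ilen, char *buf, size_t bufsize)
{
	size_t i, outlen = ilen;

	if (ilen % 4 == 1)
		return -1;              /* never a valid base64url length */
	if (ilen % 4)
		outlen += 4 - ilen % 4; /* room for the '=' padding */
	if (outlen + 1 > bufsize)
		return -1;              /* refuse instead of overflowing */

	for (i = 0; i < ilen; i++)
		buf[i] = (in[i] == '-') ? '+' : ((in[i] == '_') ? '/' : in[i]);
	while (i < outlen)
		buf[i++] = '=';
	buf[i] = '\0';
	return (int)outlen;
}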


[...]
 
 /* Converts the lower 30 bits of an integer to a 5-char base64 string. The

  * caller is responsible for ensuring that the output buffer can accept 6 bytes


Best regards
Tim Düsterhus



Re: [2.2.9] 100% CPU usage

2021-04-02 Thread Maciej Zdeb
Hi Christopher,

Yes I know, my issues are always pretty weird. ;) Of course it's not
reproducible. :(

I'll try to collect more data and return to you. I will start a new thread
to not mix those two cases.

Kind regards,

Fri, 2 Apr 2021 at 10:13 Christopher Faulet wrote:

> On 31/03/2021 at 13:28, Maciej Zdeb wrote:
> > Hi,
> >
> > Well it's a bit better situation than earlier because only one thread is
> > looping forever and the rest is working properly. I've tried to verify where
> > exactly the thread looped but doing "n" in gdb fixed the problem :( After
> > quitting gdb session all threads were idle. Before I started gdb it looped
> > about 3h not serving any traffic, because I've put it into maintenance as
> > soon as I observed abnormal cpu usage.
> >
>
> Hi Maciej,
>
> I'm puzzled. It seems to be a different issue than the first one. You
> reported an issue during reloads, on the old processes. There was a loop
> because of a deadlock, but the traces showed the watchdog was fired,
> probably because of the lua (this must be confirmed though).
>
> Here, it is a loop on a living process, right ? Reading the trace, it is
> for now a bit hard to figure out what happens. If it is reproducible, you
> may try to use "perf top" command, selecting the right thread ID with "-t"
> argument.
>
> In addition, if the loop always falls in the H2 multiplexer, it could be
> good to print the h2C structure to have more info on its internal state.
>
> And to be complete, the output of "show activity", "show fd" and
> "show sess all" may help. Because it still loops with no traffic, it should
> be small. "show threads" may be good, but HAProxy should be compiled with
> USE_THREAD_DUMP=1 to have an advanced dump.
>
> Sorry to ask you so much work, it is pretty hard to track this kind of bug.
>
> Thanks !
> --
> Christopher Faulet
>


[PATCH] BUG/MINOR: opentracing: initialization after establishing daemon mode

2021-04-02 Thread Miroslav Zagorac

Hello,

This patch solves the problem reported in github issue #1204, where the 
OpenTracing filter cannot communicate with the selected tracer if 
HAProxy is run in daemon mode.


This patch should be backported to all branches where the OpenTracing 
filter is located.


--
Zaga

What can change the nature of a man?
From fe482c8adaa703e790b4d1995255b9754799b149 Mon Sep 17 00:00:00 2001
From: Miroslav Zagorac 
Date: Fri, 2 Apr 2021 04:25:47 +0200
Subject: [PATCH] BUG/MINOR: opentracing: initialization after establishing
 daemon mode

This patch solves the problem reported in github issue #1204, where the
OpenTracing filter cannot communicate with the selected tracer if HAProxy
is run in daemon mode.  The author of the reported issue uses Zipkin
tracer, while in this example Jaeger tracer is used (see gdb output below).

The problem is that the OpenTracing library is initialized before HAProxy
initializes the daemon mode.  Establishing this mode kills the OpenTracing
thread, after which the correct operation of the OpenTracing filter is no
longer possible.  Also, HAProxy crashes on deinitialization of the
OpenTracing library.

The initialization of the OpenTracing library has been moved from the
flt_ot_init() function (which is started before switching HAProxy to
daemon mode) to the flt_ot_init_per_thread() function (which is run after
switching HAProxy to daemon mode).

Gdb output of crashed HAProxy process:

  [Thread debugging using libthread_db enabled]
  Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
  Core was generated by `../../../haproxy -f sa/haproxy.cfg'.
  Program terminated with signal SIGSEGV, Segmentation fault.
  #0  0x7f8131fd5629 in pthread_join (threadid=140192831239936, thread_return=0x0) at pthread_join.c:45
  45  pthread_join.c: No such file or directory.
  (gdb) where
  #0  0x7f8131fd5629 in pthread_join (threadid=140192831239936, thread_return=0x0) at pthread_join.c:45
  #1  0x7f812f15abc7 in std::thread::join() ()
 from /tmp/haproxy-os-master/contrib/opentracing/test/libjaeger_opentracing_plugin-0.5.0.so
  #2  0x7f812f0fb6f7 in jaegertracing::reporters::RemoteReporter::close() ()
from /tmp/haproxy-os-master/contrib/opentracing/test/libjaeger_opentracing_plugin-0.5.0.so
  #3  0x7f812f0b7055 in jaegertracing::reporters::CompositeReporter::close() ()
   from /tmp/haproxy-os-master/contrib/opentracing/test/libjaeger_opentracing_plugin-0.5.0.so
  #4  0x7f812f0b9136 in jaegertracing::Tracer::Close() ()
  from /tmp/haproxy-os-master/contrib/opentracing/test/libjaeger_opentracing_plugin-0.5.0.so
  #5  0x7f81309def32 in ot_tracer_close (tracer=0x55fb48057390) at ../../src/tracer.cpp:91
  #6  0x55fb41785705 in ot_close (tracer=0x55fb48061168) at contrib/opentracing/src/opentracing.c:208
  #7  0x55fb4177fc64 in flt_ot_deinit (p=, fconf=) at contrib/opentracing/src/filter.c:215
  #8  0x55fb418bc038 in flt_deinit (proxy=proxy@entry=0x55fb4805ce50) at src/filters.c:360
  #9  0x55fb41893ed1 in free_proxy (p=0x55fb4805ce50) at src/proxy.c:315
  #10 0x55fb4109 in deinit () at src/haproxy.c:2217
  #11 0x55fb41889078 in deinit_and_exit (status=0) at src/haproxy.c:2343
  #12 0x55fb4173d809 in main (argc=, argv=) at src/haproxy.c:3230

This patch should be backported to all branches where the OpenTracing
filter is located.
---
 contrib/opentracing/src/filter.c | 48 +++-
 1 file changed, 28 insertions(+), 20 deletions(-)

diff --git a/contrib/opentracing/src/filter.c b/contrib/opentracing/src/filter.c
index 6699d4613..d5fc2c72f 100644
--- a/contrib/opentracing/src/filter.c
+++ b/contrib/opentracing/src/filter.c
@@ -156,29 +156,15 @@ static void flt_ot_return_void(const struct filter *f, char **err)
 static int flt_ot_init(struct proxy *p, struct flt_conf *fconf)
 {
 	struct flt_ot_conf *conf = FLT_OT_DEREF(fconf, conf, NULL);
-	char   *err = NULL;
-	int retval = FLT_OT_RET_ERROR;
+	int retval = FLT_OT_RET_OK;
 
 	FLT_OT_FUNC("%p, %p", p, fconf);
 
 	if (conf == NULL)
-		FLT_OT_RETURN(retval);
+		FLT_OT_RETURN(FLT_OT_RET_ERROR);
 
 	flt_ot_cli_init();
 
-	/*
-	 * Initialize the OpenTracing library.
-	 * Enable HTX streams filtering.
-	 */
-	retval = ot_init(&(conf->tracer->tracer), conf->tracer->config, conf->tracer->plugin, );
-	if (retval != FLT_OT_RET_ERROR)
-		fconf->flags |= FLT_CFG_FL_HTX;
-	else if (err != NULL) {
-		FLT_OT_ALERT("%s", err);
-
-		FLT_OT_ERR_FREE(err);
-	}
-
 	FLT_OT_RETURN(retval);
 }
 
@@ -426,8 +412,6 @@ static int flt_ot_check(struct proxy *p, struct flt_conf *fconf)
 }
 
 
-#ifdef DEBUG_OT
-
 /***
  * NAME
  *   flt_ot_init_per_thread -
@@ -446,14 +430,38 @@ static int flt_ot_check(struct proxy *p, struct flt_conf *fconf)
  */
 static int flt_ot_init_per_thread(struct proxy *p, struct flt_conf *fconf)
 {
-	int retval = FLT_OT_RET_OK;
+	struct flt_ot_conf *conf = 

Re: [PATCH] JWT payloads break b64dec convertor

2021-04-02 Thread Willy Tarreau
Hi Moemen,

On Fri, Apr 02, 2021 at 01:38:59AM +0200, Moemen MHEDHBI wrote:
> I have come across the same use-case as Jonathan so I gave it a try and
> implemented the converters for the base64url variant.
> 
> - Regarding the converters name, I have just prefixed "u" and used
> ubase64/ub64dec. Let me know if the names are not appropriate or if you
> rather prefer to add an argument to existing converters.

Thanks for this. I know it's not your fault but the initial names
"base64" and "b64dec" are quite messy because the former doesn't
indicate the direction. Thus I'd rather not replicate this error,
and at least make sure that "enc" or "dec" is explicit in both
keywords. Maybe "ub64enc" and "ub64dec" or ubase64enc/ubase64dec
are better (though shorter forms are often preferred for converters).

> - RFC1421 mention in base64.c is deprecated so I replaced it with
> RFC4648 to which base64/b64dec converters seem to still apply.

OK! Please do mention this in your commit message as well, after
all you made the effort of justifying it here, but the commit
message is the place for justifying a change :-)

> - not sure if samples list in sample.c should be formatted/aligned after
> the converters added in this patch. They seemed to be already not
> completely aligned. Anyway I have done the aligning on a separate patch
> so you can squash it or drop it at your convenience.

It's often the problem with such lists. Nobody wants to realign
everything when they're the first to break alignment, and at some point
too much is too much and some cleanup is needed. Thanks for doing this
one!

> Testing Example:
>  haproxy.cfg:
>http-request return content-type text/plain lf-string %[req.hdr(Authorization),word(2,.),ub64dec]
> 
>  client:
>TOKEN = eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VyIjoiZm9vIiwia2V5IjoiY2hhZTZBaFhhaTZlIn0.5VsVj7mdxVvo1wP5c0dVHnr-S_khnIdFkThqvwukmdg
>$ curl -H "Authorization: Bearer ${TOKEN}" 127.0.0.1:8080
>{"user":"foo","key":"chae6AhXai6e"}

You can put that in the commit message; it's extremely useful when
bisecting or even for those who would like to backport it.

I'm having some comments below about style issues however.

> --- a/src/base64.c
> +++ b/src/base64.c
> @@ -138,6 +138,58 @@ int base64dec(const char *in, size_t ilen, char *out, size_t olen) {
>   return convlen;
>  }
>  
> +/* url variant of a2base64 */
> +int a2base64url(char *in, int ilen, char *out, int olen){
   ^
Opening brace on the next line please.

> + int convlen;

^ empty line after variables declarations

> + convlen = a2base64(in,ilen,out,olen);
 ^^   ^
Spaces after commas in function arguments

> + while (out[convlen-1]=='='){
  ^  ^ ^
spaces around operators

> + convlen--;
> + out[convlen]='\0';
^^
> + }
> + for(int i=0;i +/* url variant of base64dec */
> +int base64urldec(const char *in, size_t ilen, char *out, size_t olen) {
(...)
> + switch (ilen%4){
   ^
space before opening brace

> + case 0:
> + break;
> + case 2:
> + conv[ilen]= '=';
> + conv[ilen+1]= '=';
> + conv[ilen+2]= '\0';
> + ilen +=2;
> + break;

Please keep your indentation consistent.

> + case 3:
> + conv[ilen]= '=';
> + conv[ilen+1]= '\0';
> + ilen +=1;
> + break;
> + default:
> + return -1;
> + }
> +  return base64dec(conv,ilen,out,olen);
   ^^
indentation issue here as well
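
For what it's worth, once the remarks above are applied, the encoder could
look like this (just a sketch; the alphabet-translation loop is assumed,
since that part of the quote was cut):

/* url variant of a2base64 */
int a2base64url(char *in, int ilen, char *out, int olen)
{
	int convlen, i;

	convlen = a2base64(in, ilen, out, olen);

	/* drop the '=' padding */
	while (convlen > 0 && out[convlen - 1] == '=')
		out[--convlen] = '\0';

	/* switch to the URL and filename safe alphabet */
	for (i = 0; i < convlen; i++) {
		if (out[i] == '+')
			out[i] = '-';
		else if (out[i] == '/')
			out[i] = '_';
	}
	return convlen;
}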

Thanks!
Willy



Re: 2.2.12 and rsa/ecdsa cert regression (crash on startup) ?

2021-04-02 Thread wlallem...@haproxy.com
On Fri, Apr 02, 2021 at 08:22:16AM +, Jarno Huuskonen wrote:
> Thanks William, with 2.2.12 + patch haproxy starts and serves both rsa/ecdsa
> certs.
> 
> I'm attaching a regtest patch that attempts to check that haproxy starts
> with multi-bundle cert and serves both rsa/ecdsa certs.
> (the test itself is not well tested so handle with care :)
> (for example I'm not sure if ciphers ECDHE-RSA-AES128-GCM-SHA256 / ECDHE-
> ECDSA-AES256-GCM-SHA384 are needed/useful and work with boring/libressl).
> 
> -Jarno
>

Thanks, that's a good idea, but I don't think I will integrate it as it
is. I'll duplicate the set_ssl_cert.vtc one to have the full CLI part
tested with it as well, since we don't have such a test yet.

I'll probably emit a new 2.2 this evening with the fix and the reg-test.

Thanks!


-- 
William Lallemand



Re: 2.2.12 and rsa/ecdsa cert regression (crash on startup) ?

2021-04-02 Thread Jarno Huuskonen
Hello,

On Thu, 2021-04-01 at 16:03 +0200, William Lallemand wrote:
> On Thu, Apr 01, 2021 at 02:26:07PM +0200, William Lallemand wrote:
> > On Thu, Apr 01, 2021 at 10:19:31AM +, Jarno Huuskonen wrote:
> > > Hello,
> > > 
> > > I'm seeing a regression with 2.2.12 and using rsa and ecdsa certs on
> > > bind.
> > > (cert1.pem.ecdsa
> > > cert1.pem.ecdsa.ocsp
> > > cert1.pem.ocsp
> > > cert1.pem.rsa
> > > cert1.pem.rsa.ocsp
> > > )
> > > 
> > 
> > Thanks for the report, I can reproduce the problem, I'm investigating.
> > 
> 
> Could you try the attached patch?

Thanks William, with 2.2.12 + patch haproxy starts and serves both rsa/ecdsa
certs.

I'm attaching a regtest patch that attempts to check that haproxy starts
with multi-bundle cert and serves both rsa/ecdsa certs.
(the test itself is not well tested so handle with care :)
(for example I'm not sure if ciphers ECDHE-RSA-AES128-GCM-SHA256 / ECDHE-
ECDSA-AES256-GCM-SHA384 are needed/useful and work with boring/libressl).

-Jarno

-- 
Jarno Huuskonen
From b0aec4e620404ea38dae0fe50046ab0f2cb48398 Mon Sep 17 00:00:00 2001
From: Jarno Huuskonen 
Date: Fri, 2 Apr 2021 09:39:39 +0300
Subject: [PATCH] REGTESTS: ssl: Minimal multi-bundle certificates bind check.

This adds a minimal test to check that multi-bundle (rsa/ecdsa) bind
works (for BUG/MEDIUM: ssl: ckch_inst->ctx not assigned with
 multi-bundle certificates) and both rsa/ecdsa certs are served.
---
 reg-tests/ssl/rsa_and_ecdsa_bind.pem.ecdsa |  1 +
 reg-tests/ssl/rsa_and_ecdsa_bind.pem.rsa   |  1 +
 reg-tests/ssl/set_ssl_cert.vtc | 31 ++
 3 files changed, 33 insertions(+)
 create mode 12 reg-tests/ssl/rsa_and_ecdsa_bind.pem.ecdsa
 create mode 12 reg-tests/ssl/rsa_and_ecdsa_bind.pem.rsa

diff --git a/reg-tests/ssl/rsa_and_ecdsa_bind.pem.ecdsa b/reg-tests/ssl/rsa_and_ecdsa_bind.pem.ecdsa
new file mode 12
index 0..16276ab88
--- /dev/null
+++ b/reg-tests/ssl/rsa_and_ecdsa_bind.pem.ecdsa
@@ -0,0 +1 @@
+ecdsa.pem
\ No newline at end of file
diff --git a/reg-tests/ssl/rsa_and_ecdsa_bind.pem.rsa b/reg-tests/ssl/rsa_and_ecdsa_bind.pem.rsa
new file mode 12
index 0..1b7cb2c3c
--- /dev/null
+++ b/reg-tests/ssl/rsa_and_ecdsa_bind.pem.rsa
@@ -0,0 +1 @@
+common.pem
\ No newline at end of file
diff --git a/reg-tests/ssl/set_ssl_cert.vtc b/reg-tests/ssl/set_ssl_cert.vtc
index a606b477d..022e8d6c3 100644
--- a/reg-tests/ssl/set_ssl_cert.vtc
+++ b/reg-tests/ssl/set_ssl_cert.vtc
@@ -16,6 +16,9 @@
 # any SNI. The test consists in checking that the used certificate is the right one after
 # updating it via a "set ssl cert" call.
 #
+# listen other-rsaecdsa-ssl / other-rsaecdsa checks that haproxy can bind and serve multi-bundle
+# (rsa/ecdsa) certificate.
+#
 # If this test does not work anymore:
 # - Check that you have socat
 
@@ -74,6 +77,21 @@ haproxy h1 -conf {
 bind "${tmpdir}/other-ssl.sock" ssl crt-list ${testdir}/set_default_cert.crt-list
 server s1 ${s1_addr}:${s1_port}
 
+# check that we can bind with: rsa_and_ecdsa_bind.pem.rsa / rsa_and_ecdsa_bind.pem.ecdsa
+listen other-rsaecdsa-ssl
+bind "${tmpdir}/other-rsaecdsa-ssl.sock" ssl crt ${testdir}/rsa_and_ecdsa_bind.pem
+http-request deny deny_status 200
+server s1 ${s1_addr}:${s1_port}
+
+# use other-rsa_ecdsa-ssl to check both rsa and ecdsa certs are returned
+listen other-rsaecdsa
+bind "fd@${otherrsaecdsa}"
+http-response set-header X-SSL-Server-SHA1 %[ssl_s_sha1,hex]
+use-server s1rsa if { path_end -i .rsa }
+use-server s1ecdsa if { path_end -i .ecdsa }
+server s1rsa "${tmpdir}/other-rsaecdsa-ssl.sock" ssl verify none force-tlsv12 sni str(www.test1.com) ciphers ECDHE-RSA-AES128-GCM-SHA256
+server s1ecdsa "${tmpdir}/other-rsaecdsa-ssl.sock" ssl verify none force-tlsv12 sni str(localhost) ciphers ECDHE-ECDSA-AES256-GCM-SHA384
+
 } -start
 
 
@@ -202,3 +220,16 @@ client c1 -connect ${h1_clearlst_sock} {
 expect resp.http.X-SSL-Server-SHA1 == "9DC18799428875976DDE706E9956035EE88A4CB3"
 expect resp.status == 200
 } -run
+
+# Check that other-rsaecdsa serves both rsa and ecdsa certificate
+client c1 -connect ${h1_otherrsaecdsa_sock} {
+txreq -req GET -url /dummy.rsa
+rxresp
+expect resp.http.X-SSL-Server-SHA1 == "2195C9F0FD58470313013FC27C1B9CF9864BD1C6"
+expect resp.status == 200
+
+txreq -req GET -url /dummy.ecdsa
+rxresp
+expect resp.http.X-SSL-Server-SHA1 == "A490D069DBAFBEE66DE434BEC34030ADE8BCCBF1"
+expect resp.status == 200
+} -run
-- 
2.26.3



Re: [2.2.9] 100% CPU usage

2021-04-02 Thread Christopher Faulet

On 31/03/2021 at 13:28, Maciej Zdeb wrote:

Hi,

Well it's a bit better situation than earlier because only one thread is looping 
forever and the rest is working properly. I've tried to verify where exactly the 
thread looped but doing "n" in gdb fixed the problem :( After quitting gdb 
session all threads were idle. Before I started gdb it looped about 3h not 
serving any traffic, because I've put it into maintenance as soon as I observed 
abnormal cpu usage.




Hi Maciej,

I'm puzzled. It seems to be a different issue than the first one. You reported 
an issue during reloads, on the old processes. There was a loop because of a 
deadlock, but the traces showed the watchdog was fired, probably because of the 
lua (this must be confirmed though).


Here, it is a loop on a living process, right ? Reading the trace, it is for now 
a bit hard to figure out what happens. If it is reproducible, you may try to use 
"perf top" command, selecting the right thread ID with "-t" argument.


In addition, if the loop always falls in the H2 multiplexer, it could be good to 
print the h2C structure to have more info on its internal state.


And to be complete, the output of "show activity", "show fd" and "show sess all" 
may help. Because it still loops with no traffic, it should be small. "show 
threads" may be good, but HAProxy should be compiled with USE_THREAD_DUMP=1 to 
have an advanced dump.
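
For reference, something along these lines (build target and socket path are
only examples):

    # rebuild with advanced thread dumps
    make -j$(nproc) TARGET=linux-glibc USE_THREAD_DUMP=1

    # collect the state over the stats socket
    for cmd in "show activity" "show fd" "show sess all" "show threads"; do
        echo "$cmd" | socat stdio /var/run/haproxy.sock
    done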


Sorry to ask you so much work, it is pretty hard to track this kind of bug.

Thanks !
--
Christopher Faulet