Re: agent-check requires newline in response?

2018-12-15 Thread Willy Tarreau
On Sat, Dec 15, 2018 at 11:14:22PM +, Nick Ramirez wrote:
> Yep, your suggestion reads better to me.

Great, thanks. I've moved it into the previous sentence as it didn't
integrate well within the existing one, which was not finished :

-  to the port set by the "agent-port" parameter and reading an ASCII string.
-  The string is made of a series of words delimited by spaces, tabs or commas
-  in any order, optionally terminated by '\r' and/or '\n', each consisting of :
+  to the port set by the "agent-port" parameter and reading an ASCII string
+  terminated by the first '\r' or '\n' met. The string is made of a series of
+  words delimited by spaces, tabs or commas in any order, each consisting of :

Willy



[PATCH 3/3] MEDIUM: lua: expose safe fetch/conv via val_args_flags

2018-12-15 Thread Robin H. Johnson
Any val_args function with side-effects is unsafe for use in the Lua
context, and previously this stopped such fetches from having object
methods created, except where explicitly whitelisted (val_payload_lv,
val_hdr).

Using the new val_args_flags field, we can track which sample
fetches/converters are safe, without having to explicitly whitelist them
in the Lua codebase.

Before any val_args function is used, the val_args_flags field is
checked as an additional safety measure.

The following fetches/converters are now available from Lua:
- 51d.all (51d_all)
- 51d.single (51d_all)
- distcc_body
- distcc_param
- bool
- meth
- json
- field
- word
- regsub

Initial-Discovery: Yue Zhu 
Signed-off-by: Robin H. Johnson 
Signed-off-by: Robin H. Johnson 
---
 src/hlua.c | 28 +++-
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/src/hlua.c b/src/hlua.c
index a9d126b53..6ddfed4b3 100644
--- a/src/hlua.c
+++ b/src/hlua.c
@@ -3299,9 +3299,14 @@ __LJMP static int hlua_run_sample_fetch(lua_State *L)
MAY_LJMP(hlua_lua2arg_check(L, 2, args, f->arg_mask, hsmp->p));
 
/* Run the special args checker. */
-   if (f->val_args && !f->val_args(args, NULL)) {
-   lua_pushfstring(L, "error in arguments");
-   WILL_LJMP(lua_error(L));
+   if (f->val_args) {
+   if (unlikely(f->val_args_flags != SMP_VAL_ARGS_F_SAFE)) {
+   lua_pushfstring(L, "argument validation is unsafe");
+   WILL_LJMP(lua_error(L));
+   } else if(!f->val_args(args, NULL)) {
+   lua_pushfstring(L, "error in arguments");
+   WILL_LJMP(lua_error(L));
+   }
}
 
/* Initialise the sample. */
@@ -3405,9 +3410,14 @@ __LJMP static int hlua_run_sample_conv(lua_State *L)
MAY_LJMP(hlua_lua2arg_check(L, 3, args, conv->arg_mask, hsmp->p));
 
/* Run the special args checker. */
-   if (conv->val_args && !conv->val_args(args, conv, "", 0, NULL)) {
-   hlua_pusherror(L, "error in arguments");
-   WILL_LJMP(lua_error(L));
+   if (conv->val_args) {
+   if (unlikely(conv->val_args_flags != SMP_VAL_ARGS_F_SAFE)) {
+   lua_pushfstring(L, "argument validation is unsafe");
+   WILL_LJMP(lua_error(L));
+   } else if (!conv->val_args(args, conv, "", 0, NULL)) {
+   hlua_pusherror(L, "error in arguments");
+   WILL_LJMP(lua_error(L));
+   }
}
 
/* Initialise the sample. */
@@ -7668,8 +7678,7 @@ void hlua_init(void)
 * not safe during the runtime.
 */
if ((sf->val_args != NULL) &&
-   (sf->val_args != val_payload_lv) &&
-(sf->val_args != val_hdr))
+   (sf->val_args_flags != SMP_VAL_ARGS_F_SAFE))
continue;
 
/* gL.Tua doesn't support '.' and '-' in the function names, replace it
@@ -7714,7 +7723,8 @@ void hlua_init(void)
/* Dont register the keywork if the arguments check function are
 * not safe during the runtime.
 */
-   if (sc->val_args != NULL)
+   if ((sc->val_args != NULL) &&
+   (sc->val_args_flags != SMP_VAL_ARGS_F_SAFE))
continue;
 
/* gL.Tua doesn't support '.' and '-' in the function names, replace it
-- 
2.18.0




[PATCH 1/3] MINOR: samples: Prep for val_args_flags

2018-12-15 Thread Robin H. Johnson
In preparation for val_args_flags, introduce a macro that is used to
populate the val_args field in sample_conv_kw_list and
sample_fetch_kw_list.

This commit produces code that has zero changes after preprocessor
expansion.

All sample registration can be found with:
$ git grep -n -e 'sample.*kw_list.*ILH' -A5000 \
  | sed -r -n '/struct sample_(conv|fetch)_kw_list\>/,/};/p'

Initial-Discovery: Yue Zhu 
Signed-off-by: Robin H. Johnson 
Signed-off-by: Robin H. Johnson 
---
 include/types/sample.h |   6 +
 src/51d.c  |   4 +-
 src/backend.c  |  28 ++---
 src/connection.c   |   4 +-
 src/da.c   |   4 +-
 src/flt_http_comp.c|   4 +-
 src/frontend.c |  10 +-
 src/listener.c |   4 +-
 src/map.c  |  62 +--
 src/payload.c  |  54 -
 src/proto_http.c   | 164 +--
 src/proto_tcp.c|  28 ++---
 src/sample.c   | 110 +--
 src/ssl_sock.c | 102 -
 src/stick_table.c  | 244 -
 src/vars.c |   7 +-
 src/wurfl.c|   4 +-
 17 files changed, 422 insertions(+), 417 deletions(-)

diff --git a/include/types/sample.h b/include/types/sample.h
index cf0528edc..a0a440512 100644
--- a/include/types/sample.h
+++ b/include/types/sample.h
@@ -342,4 +342,10 @@ struct sample_conv_kw_list {
 typedef int (*sample_cast_fct)(struct sample *smp);
 extern sample_cast_fct sample_casts[SMP_TYPES][SMP_TYPES];
 
+/* Macros for easier to use val_args list usage
+ * All sample_conv_kw_list/sample_fetch_kw_list should use the SMP_VAL_ARGS macro to populate the
+ * val_args field.
+ */
+#define SMP_VAL_ARGS(func)func
+
 #endif /* _TYPES_SAMPLE_H */
diff --git a/src/51d.c b/src/51d.c
index a36333ef5..f415fac26 100644
--- a/src/51d.c
+++ b/src/51d.c
@@ -668,13 +668,13 @@ static struct cfg_kw_list _51dcfg_kws = {{ }, {
 
 /* Note: must not be declared  as its list will be overwritten */
 static struct sample_fetch_kw_list sample_fetch_keywords = {ILH, {
-   { "51d.all", _51d_fetch, ARG5(1,STR,STR,STR,STR,STR), _51d_fetch_check, SMP_T_STR, SMP_USE_HRQHV },
+   { "51d.all", _51d_fetch, ARG5(1,STR,STR,STR,STR,STR), SMP_VAL_ARGS(_51d_fetch_check), SMP_T_STR, SMP_USE_HRQHV },
{ NULL, NULL, 0, 0, 0 },
 }};
 
 /* Note: must not be declared  as its list will be overwritten */
 static struct sample_conv_kw_list conv_kws = {ILH, {
-   { "51d.single", _51d_conv, ARG5(1,STR,STR,STR,STR,STR), _51d_conv_check, SMP_T_STR, SMP_T_STR },
+   { "51d.single", _51d_conv, ARG5(1,STR,STR,STR,STR,STR), SMP_VAL_ARGS(_51d_conv_check), SMP_T_STR, SMP_T_STR },
{ NULL, NULL, 0, 0, 0 },
 }};
 
diff --git a/src/backend.c b/src/backend.c
index b3fd6c677..a2bc13d68 100644
--- a/src/backend.c
+++ b/src/backend.c
@@ -1877,25 +1877,25 @@ static int sample_conv_nbsrv(const struct arg *args, struct sample *smp, void *p
  * Please take care of keeping this list alphabetically sorted.
  */
 static struct sample_fetch_kw_list smp_kws = {ILH, {
-   { "avg_queue",     smp_fetch_avg_queue_size, ARG1(1,BE),  NULL, SMP_T_SINT, SMP_USE_INTRN, },
-   { "be_conn",       smp_fetch_be_conn,        ARG1(1,BE),  NULL, SMP_T_SINT, SMP_USE_INTRN, },
-   { "be_id",         smp_fetch_be_id,          0,           NULL, SMP_T_SINT, SMP_USE_BKEND, },
-   { "be_name",       smp_fetch_be_name,        0,           NULL, SMP_T_STR,  SMP_USE_BKEND, },
-   { "be_sess_rate",  smp_fetch_be_sess_rate,   ARG1(1,BE),  NULL, SMP_T_SINT, SMP_USE_INTRN, },
-   { "connslots",     smp_fetch_connslots,      ARG1(1,BE),  NULL, SMP_T_SINT, SMP_USE_INTRN, },
-   { "nbsrv",         smp_fetch_nbsrv,          ARG1(1,BE),  NULL, SMP_T_SINT, SMP_USE_INTRN, },
-   { "queue",         smp_fetch_queue_size,     ARG1(1,BE),  NULL, SMP_T_SINT, SMP_USE_INTRN, },
-   { "srv_conn",      smp_fetch_srv_conn,       ARG1(1,SRV), NULL, SMP_T_SINT, SMP_USE_INTRN, },
-   { "srv_id",        smp_fetch_srv_id,         0,           NULL, SMP_T_SINT, SMP_USE_SERVR, },
-   { "srv_is_up",     smp_fetch_srv_is_up,      ARG1(1,SRV), NULL, SMP_T_BOOL, SMP_USE_INTRN, },
-   { "srv_queue",     smp_fetch_srv_queue,      ARG1(1,SRV), NULL, SMP_T_SINT, SMP_USE_INTRN, },
-   { "srv_sess_rate", smp_fetch_srv_sess_rate,  ARG1(1,SRV), NULL, SMP_T_SINT, SMP_USE_INTRN, },
+   { "avg_queue",     smp_fetch_avg_queue_size, ARG1(1,BE),  SMP_VAL_ARGS(NULL), SMP_T_SINT, SMP_USE_INTRN, },
+   { "be_conn",       smp_fetch_be_conn,        ARG1(1,BE),  SMP_VAL_ARGS(NULL), SMP_T_SINT, SMP_USE_INTRN, },
+   { "be_id",         smp_fetch_be_id,          0,           SMP_VAL_ARGS(NULL), SMP_T_SINT, SMP_USE_BKEND, },
+   { "be_name",       smp_fetch_be_name,        0,           SMP_VAL_ARGS(NULL), SMP_T_STR,  SMP_USE_BKEND, },
+   { "be_sess_rate",  smp_fetch_be_sess_rate,   ARG1(1,BE),  SMP_VAL_ARGS(NULL), SMP_T_SINT, SMP

[PATCH 2/3] MEDIUM: samples: add val_args_flags

2018-12-15 Thread Robin H. Johnson
Some argument validation (val_args) functions have unsafe side-effects,
introduce val_args_flags field to track which functions have
side-effects.

Keep the actual values in preprocessor symbols so that they can be kept
with the functions, and consumed via macros in the static lists.

Initial-Discovery: Yue Zhu 
Signed-off-by: Robin H. Johnson 
Signed-off-by: Robin H. Johnson 
---
 include/types/sample.h | 22 --
 src/51d.c  |  2 ++
 src/hlua.c |  3 +++
 src/map.c  |  1 +
 src/payload.c  |  2 ++
 src/proto_http.c   |  1 +
 src/sample.c   |  7 +++
 src/vars.c |  2 ++
 8 files changed, 38 insertions(+), 2 deletions(-)

diff --git a/include/types/sample.h b/include/types/sample.h
index a0a440512..0f4e93c30 100644
--- a/include/types/sample.h
+++ b/include/types/sample.h
@@ -206,6 +206,18 @@ enum {
SMP_F_CONST  = 1 << 7, /* This sample use constant memory. May diplicate it before changes */
 };
 
+/* Flags used to describe val_args functions.
+ * In future, some of these conditions might be permitted, but for now they are
+ * only markers about types of unsafe functions.
+ */
+enum {
+   SMP_VAL_ARGS_F_SAFE = 0,/* no side-effects! */
+   SMP_VAL_ARGS_F_UNSAFE_VAR   = 1 << 0,   /* unsafe side-effect: creates variable */
+   SMP_VAL_ARGS_F_UNSAFE_ALLOC = 1 << 1,   /* unsafe side-effect: allocates */
+   SMP_VAL_ARGS_F_UNSAFE_FILE  = 1 << 2,   /* unsafe side-effect: file access */
+   SMP_VAL_ARGS_F_UNSAFE_ANY   = (1<<0)|(1<<1)|(1<<2), /* any unsafe side-effect */
+};
+
 /* needed below */
 struct session;
 struct stream;
@@ -291,6 +303,7 @@ struct sample_conv {
struct sample_conv *smp_conv,
const char *file, int line,
char **err_msg);  /* argument validation function */
+   unsigned int val_args_flags;  /* argument validation flags */
unsigned int in_type; /* expected input sample type */
unsigned int out_type;/* output sample type */
void *private;/* private values. only used by maps and Lua */
@@ -313,6 +326,7 @@ struct sample_fetch {
uint64_t arg_mask;/* arguments (ARG*()) */
int (*val_args)(struct arg *arg_p,
char **err_msg);  /* argument validation function */
+   unsigned int val_args_flags;  /* argument validation flags */
unsigned long out_type;   /* output sample type */
unsigned int use; /* fetch source (SMP_USE_*) */
unsigned int val; /* fetch validity (SMP_VAL_*) */
@@ -344,8 +358,12 @@ extern sample_cast_fct sample_casts[SMP_TYPES][SMP_TYPES];
 
 /* Macros for easier to use val_args list usage
 * All sample_conv_kw_list/sample_fetch_kw_list should use the SMP_VAL_ARGS macro to populate the
- * val_args field.
+ * val_args and val_args_flags fields together.
+ * They should define SMP_VAL_ARGS_SAFEVAL_funcname as the bitfield of SMP_VAL_ARGS_F_* enum.
+ * SMP_VAL_ARGS(NULL) is common for fetches with no arguments or no validation
  */
-#define SMP_VAL_ARGS(func)func
+#define SMP_VAL_ARGS(func)func, SMP_VAL_ARGS_SAFEVAL_##func
+#define SMP_VAL_ARGS_SAFEVAL(x)   SMP_VAL_ARGS_SAFEVAL_##x
+#define SMP_VAL_ARGS_SAFEVAL_NULL SMP_VAL_ARGS_F_SAFE
 
 #endif /* _TYPES_SAMPLE_H */
diff --git a/src/51d.c b/src/51d.c
index f415fac26..d6e130e82 100644
--- a/src/51d.c
+++ b/src/51d.c
@@ -133,6 +133,7 @@ static int _51d_cache_size(char **args, int section_type, struct proxy *curpx,
return 0;
 }
 
+#define SMP_VAL_ARGS_SAFEVAL__51d_fetch_check SMP_VAL_ARGS_F_SAFE
 static int _51d_fetch_check(struct arg *arg, char **err_msg)
 {
if (global_51degrees.data_file_path)
@@ -142,6 +143,7 @@ static int _51d_fetch_check(struct arg *arg, char **err_msg)
return 0;
 }
 
+#define SMP_VAL_ARGS_SAFEVAL__51d_conv_check SMP_VAL_ARGS_F_SAFE
 static int _51d_conv_check(struct arg *arg, struct sample_conv *conv,
const char *file, int line, char **err_msg)
 {
diff --git a/src/hlua.c b/src/hlua.c
index 085544dc6..a9d126b53 100644
--- a/src/hlua.c
+++ b/src/hlua.c
@@ -1471,6 +1471,7 @@ __LJMP static int hlua_map_new(struct lua_State *L)
conv.process = NULL; /* unused. */
conv.arg_mask = 0; /* unused. */
conv.val_args = NULL; /* unused. */
+   conv.val_args_flags = SMP_VAL_ARGS_SAFEVAL(NULL); /* unused */
conv.out_type = SMP_T_STR;
conv.private = (void *)(long)match;
switch (match) {
@@ -6011,6 +6012,7 @@ __LJMP static int hlua_register_converters(lua_State *L)
sck->kw[0].process = hlua_sample_conv_wrapper;
sck->kw[0].arg_mask = ARG12(0,STR,STR,STR,STR,STR,STR,STR,STR,STR,STR,STR,STR);
sck-

Re[2]: agent-check requires newline in response?

2018-12-15 Thread Nick Ramirez

Yep, your suggestion reads better to me.

-- Original Message --
From: "Willy Tarreau" 
To: "Nick Ramirez" 
Cc: haproxy@formilux.org
Sent: 12/15/2018 10:30:58 AM
Subject: Re: agent-check requires newline in response?


Hi Nick,

On Fri, Dec 14, 2018 at 05:43:49PM +, Nick Ramirez wrote:

 In the documentation for agent-check
 (https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#agent-check)
 it says that the string returned by the agent may be optionally terminated
 by '\r' or '\n'. However, in my tests, it was mandatory to end the response
 with this. Should the word "optionally" be removed from the docs?


At least one of them is required. Since it's TCP we need to find the end
of the message and to know it was not truncated. I agree that the wording
is confusing, it says :

  "The string is made of a series of words delimited by spaces, tabs or
   commas in any order, optionally terminated by '\r' and/or '\n'"

Which was meant to say that one of them was optional. Maybe we should
say this instead :

  "The string is made of a series of words delimited by spaces, tabs or
   commas in any order, terminated by the first '\r' or '\n' met"

(this basically is what the comments in the code say BTW). What do you
think ?

Thanks,
Willy





Re: HTTP/2 to backend server fails health check when 'option httpchk' set

2018-12-15 Thread Willy Tarreau
On Sat, Dec 15, 2018 at 11:11:58PM +, Nick Ramirez wrote:
> Thanks! That points me in the right direction. I found that to enable Layer
> 7 health checks in this case, I would open another port on the web server
> that does not advertise HTTP/2 support (ALPN HTTP/1.1) or does not use TLS
> (which also turns off HTTP/2 in the case of the Caddy web server), and then
> use the "port" parameter on the server line to point to that port.
> 
> backend webservers
>   balance roundrobin
>   option httpchk HEAD /
>   server server1 web:443 ssl  verify none  alpn h2,http/1.1  check port 80
> 
> Layer 7 health checks back up and running. :-)

Yes definitely, if you have clear-text there it's the way to do it.
Otherwise you can do it in H1 over TLS since your server is supposed
to serve H1 if no ALPN is negotiated, but it really depends on how both
sides agree on this. And I would not be surprised if checks run over
H1/TLS force a new handshake to happen for regular traffic since a
single session key can be stored per server.

Willy



Re[2]: HTTP/2 to backend server fails health check when 'option httpchk' set

2018-12-15 Thread Nick Ramirez
Thanks! That points me in the right direction. I found that to enable 
Layer 7 health checks in this case, I would open another port on the web 
server that does not advertise HTTP/2 support (ALPN HTTP/1.1) or does 
not use TLS (which also turns off HTTP/2 in the case of the Caddy web 
server), and then use the "port" parameter on the server line to point 
to that port.


backend webservers
  balance roundrobin
  option httpchk HEAD /
  server server1 web:443 ssl  verify none  alpn h2,http/1.1  check port 
80


Layer 7 health checks back up and running. :-)

-- Original Message --
From: "Willy Tarreau" 
To: "Nick Ramirez" 
Cc: haproxy@formilux.org
Sent: 12/15/2018 10:25:42 AM
Subject: Re: HTTP/2 to backend server fails health check when 'option 
httpchk' set



Hi Nick,

On Fri, Dec 14, 2018 at 10:43:04PM +, Nick Ramirez wrote:

 This may be something very simple that I am missing. I am using the latest
 HAProxy Docker image, which is using HAProxy 1.9-dev10 2018/12/08. It is
 using HTTP/2 to the backend web server (Caddy).

 It fails its health check if I uncomment the "option httpchk" line:

 backend webservers
   balance roundrobin
   #option httpchk
   server server1 web:443 check ssl verify none alpn h2


 With that line commented out, it works.

 The project is on Github:
 https://github.com/NickMRamirez/experiment-haproxy-http2

 Am I doing something wrong? It also works if I remove "option http-use-htx"
 and "alpn h2" and uncomment "option httpchk".


You're not really doing anything wrong, it's just the current limitation
of health checks that we've wanted to redesign for years and that deserves
a year's worth of work. Currently health checks are only made of a TCP string
sent over the socket and checked in return. Since 1.6 or so, we introduced
the ability to send this string over SSL (when "check-ssl" is set) but that's
basically the limit.

In fact, health checks are completely separate from the traffic. You can
see them as being part of the control plane while the traffic is the data
plane. You're not even forced to send them to the same IP, ports, nor
protocol as your traffic. They only pre-set the same target IP and port
for convenience, but that's all.

I've thought we could at least implement an H2 preface+settings check but
this would provide a very low value for quite some hassle to make it work
for the user, so I think it would only steer the efforts away from a real
redesign of better HTTP checks.

However we should at the very least document this limitation more clearly
for 1.9, as chances are that a number of people will want to try this :-/

Thanks,
Willy

Re: [PATCHv2] ssl: Fix compilation without deprecated OpenSSL 1.1 APIs

2018-12-15 Thread Rosen Penev
On Sat, Dec 15, 2018 at 7:57 AM Willy Tarreau  wrote:
>
> On Fri, Dec 14, 2018 at 08:47:02AM -0800, Rosen Penev wrote:
> > Removing deprecated APIs is an optional part of OpenWrt's build system to
> > save some space on embedded devices.
> >
> > Also added compatibility for LibreSSL.
>
> Looks good, now applied. Thanks for the explanation by the way.
All good. Hope it makes it in the next release so I can drop the patch locally.
>
> Willy



Setting a unique header per server in a backend

2018-12-15 Thread Sachin Shetty
Hi,

We have a tricky requirement to set a different header value in the request
based on which server in a backend is picked.

backend pod0
...
server server1 server1:6180  check
server server2 server2:6180  check
server server3 server3:6180  check

So when a request is forwarded to server1, I want to inject a header
"X-Some-Header: Server1" ("X-Some-Header: Server2" for server2, and so
on).

Is it possible to register some Lua action that would inject the header,
based on the selected server, before the request is forwarded?

Thanks
Sachin
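If the header value only needs to mirror the server's name, a plain
configuration approach may already cover this: as far as I know, HAProxy's
"http-send-name-header" directive injects the name of the selected server
into the named header once the server is picked. A sketch, untested:

```
backend pod0
    balance roundrobin
    # Inject the chosen server's name ("server1", "server2", ...) into
    # this header after load balancing, before the request is sent.
    http-send-name-header X-Some-Header
    server server1 server1:6180 check
    server server2 server2:6180 check
    server server3 server3:6180 check
```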


Re: regtests - with option http-use-htx

2018-12-15 Thread Willy Tarreau
On Sat, Dec 15, 2018 at 05:18:08PM +0100, PiBa-NL wrote:
> > >  h1    0.0 CLI recv|0x8026612c0: key=127.0.0.1 use=1 exp=0 gpt0=0 
> > > gpc0=0
> > > gpc0_rate(1)=0 conn_rate(1)=1 http_req_cnt=1 
> > > http_req_rate(1)=1
> > > http_err_cnt=0 http_err_rate(1)=0
> > >  h1    0.0 CLI recv|
> > >  h1    0.0 CLI expect failed ~ "table: http1, type: ip, size:1024,
> > > used:(0|1\n0x[0-9a-f]*: key=127\.0\.0\.1 use=0 exp=[0-9]* gpt0=0 gpc0=0
> > > gpc0_rate\(1\)=0 conn_rate\(1\)=1 http_req_cnt=1
> > > http_req_rate\(1\)=1 http_err_cnt=0 http_err_rate\(1\)=0)\n$"
> > Hmmm here I think we're really hitting corner cases depending on whether
> > the tracked counters are released before or after the logs are emitted.
> > In the case of htx, the logs are emitted slightly later than before,
> > which may induce this. Quite honestly I'd be inclined to set use=[01]
> > here in the regex to cover the race condition that exists in both cases,
> > as there isn't any single good value. Christopher, are you also OK with
> > this ? I can do the patch if you're OK.
> It's not about emitting logs, it's querying the stats admin socket,

Ah of course you're right, I recognize the output now!

> and even
> with an added 'delay 2' before doing so the results seem to show the same
> difference with/without htx. I don't think it's a matter of 'timing'?

I still don't know well. Without htx, the stream is created when the connection
is accepted and maintained till its end, so it may very well hold a reference
to the table entry even while the connection is idle waiting for a new request.
With HTX, it's like with H2, the stream only lives for the time it takes to
perform a request/response. I think that using barriers to get the output
after the request is sent but before the server starts to respond could
reveal the use=1 here. But apparently it's the barrier in the "haproxy-cli"
section of varnishtest that makes it crash so it might not be the easiest
thing to try for now. Maybe you can try placing the delay between the rxreq
and txresp statements in the server section and try to make sure your CLI
access happens in the middle :-/

Thanks,
Willy



Re: regtests - with option http-use-htx

2018-12-15 Thread PiBa-NL

Hi Willy,
Op 15-12-2018 om 17:06 schreef Willy Tarreau:

Hi Pieter,

On Sat, Dec 15, 2018 at 04:52:10PM +0100, PiBa-NL wrote:

Hi List, Willy,

Trying to run some existing regtests with added option: option http-use-htx

Using: HA-Proxy version 1.9-dev10-c11ec4a 2018/12/15

I get the below issues so far:

 based on /reg-tests/connection/b0.vtc
Takes 8 seconds to pass, in a slightly modified manner (1.1 > 2.0 expectation
for syslog). This surely needs a closer look?
#    top  TEST ./htx-test/connection-b0.vtc passed (8.490)

It looks exactly like another issue we've found when a content-length
is missing but the close is not seen, which is the same in your case
with the first proxy returning the 503 error page by default. Christopher
told me he understands what's happening in this situation (at least for
the one we've met), I'm CCing him in case this report fuels his thoughts.

Ok thanks.



 based on /reg-tests/stick-table/b1.vtc
Difference here is the use=1 vs use=0 , maybe that is better, but then the
'old' expectation seems wrong, and the bug is the case without htx?

 h1    0.0 CLI recv|0x8026612c0: key=127.0.0.1 use=1 exp=0 gpt0=0 gpc0=0
gpc0_rate(1)=0 conn_rate(1)=1 http_req_cnt=1 http_req_rate(1)=1
http_err_cnt=0 http_err_rate(1)=0
 h1    0.0 CLI recv|
 h1    0.0 CLI expect failed ~ "table: http1, type: ip, size:1024,
used:(0|1\n0x[0-9a-f]*: key=127\.0\.0\.1 use=0 exp=[0-9]* gpt0=0 gpc0=0
gpc0_rate\(1\)=0 conn_rate\(1\)=1 http_req_cnt=1
http_req_rate\(1\)=1 http_err_cnt=0 http_err_rate\(1\)=0)\n$"

Hmmm here I think we're really hitting corner cases depending on whether
the tracked counters are released before or after the logs are emitted.
In the case of htx, the logs are emitted slightly later than before,
which may induce this. Quite honestly I'd be inclined to set use=[01]
here in the regex to cover the race condition that exists in both cases,
as there isn't any single good value. Christopher, are you also OK with
this ? I can do the patch if you're OK.
It's not about emitting logs, it's querying the stats admin socket, and
even with an added 'delay 2' before doing so the results seem to show the
same difference with/without htx. I don't think it's a matter of 'timing'?


Regards,

PiBa-NL (Pieter)

**   c1    0.0 === expect resp.status == 503
 c1    0.0 EXPECT resp.status (503) == "503" match
***  c1    0.0 closing fd 7
**   c1    0.0 Ending
**   top   0.0 === delay 2
***  top   0.0 delaying 2 second(s)
**   top   2.1 === haproxy h1 -cli {
**   h1    2.1 CLI starting
**   h1    2.1 CLI waiting
***  h1    2.1 CLI connected fd 7 from ::1 26202 to ::1 26153
**   h1    2.1 === send "show table http1"
 h1    2.1 CLI send|show table http1
**   h1    2.1 === expect ~ "table: http1, type: ip, size:1024, 
used:(0|1\\n0x[...
***  h1    2.1 debug|0001:GLOBAL.accept(0005)=000b from [::1:26202] 
ALPN=

 h1    2.1 CLI connection normally closed
***  h1    2.1 CLI closing fd 7
***  h1    2.1 debug|0001:GLOBAL.srvcls[adfd:]
***  h1    2.1 debug|0001:GLOBAL.clicls[adfd:]
***  h1    2.1 debug|0001:GLOBAL.closed[adfd:]
 h1    2.1 CLI recv|# table: http1, type: ip, size:1024, used:1
 h1    2.1 CLI recv|0x8026612c0: key=127.0.0.1 use=1 exp=0 gpt0=0 
gpc0=0 gpc0_rate(1)=0 conn_rate(1)=1 http_req_cnt=1 
http_req_rate(1)=1 http_err_cnt=0 http_err_rate(1)=0

 h1    2.1 CLI recv|
 h1    2.1 CLI expect failed ~ "table: http1, type: ip, size:1024, 
used:(0|1\n0x[0-9a-f]*: key=127\.0\.0\.1 use=0 exp=[0-9]* gpt0=0 gpc0=0 
gpc0_rate\(1\)=0 conn_rate\(1\)=1 http_req_cnt=1 
http_req_rate\(1\)=1 http_err_cnt=0 http_err_rate\(1\)=0)\n$"





Re: Random with Two Choices Load Balancing Algorithm

2018-12-15 Thread Willy Tarreau
Hi Norman,

On Thu, Dec 06, 2018 at 04:57:18PM +, Norman Branitsky wrote:
> NGINX just announced the following load balancing method as default for their 
> Ingress Controller for Kubernetes.
> Will this appear on the HAProxy roadmap?

That's interesting because we've had a discussion about this very same
algo not too long ago with a coworker. I said that it should be trivial
to implement, though I think that it might make sense to make the number
of tries configurable. Indeed, I need to read the complete thesis on it
but I suspect it can make sense to take more than 2 tries if the number
of servers and/or LB nodes (or the product of the two) is very high. Maybe
we'll end up picking log()+1 or something like this.

But yes that's definitely very interesting.

Thanks,
Willy



Re: regtests - with option http-use-htx

2018-12-15 Thread Willy Tarreau
Hi Pieter,

On Sat, Dec 15, 2018 at 04:52:10PM +0100, PiBa-NL wrote:
> Hi List, Willy,
> 
> Trying to run some existing regtests with added option: option http-use-htx
> 
> Using: HA-Proxy version 1.9-dev10-c11ec4a 2018/12/15
> 
> I get the below issues so far:
> 
>  based on /reg-tests/connection/b0.vtc
> Takes 8 seconds to pass, in a slightly modified manner (1.1 > 2.0 expectation
> for syslog. This surely needs a closer look?
> #    top  TEST ./htx-test/connection-b0.vtc passed (8.490)

It looks exactly like another issue we've found when a content-length
is missing but the close is not seen, which is the same in your case
with the first proxy returning the 503 error page by default. Christopher
told me he understands what's happening in this situation (at least for
the one we've met), I'm CCing him in case this report fuels his thoughts.

>  based on /reg-tests/stick-table/b1.vtc
> Difference here is the use=1 vs use=0 , maybe that is better, but then the
> 'old' expectation seems wrong, and the bug is the case without htx?
> 
>  h1    0.0 CLI recv|0x8026612c0: key=127.0.0.1 use=1 exp=0 gpt0=0 gpc0=0
> gpc0_rate(1)=0 conn_rate(1)=1 http_req_cnt=1 http_req_rate(1)=1
> http_err_cnt=0 http_err_rate(1)=0
>  h1    0.0 CLI recv|
>  h1    0.0 CLI expect failed ~ "table: http1, type: ip, size:1024,
> used:(0|1\n0x[0-9a-f]*: key=127\.0\.0\.1 use=0 exp=[0-9]* gpt0=0 gpc0=0
> gpc0_rate\(1\)=0 conn_rate\(1\)=1 http_req_cnt=1
> http_req_rate\(1\)=1 http_err_cnt=0 http_err_rate\(1\)=0)\n$"

Hmmm here I think we're really hitting corner cases depending on whether
the tracked counters are released before or after the logs are emitted.
In the case of htx, the logs are emitted slightly later than before,
which may induce this. Quite honestly I'd be inclined to set use=[01]
here in the regex to cover the race condition that exists in both cases,
as there isn't any single good value. Christopher, are you also OK with
this ? I can do the patch if you're OK.

> Regards,
> 
> PiBa-NL (Pieter)
> 

Thanks,
Willy

> #commit b406b87
> # BUG/MEDIUM: connection: don't store recv() result into trash.data
> #
> # Cyril Bonté discovered that the proxy protocol randomly fails since
> # commit 843b7cb ("MEDIUM: chunks: make the chunk struct's fields match
> # the buffer struct"). This is because we used to store recv()'s return
> # code into trash.data which is now unsigned, so it never compares as
> # negative against 0. Let's clean this up and test the result itself
> # without storing it first.
> 
> varnishtest "PROXY protocol random failures"
> 
> feature ignore_unknown_macro
> 
> syslog Slog_1 -repeat 8 -level info {
> recv
> expect ~ "Connect from .* to ${h1_ssl_addr}:${h1_ssl_port}"
> recv
> expect ~ "ssl-offload-http/http .* \"POST /[1-8] HTTP/2\\.0\""
> } -start
> 
> haproxy h1 -conf {
> global
> nbproc 4
> nbthread 4
> tune.ssl.default-dh-param 2048
> stats bind-process 1
> log ${Slog_1_addr}:${Slog_1_port} len 2048 local0 debug err
> 
> defaults
> mode http
> timeout client 1s
> timeout server 1s
> timeout connect 1s
> log global
> 
> option http-use-htx
> 
> listen http
> bind-process 1
> bind unix@${tmpdir}/http.socket accept-proxy name ssl-offload-http
> option forwardfor
> 
> listen ssl-offload-http
> option httplog
> bind-process 2-4
> bind "fd@${ssl}" ssl crt ${testdir}/common.pem ssl no-sslv3 alpn h2,http/1.1
> server http unix@${tmpdir}/http.socket send-proxy
> } -start
> 
> 
> shell {
> HOST=${h1_ssl_addr}
> if [ "$HOST" = "::1" ] ; then
> HOST="\[::1\]"
> fi
> for i in 1 2 3 4 5 6 7 8 ; do
> urls="$urls https://$HOST:${h1_ssl_port}/$i";
> done
> curl -i -k -d 'x=x' $urls & wait $!
> }
> 
> syslog Slog_1 -wait

> # commit 3e60b11
> # BUG/MEDIUM: stick-tables: Decrement ref_cnt in table_* converters
> #
> # When using table_* converters ref_cnt was incremented
> # and never decremented causing entries to not expire.
> #
> # The root cause appears to be that stktable_lookup_key()
> # was called within all sample_conv_table_* functions which was
> # incrementing ref_cnt and not decrementing after completion.
> #
> # Added stktable_release() to the end of each sample_conv_table_*
> # function and reworked the end logic to ensure that ref_cnt is
> # always decremented after use.
> #
> # This should be backported to 1.8
> 
> varnishtest "stick-tables: Test expirations when used with table_*"
> 
> # As some macros for haproxy are used in this file, this line is mandatory.
> feature ignore_unknown_macro
> 
> # Do nothing.
> server s1 {
> } -start
> 
> haproxy h1 -conf {
> # Configuration file of 'h1' haproxy instance.
> defaults
> mode   http
> timeout connect 5s
> timeout server  30s
> timeout client

Re: [PATCHv2] ssl: Fix compilation without deprecated OpenSSL 1.1 APIs

2018-12-15 Thread Willy Tarreau
On Fri, Dec 14, 2018 at 08:47:02AM -0800, Rosen Penev wrote:
> Removing deprecated APIs is an optional part of OpenWrt's build system to
> save some space on embedded devices.
> 
> Also added compatibility for LibreSSL.

Looks good, now applied. Thanks for the explanation by the way.

Willy



regtests - with option http-use-htx

2018-12-15 Thread PiBa-NL

Hi List, Willy,

Trying to run some existing regtests with added option: option http-use-htx

Using: HA-Proxy version 1.9-dev10-c11ec4a 2018/12/15

I get the below issues so far:

 based on /reg-tests/connection/b0.vtc
Takes 8 seconds to pass, in a slightly modified manner (1.1 > 2.0
expectation for syslog). This surely needs a closer look?

#    top  TEST ./htx-test/connection-b0.vtc passed (8.490)

 based on /reg-tests/stick-table/b1.vtc
Difference here is the use=1 vs use=0 , maybe that is better, but then 
the 'old' expectation seems wrong, and the bug is the case without htx?


 h1    0.0 CLI recv|0x8026612c0: key=127.0.0.1 use=1 exp=0 gpt0=0 
gpc0=0 gpc0_rate(1)=0 conn_rate(1)=1 http_req_cnt=1 
http_req_rate(1)=1 http_err_cnt=0 http_err_rate(1)=0

 h1    0.0 CLI recv|
 h1    0.0 CLI expect failed ~ "table: http1, type: ip, size:1024, 
used:(0|1\n0x[0-9a-f]*: key=127\.0\.0\.1 use=0 exp=[0-9]* gpt0=0 gpc0=0 
gpc0_rate\(1\)=0 conn_rate\(1\)=1 http_req_cnt=1 
http_req_rate\(1\)=1 http_err_cnt=0 http_err_rate\(1\)=0)\n$"


Regards,

PiBa-NL (Pieter)

#commit b406b87
# BUG/MEDIUM: connection: don't store recv() result into trash.data
#
# Cyril Bonté discovered that the proxy protocol randomly fails since
# commit 843b7cb ("MEDIUM: chunks: make the chunk struct's fields match
# the buffer struct"). This is because we used to store recv()'s return
# code into trash.data which is now unsigned, so it never compares as
# negative against 0. Let's clean this up and test the result itself
# without storing it first.

varnishtest "PROXY protocol random failures"

feature ignore_unknown_macro

syslog Slog_1 -repeat 8 -level info {
recv
expect ~ "Connect from .* to ${h1_ssl_addr}:${h1_ssl_port}"
recv
expect ~ "ssl-offload-http/http .* \"POST /[1-8] HTTP/2\\.0\""
} -start

haproxy h1 -conf {
global
nbproc 4
nbthread 4
tune.ssl.default-dh-param 2048
stats bind-process 1
log ${Slog_1_addr}:${Slog_1_port} len 2048 local0 debug err

defaults
mode http
timeout client 1s
timeout server 1s
timeout connect 1s
log global

option http-use-htx

listen http
bind-process 1
bind unix@${tmpdir}/http.socket accept-proxy name ssl-offload-http
option forwardfor

listen ssl-offload-http
option httplog
bind-process 2-4
bind "fd@${ssl}" ssl crt ${testdir}/common.pem ssl no-sslv3 alpn h2,http/1.1
server http unix@${tmpdir}/http.socket send-proxy
} -start


shell {
HOST=${h1_ssl_addr}
if [ "$HOST" = "::1" ] ; then
HOST="\[::1\]"
fi
for i in 1 2 3 4 5 6 7 8 ; do
urls="$urls https://$HOST:${h1_ssl_port}/$i";
done
curl -i -k -d 'x=x' $urls & wait $!
}

syslog Slog_1 -wait
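The sign bug described in the b406b87 commit comment above is easy to demonstrate outside of C; the following is purely an illustration of the principle (trash.data itself is an unsigned C field, not Python):

```python
def error_check_after_unsigned_store(ret):
    """Store recv()'s return value in an unsigned 32-bit field, the way
    trash.data behaved after the chunk/buffer struct unification, then
    apply the old "< 0" error test on the stored value."""
    stored = ret & 0xFFFFFFFF   # what an unsigned field retains of ret
    return stored < 0           # this error check can never be true

# recv() signals an error with -1, but once stored it compares as
# 4294967295, so the error branch is silently skipped:
print(error_check_after_unsigned_store(-1))  # False
```

Testing the return code itself before storing it, as the fix does, keeps the sign information intact.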
# commit 3e60b11
# BUG/MEDIUM: stick-tables: Decrement ref_cnt in table_* converters
#
# When using table_* converters ref_cnt was incremented
# and never decremented causing entries to not expire.
#
# The root cause appears to be that stktable_lookup_key()
# was called within all sample_conv_table_* functions which was
# incrementing ref_cnt and not decrementing after completion.
#
# Added stktable_release() to the end of each sample_conv_table_*
# function and reworked the end logic to ensure that ref_cnt is
# always decremented after use.
#
# This should be backported to 1.8

varnishtest "stick-tables: Test expirations when used with table_*"

# As some macros for haproxy are used in this file, this line is mandatory.
feature ignore_unknown_macro

# Do nothing.
server s1 {
} -start

haproxy h1 -conf {
# Configuration file of 'h1' haproxy instance.
defaults
mode   http
timeout connect 5s
timeout server  30s
timeout client  30s
option http-use-htx

frontend http1
bind "fd@${my_frontend_fd}"
stick-table size 1k expire 1ms type ip store conn_rate(10s),http_req_cnt,http_err_cnt,http_req_rate(10s),http_err_rate(10s),gpc0,gpc0_rate(10s),gpt0
http-request track-sc0 req.hdr(X-Forwarded-For)
http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_http_req_cnt(http1) -m int lt 0  }
http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_trackers(http1) -m int lt 0  }
http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),in_table(http1) -m int lt 0  }
http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_bytes_in_rate(http1) -m int lt 0  }
http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_bytes_out_rate(http1) -m int lt 0  }
http-request redirect location https://${s1_addr}:${s1_port}/ if { req.hdr(X-Forwarded-For),table_conn_cnt(http1) -m int lt 0  }
http
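The 3e60b11 fix referenced in the commit comment of this test boils down to an acquire/release pairing: every lookup that pins a stick-table entry must be matched by a release on every exit path. A rough sketch of that discipline (illustrative names only, not haproxy's actual structures):

```python
class Entry:
    def __init__(self):
        self.ref_cnt = 0

class Table:
    def __init__(self):
        self.entry = Entry()

    def lookup(self):
        self.entry.ref_cnt += 1     # pins the entry: it cannot expire
        return self.entry

    def release(self, entry):
        entry.ref_cnt -= 1          # unpinned entries may expire again

def table_conv(table):
    """Stand-in for a sample_conv_table_* function: look up, read a
    value, and always release before returning."""
    entry = table.lookup()
    try:
        return entry.ref_cnt        # stand-in for reading a counter
    finally:
        table.release(entry)        # runs on every exit path

t = Table()
table_conv(t)
print(t.entry.ref_cnt)  # 0 -- back to unpinned after the converter runs
```

Without the release in the finally-equivalent path, ref_cnt only ever grows and the entry never expires, which is exactly what the regtest above checks for.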

Re: agent-check requires newline in response?

2018-12-15 Thread Willy Tarreau
Hi Nick,

On Fri, Dec 14, 2018 at 05:43:49PM +, Nick Ramirez wrote:
> In the documentation for agent-check
> (https://cbonte.github.io/haproxy-dconv/1.9/configuration.html#agent-check)
> it says that the string returned by the agent may be optionally terminated
> by '\r' or '\n'. However, in my tests, it was mandatory to end the response
> with this. Should the word "optionally" be removed from the docs?

At least one of them is required. Since it's TCP we need to find the end
of the message and to know it was not truncated. I agree that the wording
is confusing, it says :

  "The string is made of a series of words delimited by spaces, tabs or
   commas in any order, optionally terminated by '\r' and/or '\n'"

Which was meant to say that one of them was optional. Maybe we should
say this instead :

  "The string is made of a series of words delimited by spaces, tabs or
   commas in any order, terminated by the first '\r' or '\n' met"

(this basically is what the comments in the code say BTW). What do you
think ?

Thanks,
Willy
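A minimal agent that sends a properly terminated reply might look like the sketch below (status words are arbitrary; any of '\r', '\n' or '\r\n' serves as the terminator):

```python
import socket
import threading

def build_agent_reply(*words):
    """Join agent status words with spaces and terminate with '\n'.
    HAProxy reads the reply up to the first '\r' or '\n'; without a
    terminator it cannot tell that the string was not truncated."""
    return (" ".join(words) + "\n").encode("ascii")

# Minimal round-trip: bind first, then accept in a thread, so the
# client cannot connect before the listener exists.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def agent():
    conn, _ = srv.accept()
    conn.sendall(build_agent_reply("up", "50%"))
    conn.close()

t = threading.Thread(target=agent)
t.start()
with socket.create_connection(("127.0.0.1", port)) as c:
    print(c.recv(64))  # b'up 50%\n'
t.join()
srv.close()
```

An agent that omits the final '\r'/'\n' leaves haproxy unable to decide whether more bytes are coming, which matches the behaviour Nick observed.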



Re: HTTP/2 to backend server fails health check when 'option httpchk' set

2018-12-15 Thread Willy Tarreau
Hi Nick,

On Fri, Dec 14, 2018 at 10:43:04PM +, Nick Ramirez wrote:
> This may be something very simple that I am missing. I am using the latest
> HAProxy Docker image, which is using HAProxy 1.9-dev10 2018/12/08. It is
> using HTTP/2 to the backend web server (Caddy).
> 
> It fails its health check if I uncomment the "option httpchk" line:
> 
> backend webservers
>   balance roundrobin
>   #option httpchk
>   server server1 web:443 check ssl verify none alpn h2
> 
> 
> With that line commented out, it works.
> 
> The project is on Github:
> https://github.com/NickMRamirez/experiment-haproxy-http2
> 
> Am I doing something wrong? It also works if I remove "option http-use-htx"
> and "alpn h2" and uncomment "option httpchk".

You're not really doing anything wrong, it's just the current limitation
of health checks that we've wanted to redesign for years and that deserves
a year's worth of work. Currently health checks are only made of a TCP string
sent over the socket and checked in return. Since 1.6 or so, we introduced
the ability to send this string over SSL (when "check-ssl" is set) but that's
basically the limit.
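The behaviour described above — send a fixed string over the socket, read the reply, match it — can be sketched as follows (an illustration of the principle only, not haproxy's implementation):

```python
import socket

def tcp_string_check(host, port, send, expect, timeout=5.0):
    """Mimic the shape of a basic TCP health check: connect, optionally
    send a fixed byte string, read the reply, and pass only if the
    expected pattern appears in it. Anything protocol-aware (HTTP
    semantics, H2 preface/settings, ...) is out of scope."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            if send:
                s.sendall(send)
            data = s.recv(4096)
            return expect in data
    except OSError:
        return False
```

This is why an "option httpchk" probe sent to an h2-only server fails: the check string is plain HTTP/1 bytes, regardless of what protocol the data plane negotiates.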

In fact, health checks are completely separate from the traffic. You can
see them as being part of the control plane while the traffic is the data
plane. You're not even forced to send them to the same IP, ports, nor
protocol as your traffic. They only pre-set the same target IP and port
for convenience, but that's all.

I've thought we could at least implement an H2 preface+settings check but
this would provide a very low value for quite some hassle to make it work
for the user, so I think it would only steer the efforts away from a real
redesign of better HTTP checks.

However we should at the very least document this limitation more clearly
for 1.9, as chances are that a number of people will want to try this :-/

Thanks,
Willy



Re: [PATCH 1/1] REGTEST: Add a reg test for HTTP cookies.

2018-12-15 Thread Willy Tarreau
On Fri, Dec 14, 2018 at 08:07:57PM +0100, flecai...@haproxy.com wrote:
> From: Frédéric Lécaille 
> 
> This script tests the "cookie  insert indirect" directive with
> header checks on server and client side. syslog messages are also
> checked, especially --II (invalid, insert) flags logging.

Applied, thank you Fred!
Willy



Re: [PATCH] BUG/MEDIUM: Expose all converters & fetches

2018-12-15 Thread Willy Tarreau
Hi Robin,

On Fri, Dec 14, 2018 at 06:28:16AM +, Robin H. Johnson wrote:
> On Fri, Dec 07, 2018 at 01:14:47PM +0100, Willy Tarreau wrote:
> > I had a quick look, some converters use check_operator() which creates
> > a variable upon each invocation of the parsing function. Some people
> > might inadvertently get caught by using these ones to look up cookie
> values or session identifiers for example and not realize that zombie
> > variables are lying behind. Another one is smp_check_const_bin() which
> > will call parse_binary(), returning an allocated memory block representing
> > the binary string, that nobody will free either. And for converters you
> > can see that all map-related functions use sample_load_map(), which will
> > attempt to load a file from the file system. Not only this must not be
> > attempted at run time for obvious performance reasons (will not work if
> > the config properly uses chroot anyway), but it may also use huge amounts
> > of memory on each call.
> That's a horrible side-effect for functions that are intended to
> validate arguments to functions.

Yes, and it would probably deserve being improved!

> - Should we introduce a field into sample_conv/sample_fetch that
>   describes if the argument validation has side-effects?

That could be one option. Another one might involve having two steps,
one to validate the arguments (which would be safe) and one to resolve
or convert them. In this case we could possibly refuse to call those
with such functions.

> - Can we improve the fetch/conv to not have side-effects, or at least
>   cleanup afterwards?

Not really, when you figure that some involve loading stuff from a file,
there's no real cleanup. Also this still does not address the thread safety.

> Here's my validation of each of the checks so far.
> 
> safe:
> sample_conv_json_check
> sample_conv_regsub_check
> sample_conv_field_check
> _51d_conv_check
> val_distcc
> val_payload_lv
> val_hdr
> smp_check_const_bool
> smp_check_const_meth
> 
> unsafe:
> check_operator -> vars_check_arg -> register_name
> smp_check_concat -> vars_check_arg  -> register_name
> smp_check_strcmp -> vars_check_arg  -> register_name
> sample_load_map -> (it's messy)
> conv_check_var -> vars_check_arg -> register_name
> smp_check_const_bin -> parse_binary -> ??

Maybe as a first step it would make sense to extend the current "if" block
in the Lua code to cover the extra safe ones above, and to put a comment
in front of each of them indicating that they must remain side-effect-free
because they're explicitly allowed to be called from Lua ?

Regards,
Willy
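The flag-based variant Robin floated above (and later sent as the val_args_flags patch) boils down to something like the following sketch — all names are illustrative, not haproxy's actual API:

```python
# Tag each converter's argument checker with a "safe" flag and consult
# it before binding the converter as a method in the Lua context.
SAFE_IN_LUA = 0x01

def check_json_args(args):
    return True                 # pure validation, no side effects

def check_map_args(args):
    # would load a map file at runtime: allocation, filesystem access,
    # chroot problems -- everything Willy lists above
    raise RuntimeError("side effects: must not run from Lua")

CONVERTERS = {
    "json": {"check": check_json_args, "flags": SAFE_IN_LUA},
    "map":  {"check": check_map_args,  "flags": 0},
}

def exposable_to_lua(name):
    """A converter gets a Lua object method only if its argument
    checker is flagged side-effect-free."""
    return bool(CONVERTERS[name]["flags"] & SAFE_IN_LUA)
```

Compared to the explicit whitelist in hlua.c, the flag keeps the safety knowledge next to each converter's declaration, so new converters cannot be exposed by accident.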



Re: stdout logging makes syslog logging fail.. 1.9-dev10-6e0d8ae

2018-12-15 Thread Willy Tarreau
On Sat, Dec 15, 2018 at 12:40:08AM +0100, PiBa-NL wrote:
> Hi List, Willy,
> 
> stdout logging makes syslog logging fail.. regtest that reproduces the issue
> attached.

Now fixed, thanks Pieter. I was confused by the way the initialized
state of the log FDs is detected there, so this only works when using
an FD alone or an FD combined with UNIX socket addresses, but it doesn't
work when using multiple FDs nor FD with an IP address, because the FD
overrides the FD used to send UDP logs.

Willy



Re: crash with regtest: /reg-tests/connection/h00001.vtc after commit f157384

2018-12-15 Thread Willy Tarreau
On Sat, Dec 15, 2018 at 12:32:28AM +0100, PiBa-NL wrote:
> Hi List, Willy,
> 
> Current 1.9-dev master ( 6e0d8ae ) crashes with regtest:
> /reg-tests/connection/h00001.vtc stack below, it fails after commit f157384.
> 
> Can someone check? Thanks.

Now fixed, thank you Pieter. I don't know how I failed to see it since
I *am* using the regtests. Maybe not trained enough to run them yet :-/

Willy



Re: Quick update on 1.9

2018-12-15 Thread Willy Tarreau
On Sat, Dec 15, 2018 at 01:18:18PM +0100, PiBa-NL wrote:
> Hi Willy,
> 
> Op 15-12-2018 om 6:15 schreef Willy Tarreau:
> > > - Compression corrupts data(Christopher is investigating):
> > > https://www.mail-archive.com/haproxy@formilux.org/msg32059.html
> > This one was fixed, he had to leave quickly last evening so he
> > couldn't respond, but it was due to some of my changes to avoid
> > copies, I failed to grasp some corner cases of htx.
> > 
> Could it be it is not fixed/committed in the git repository? (b.t.w. I don't
> use htx in the vtc testfile ..).

Ah so it must be different, I mixed it because what I broke was revealed
by compression.

> As "6e0d8ae BUG/MINOR: mworker: don't use unitialized mworker_proc struct
> master" seems to be the latest commit and the .vtc file still produces files
> with different hashes for the 3 curl commands for me.

OK, I'll retry, thanks for the details.

> Besides that and as usual thanks for your elaborate response on all the
> other subjects :).

You're welcome :-)

Willy



Re: Quick update on 1.9

2018-12-15 Thread PiBa-NL

Hi Willy,

Op 15-12-2018 om 6:15 schreef Willy Tarreau:

- Compression corrupts data(Christopher is investigating):
https://www.mail-archive.com/haproxy@formilux.org/msg32059.html

This one was fixed, he had to leave quickly last evening so he
couldn't respond, but it was due to some of my changes to avoid
copies, I failed to grasp some corner cases of htx.

Could it be it is not fixed/committed in the git repository? (b.t.w. I 
don't use htx in the vtc testfile ..).
As "6e0d8ae BUG/MINOR: mworker: don't use unitialized mworker_proc 
struct master" seems to be the latest commit and the .vtc file still 
produces files with different hashes for the 3 curl commands for me.


Besides that and as usual thanks for your elaborate response on all the 
other subjects :).


Regards,

PiBa-NL (Pieter)