Re: haproxy-systemd-wrapper exit code problem

2016-11-03 Thread Willy Tarreau
Hi Gabriele,

I spent some time on this issue and fixed it. The wrapper's code was
simply wrong, as wait()'s status must not be passed directly to exit(),
otherwise it gets truncated. It's now OK, I've just merged the attached patch,
which I'll backport to 1.6 and 1.5.

Thanks for reporting this bug,
Willy
From f7659cb10cb0420c7ca06fad1067207021d2a078 Mon Sep 17 00:00:00 2001
From: Willy Tarreau 
Date: Thu, 3 Nov 2016 20:31:40 +0100
Subject: BUG/MEDIUM: systemd-wrapper: return correct exit codes

Gabriele Cerami reported that the exit codes of the systemd-wrapper are
wrong. In short, it directly returns the raw status from the wait()
syscall, which is a composite value made of the error code and the signal
number. In general it contains the signal number in the lower bits and the
error code in the higher bits, but exit() truncates it to the lowest 8 bits,
causing config validations to incorrectly report a success. Example :

  $ ./haproxy-systemd-wrapper -c -f /dev/null
  <7>haproxy-systemd-wrapper: executing /tmp/haproxy -c -f /dev/null -Ds
  Configuration file has no error but will not start (no listener) => exit(2).
  <5>haproxy-systemd-wrapper: exit, haproxy RC=512
  $ echo $?
  0

If the process is killed however, the signal number is directly reported
in the exit code.
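
For illustration, here is a minimal standalone program (not part of the
patch) showing the truncation on a hypothetical raw status of 512, which
really means "exited with code 2" :

  #include <stdio.h>
  #include <sys/wait.h>

  int main(void)
  {
      /* hypothetical raw wait() status: exit code 2 in the upper byte, no signal */
      int status = 2 << 8;

      /* the W* macros decode it correctly */
      printf("WIFEXITED=%d WEXITSTATUS=%d\n", WIFEXITED(status), WEXITSTATUS(status));

      /* but exit(status) only keeps the lowest 8 bits, hence 0 here */
      printf("exit(status) would report %d\n", status & 0xff);
      return 0;
  }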

Let's fix all this to ensure that the exit code matches what the shell does,
which means that codes 0..127 are for exit codes, codes 128..254 for signals,
and code 255 for an unknown exit code. Now the return code is correct :

  $ ./haproxy-systemd-wrapper -c -f /dev/null
  <7>haproxy-systemd-wrapper: executing /tmp/haproxy -c -f /dev/null -Ds
  Configuration file has no error but will not start (no listener) => exit(2).
  <5>haproxy-systemd-wrapper: exit, haproxy RC=2
  $ echo $?
  2

  $ ./haproxy-systemd-wrapper -f /tmp/cfg.conf
  <7>haproxy-systemd-wrapper: executing /tmp/haproxy -f /tmp/cfg.conf -Ds
  ^C
  <5>haproxy-systemd-wrapper: exit, haproxy RC=130
  $ echo $?
  130

This fix must be backported to 1.6 and 1.5.
---
 src/haproxy-systemd-wrapper.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/src/haproxy-systemd-wrapper.c b/src/haproxy-systemd-wrapper.c
index b426f96..f6a9c85 100644
--- a/src/haproxy-systemd-wrapper.c
+++ b/src/haproxy-systemd-wrapper.c
@@ -295,6 +295,16 @@ int main(int argc, char **argv)
 		}
 	}
 
+	/* return either exit code or signal+128 */
+	if (WIFEXITED(status))
+		status = WEXITSTATUS(status);
+	else if (WIFSIGNALED(status))
+		status = 128 + WTERMSIG(status);
+	else if (WIFSTOPPED(status))
+		status = 128 + WSTOPSIG(status);
+	else
+		status = 255;
+
 	fprintf(stderr, SD_NOTICE "haproxy-systemd-wrapper: exit, haproxy RC=%d\n",
 		status);
 	return status;
-- 
1.7.12.1



Re: load-server-state-from-file does not preserve the DRAIN state

2016-11-03 Thread Willy Tarreau
sent too fast

On Thu, Nov 03, 2016 at 06:33:03PM +0100, Willy Tarreau wrote:
> On Thu, Nov 03, 2016 at 05:45:34PM +0100, Willy Tarreau wrote:
> > Hi Thomas,
> > 
> > On Sun, Oct 30, 2016 at 09:14:53PM +0200, Thomas Raehalme wrote:
> > > Hi,
> > > 
> > > In January[1] and June[2] there were discussions regarding the DRAIN state
> > > not being preserved when loading server state from file. Looking at the
> > > source it looks to me as if the issue has not been resolved.
> > > 
> > > Have you come to a conclusion if the current behaviour is how it will be or
> > > if a fix can be expected at some time?
> > 
> > By looking at the code, I think there is a bug. I'm not yet certain because
> > there are various combinations but I think one of them is not correct. In
> > fact the goal is to ensure that :
> >   - when the config doesn't change, the state file always has precedence ;
> >   - when the config changes, it takes precedence, otherwise it's impossible
> > to take config changes into account, and the first purpose of the server
> > state file is to keep state across reloads caused by config updates!
> > 
> > I need to continue to study this part to understand if the difference between
> > the code and the comment is intentional or a result of another corner case we
> > initially missed.
> 
> OK now after writing down the various cases and reviewing the design notes
> we exchanged last year with Baptiste on this subject, it's clear that the
> comment is right and the code wrong (the code does the exact opposite of the comment).
> 
> This patch fixes it for me. Could you please confirm as well ? If so I'll
> backport it into 1.6 so that 1.6.10 gets it (ie I'm interested in knowing
> more or less quickly :-)).
> 
> Thanks!
> Willy
From 69000df4055a1fc6061192fa205ab2b53e6798d3 Mon Sep 17 00:00:00 2001
From: Willy Tarreau 
Date: Thu, 3 Nov 2016 18:19:49 +0100
Subject: BUG/MEDIUM: srv-state: properly restore the DRAIN state

There were several reports about the DRAIN state not being properly
restored upon reload.

It happens that the condition in the code does exactly the opposite
of what the comment says, and the comment is right so the code is
wrong.

It's worth noting that the conditions are complex here due to the 2
available methods to set the drain state (CLI/agent, and config's
weight). To paraphrase the updated comment in the code, there are
two possible reasons for FDRAIN to have been present :
  - previous config weight was zero
  - "set server b/s drain" was sent to the CLI

In the first case, we simply want to drop this drain state if the new
weight is not zero anymore, meaning the administrator has intentionally
turned the weight back to a positive value to enable the server again
after an operation. In the second case, the drain state was forced on
the CLI regardless of the config's weight so we don't want a change to
the config weight to lose this status. What this means is :
  - if previous weight was 0 and new one is >0, drop the DRAIN state.
  - if the previous weight was >0, keep it.
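
To make this concrete, here is a small sketch of the intended decision (not
the actual haproxy code; as in srv_update_state(), srv_iweight is the weight
read from the server-state file and srv->iweight the weight from the freshly
parsed configuration) :

  if (srv_iweight == 0 && srv->iweight > 0) {
      /* DRAIN came from a zero weight in the previous config and the admin
       * has raised it again: drop the drain state, the server is re-enabled. */
  }
  else {
      /* the weight is still zero, or the drain was forced from the CLI/agent:
       * re-apply the forced DRAIN state so it survives the reload. */
  }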

This fix must be backported to 1.6.
---
 src/server.c | 22 +++---
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/src/server.c b/src/server.c
index 68fa800..340bba7 100644
--- a/src/server.c
+++ b/src/server.c
@@ -2161,15 +2161,23 @@ static void srv_update_state(struct server *srv, int version, char **params)
 			/* apply drain mode if server is currently enabled */
 			if (!(srv->admin & SRV_ADMF_FMAINT) && (srv_admin_state & SRV_ADMF_FDRAIN)) {
 				/* The SRV_ADMF_FDRAIN flag is inherited when srv->iweight is 0
-				 * (srv->iweight is the weight set up in configuration)
-				 * so we don't want to apply it when srv_iweight is 0 and
-				 * srv->iweight is greater than 0. Purpose is to give the
-				 * chance to the admin to re-enable this server from configuration
-				 * file by setting a new weight > 0.
+				 * (srv->iweight is the weight set up in configuration).
+				 * There are two possible reasons for FDRAIN to have been present :
+				 *   - previous config weight was zero
+				 *   - "set server b/s drain" was sent to the CLI
+				 *
+				 * In the first case, we simply want to drop this drain state
+				 * if the new weight is not zero anymore, meaning the administrator
+				 * has intentionally turned the weight back to a positive value to
+				 * enable the server again after an operation. In the second case,

Re: load-server-state-from-file does not preserve the DRAIN state

2016-11-03 Thread Willy Tarreau
On Thu, Nov 03, 2016 at 05:45:34PM +0100, Willy Tarreau wrote:
> Hi Thomas,
> 
> On Sun, Oct 30, 2016 at 09:14:53PM +0200, Thomas Raehalme wrote:
> > Hi,
> > 
> > In January[1] and June[2] there were discussions regarding the DRAIN state
> > not being preserved when loading server state from file. Looking at the
> > source it looks to me as if the issue has not been resolved.
> > 
> > Have you come to a conclusion if the current behaviour is how it will be or
> > if a fix can be expected at some time?
> 
> By looking at the code, I think there is a bug. I'm not yet certain because
> there are various combinations but I think one of them is not correct. In
> fact the goal is to ensure that :
>   - when the config doesn't change, the state file always has precedence ;
>   - when the config changes, it takes precedence, otherwise it's impossible
> to take config changes into account, and the first purpose of the server
> state file is to keep state across reloads caused by config updates!
> 
> I need to continue to study this part to understand if the difference between
> the code and the comment is intentional or a result of another corner case we
> initially missed.

OK now after writing down the various cases and reviewing the design notes
we exchanged last year with Baptiste on this subject, it's clear that the
comment is right and the code wrong (the code does the exact opposite of the comment).

This patch fixes it for me. Could you please confirm as well ? If so I'll
backport it into 1.6 so that 1.6.10 gets it (ie I'm interested in knowing
more or less quickly :-)).

Thanks!
Willy



Re: load-server-state-from-file does not preserve the DRAIN state

2016-11-03 Thread Willy Tarreau
Hi Thomas,

On Sun, Oct 30, 2016 at 09:14:53PM +0200, Thomas Raehalme wrote:
> Hi,
> 
> In January[1] and June[2] there were discussions regarding the DRAIN state
> not being preserved when loading server state from file. Looking at the
> source it looks to me as if the issue has not been resolved.
> 
> Have you come to a conclusion if the current behaviour is how it will be or
> if a fix can be expected at some time?

By looking at the code, I think there is a bug. I'm not yet certain because
there are various combinations but I think one of them is not correct. In
fact the goal is to ensure that :
  - when the config doesn't change, the state file always has precedence ;
  - when the config changes, it takes precedence, otherwise it's impossible
to take config changes into account, and the first purpose of the server
state file is to keep state across reloads caused by config updates!

I need to continue to study this part to understand if the difference between
the code and the comment is intentional or a result of another corner case we
initially missed.

Regards,
Willy



HAProxy with MQTT

2016-11-03 Thread Hari Chandrasekhar
Hello,
   Our team is planning to use HAProxy as a TCP load balancer for MQTT
servers. We don't have much familiarity with the HAProxy setup, so we would
like to get some clarity on how the process would work. Please let me know
if this is not the right place to ask questions. Thanks in advance.

We are planning to use HAProxy's SSL termination feature and enforce client
certificate validation. So far, we have been able to get HAProxy to require
clients to present client certificates.
But we are trying to implement some additional client validations, mainly
the following.
   1. Add additional certificate validation by checking the client
identifier presented in the MQTT data (MQTT CONNECT packets) against the
CN in the presented client certificate.
   2. Perform some authorizations based on the certificate and the type of
packets (PUBLISH, SUBSCRIBE, etc.).
(Here is the MQTT specification document -
http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html)

Here are some of our questions.

   -   Is it possible to do the above using HAProxy, and is HAProxy the
   right approach for these purposes?
   -   Are Lua scripts the recommended way to extend functionality like
   this? Are there other plugin mechanisms available?
   -   We were analyzing the sample-fetch options to parse data from the
   request body but found it hard to log these contents. We found the HTTP
   capture options, but those seem specific to HTTP. Are there similar ones
   that are more generic rather than specific to HTTP?
   - According to the docs, payload_lv can read the content length at the
   given offset. What format is used to represent the length of the
   contents? The MQTT protocol uses a similar approach of prefixing certain
   contents with their length, and we are trying to verify that the formats
   are the same. If you could point us to a more detailed document or some
   examples around this, that would be helpful.



Here are snippets from the config file that we are testing with.

global
log-format Client:%ci\ SSLCipher:%sslc\ SSLVersion:%sslv\
BytesUploaded:%U\ TermState:%ts\ StatusCode:%ST\ Retries:%rc\
ClientCertCN:\ %[ssl_c_s_dn(CN)]
\ MQTTClientId\ %[req.payload_lv(13,2)] - (The req.payload_lv part
obviously does not work. )



frontend mqtt-in
bind *:8883 ssl crt crtfile.cer ca-file cafile.pem verify required
mode tcp
tcp-request inspect-delay 3s
acl mqtt_connect req.payload(0,1) -m bin 10
#  tcp-request content capture req.payload(13,2) len 2
tcp-request content reject if !mqtt_connect
default_backend mqtt-out

backend mqtt-out
server mqttserver:1883
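
To illustrate what we have in mind, here is a rough sketch of how we imagine
extending the CONNECT check in the same frontend. The offsets are our
assumption only: they presume an MQTT 3.1.1 CONNECT packet whose
remaining-length field fits in a single byte, so the protocol name "MQTT"
starts at offset 4. Please correct us if this is wrong.

frontend mqtt-in
bind *:8883 ssl crt crtfile.cer ca-file cafile.pem verify required
mode tcp
tcp-request inspect-delay 3s
# first payload byte 0x10 = CONNECT packet type
acl mqtt_connect req.payload(0,1) -m bin 10
# protocol name "MQTT" (hex 4d515454), assuming a 1-byte remaining-length field
acl mqtt_proto req.payload(4,4) -m bin 4d515454
tcp-request content reject if !mqtt_connect || !mqtt_proto
default_backend mqtt-out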



Thanks,
Hari Chandrasekhar

