Re: Backport proposal, opinion needed

2017-04-19 Thread Aleksandar Lazic



On 19-04-2017 13:02, Pavlos Parissis wrote:

On 19/04/2017 12:13 μμ, Willy Tarreau wrote:

Hi all,

Stephan (in Cc) reported me two nice segfaults in the config parser when
feeding haproxy with some horribly fuzzed invalid configurations. To make
it clear, it happens only when haproxy *fails* to start due to an error.
But it's not a reason for failing the dirty way. Every time it was a
problem in smp_resolve_args() which is used to resolve acl args.

The root cause of the issue is that there are certain types of errors
where it's very tricky to unroll what has been started (eg: add multiple
keywords to a list then you have to remove them and exactly them, taking
care not to free a shared memory area if at least one remains because this
one will be freed later), etc.

The first bug was a use-after-free causing all sorts of random things like
memory corruption or an infinite loop when trying to exit, which can be a
problem for those aggregating configs from customers. The second one was
a "more conventional" null dereference. I could fix both of them but I
realized that the deeper reason is that we try to perform all the cross-
reference checks after we've met such errors, which doesn't make sense
and even causes some absurd errors to be reported. So I wrote the simple
patch below for 1.8 and I think it would make sense to backport this into
earlier versions to make everyone's life easier. It would also make the
parser much more robust against such issues in the future.

Now the question is : this is not a bug fix but a small improvement which
may have some impact on those being used to read error reports, so does
anyone have any objection against this being backported (if so, regarding
a specific version maybe) ?


I also believe that it should be backported at least to 1.7 version[1].
It makes the output more clear and squeaks only the relevant bad config lines.


+1


Cheers,
Pavlos


[1] IMHO: Users of 1.5 version should upgrade to 1.7, I don't see
any valid reason to stay on 1.5. From my personal experience I can tell
that 1.7 version is a rock solid release.




Fix building haproxy-1.7.5 with LibreSSL

2017-04-19 Thread Bernard Spil

Hi,

haproxy 1.7.5 fails to build with LibreSSL 2.5.3.

Like OpenSSL, LibreSSL is making structs opaque. Direct access to the
members thus leads to build failures. This has been addressed by OpenBSD
for 1.6, see
cvsweb.openbsd.org/cgi-bin/cvsweb/ports/net/haproxy/patches/patch-src_ssl_sock_c
.

In making the structs opaque, OpenSSL (and LibreSSL) must make sure they
provide getters and setters for struct members that should be
accessible. OpenSSL has done that in 1.1. Haproxy started
using/emulating some of these new methods in 1.7.5, but the
implementation is not complete. This causes build failures with
LibreSSL.
The relevant commit in OpenSSL 1.1 is
https://github.com/openssl/openssl/commit/fddfc0afc84728f8a5140685163e66ce6471742d
The haproxy code adds the defines for the methods yet fails to also add
the defines for the constants SSL_CTRL_GET_TLSEXT_STATUS_REQ_CB and
SSL_CTRL_GET_TLSEXT_STATUS_REQ_CB_ARG.

The patch adds fixes for the methods added in 1.7 as well as the OpenBSD
fixes.

--- src/ssl_sock.c.orig	2017-04-03 08:28:32 UTC
+++ src/ssl_sock.c
@@ -794,8 +795,11 @@ static int ssl_sock_load_ocsp(SSL_CTX *c
 	ocsp = NULL;
 
 #ifndef SSL_CTX_get_tlsext_status_cb
-# define SSL_CTX_get_tlsext_status_cb(ctx, cb) \
-	*cb = (void (*) (void))ctx->tlsext_status_cb;
+#ifndef SSL_CTRL_GET_TLSEXT_STATUS_REQ_CB
+#define SSL_CTRL_GET_TLSEXT_STATUS_REQ_CB 128
+#endif
+#define SSL_CTX_get_tlsext_status_cb(ctx, cb) \
+	*cb = SSL_CTX_ctrl(ctx,SSL_CTRL_GET_TLSEXT_STATUS_REQ_CB,0, (void (**)(void))cb)
 #endif
 	SSL_CTX_get_tlsext_status_cb(ctx, &callback);
 
@@ -823,7 +827,10 @@ static int ssl_sock_load_ocsp(SSL_CTX *c
 	int key_type;
 	EVP_PKEY *pkey;
 
-#ifdef SSL_CTX_get_tlsext_status_arg
+#if defined(SSL_CTX_get_tlsext_status_arg) || defined(LIBRESSL_VERSION_NUMBER)
+#ifndef SSL_CTRL_GET_TLSEXT_STATUS_REQ_CB_ARG
+#define SSL_CTRL_GET_TLSEXT_STATUS_REQ_CB_ARG 129
+#endif
 	SSL_CTX_ctrl(ctx, SSL_CTRL_GET_TLSEXT_STATUS_REQ_CB_ARG, 0, &cb_arg);
 #else
 	cb_arg = ctx->tlsext_status_arg;
@@ -3539,7 +3546,7 @@ int ssl_sock_handshake(struct connection
 			OSSL_HANDSHAKE_STATE state = SSL_get_state((SSL *)conn->xprt_ctx);
 			empty_handshake = state == TLS_ST_BEFORE;
 #else
-			empty_handshake = !((SSL *)conn->xprt_ctx)->packet_length;
+			empty_handshake = SSL_state((SSL *)conn->xprt_ctx) == SSL_ST_BEFORE;
 #endif
 
 			if (empty_handshake) {
@@ -3617,7 +3624,7 @@ int ssl_sock_handshake(struct connection
 		state = SSL_get_state((SSL *)conn->xprt_ctx);
 		empty_handshake = state == TLS_ST_BEFORE;
 #else
-		empty_handshake = !((SSL *)conn->xprt_ctx)->packet_length;
+		empty_handshake = SSL_state((SSL *)conn->xprt_ctx) == SSL_ST_BEFORE;
 #endif
 		if (empty_handshake) {
 			if (!errno) {



Re: ModSecurity: First integration patches

2017-04-19 Thread Aleksandar Lazic



On 19-04-2017 11:24, Thierry Fournier wrote:

On 19 Apr 2017, at 09:16, Aleksandar Lazic  wrote:


On 19-04-2017 05:51, Willy Tarreau wrote:

On Tue, Apr 18, 2017 at 11:55:46PM +0200, Aleksandar Lazic wrote:

Why not reuse the upcoming http/2 format.
HTTP/2 is *easy* to parse and the implementations of servers are growing?

Are you kidding ? I mean you want everyone to have to implement HPACK etc ?


Well there are currently a lot of http/2 servers out there, but I
understand your concerns.


I know this is still on the todo list but I think this is worth to go
instead to add another protocol.
Or we can take a look into grpc, which is on top of http/2 with protobuffer.
http://www.grpc.io/
As I take more time to think in this direction why not add a grpc filter
like the spoa filter?
Opinions?

Code generators are always a pain to deal with. It reminds me the old days
when I was using rpcgen. And grpc doesn't provide a C implementation so that
kills it :-)


Yeah I have also seen that they prefer c++, c#, go and other languages
but do not offer a c lib.
This is strange just because the core is written in c, but I don't
need to understand everything ;-)


Thierry told me he's going to implement a simple TLV-like list which will
be much easier to implement and easy to code on both sides.


@Thierry:
Ah something like netstrings ?
https://cr.yp.to/proto/netstrings.txt


Hi, thanks for the link. I do not like this encoding because it is not easy to
parse. I must read the ascii representation of numbers and convert it. I prefer
a simple binary format, with length (encoded in the same way as the SPOE
protocol) and value.


Thanks for clarification.


Hi, the main goal is providing:
 - a format easy to decode
 - not adding a lot of dependencies to haproxy


That's very polite.


So, after brainstorming, I propose:

 - remove all data already accessible via existing sample-fetches from the new
   sample-fetch introduced by my previous patch. These data are:
   method, path, query string, http-version and the body. All of these data are
   available in some sample fetches and it will be great to reuse them. Note that
   the only data we keep are the headers.

 - For the headers I propose two formats: The first format is for general use. It
   is simply the header block in the same format that we received it (with the
   final \r\n for detecting a truncated header block).

 - For headers I also propose a binary format like this:
   
   [
  
  
   ]…
   This format is easy to decode because each length is encoded with the SPOE
   numeric values encoding method, so it is always embedded in the SPOE client.


The SPOE message description will become

   spoe-message check-request
  uniqueid args method path query req.ver req.hdrs_bin req.body_size req.body

uniqueid is a string which allows matching between the mod security logs and
 the haproxy logs. I propose the uniqueid, but this string can be another
 thing like a header containing a uniqueid previously generated.


req.hdrs_bin is the header block in binary format.

req.body_size is the advertised size of the body and it is used to determine
if the body is full or truncated.
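
As a rough sketch of the proposed encoding (this is not haproxy code; the
helper names encode_varint() and encode_header() below are made up for
illustration), an agent could serialize one header into such length-prefixed
pairs like this. The varint mirrors the SPOE/peers-style scheme (values below
240 fit in one byte, larger values continue in follow-up bytes); the
authoritative implementation is spoe_encode_varint() in include/proto/spoe.h.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* SPOE/peers-style variable-length integer: one byte for values below 240,
 * otherwise the low 4 bits plus continuation bytes. Returns 0 on success,
 * -1 if the buffer is too small. */
static int encode_varint(uint64_t i, unsigned char **buf, unsigned char *end)
{
	unsigned char *p = *buf;

	if (p >= end)
		return -1;
	if (i < 240) {
		*p++ = (unsigned char)i;
		*buf = p;
		return 0;
	}
	*p++ = (unsigned char)i | 240;
	i = (i - 240) >> 4;
	while (i >= 128) {
		if (p >= end)
			return -1;
		*p++ = (unsigned char)i | 128;
		i = (i - 128) >> 7;
	}
	if (p >= end)
		return -1;
	*p++ = (unsigned char)i;
	*buf = p;
	return 0;
}

/* Appends one <length><name><length><value> pair to the buffer. */
static int encode_header(const char *name, const char *value,
                         unsigned char **buf, unsigned char *end)
{
	size_t nlen = strlen(name), vlen = strlen(value);

	if (encode_varint(nlen, buf, end) < 0 || (size_t)(end - *buf) < nlen)
		return -1;
	memcpy(*buf, name, nlen);
	*buf += nlen;

	if (encode_varint(vlen, buf, end) < 0 || (size_t)(end - *buf) < vlen)
		return -1;
	memcpy(*buf, value, vlen);
	*buf += vlen;
	return 0;
}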


Looks good to me.

Regards
Aleks



Thierry


I personally
think that adding dependencies on tons of libs to implement simple stuff
is more of a problem than a help even for implementors, because when you
start to enter some dependency hell or version incompatibilities, it's a
real pain, especially when it was done only to find how to implement
just a small list. These things are useful if you want to implement heavy
inter-application communications but it's not really the case here.


I got you.


Cheers,
Willy


Regards
Aleks




Re: Restricting RPS to a service

2017-04-19 Thread Krishna Kumar (Engineering)
Hi Holger,

Thanks once again. However, I understand that session means the same as
connection. The rate-limit documentation confirms that: "When the frontend
reaches the specified number of new sessions per second, it stops accepting
*new connections* until the rate drops below the limit again".

As you say, maxconn defines the maximum number of established sessions,
but rate-limit tells how much of that can come per second. I really want
existing or new sessions to not exceed a particular RPS.

Regards,
- Krishna


On Wed, Apr 19, 2017 at 9:16 PM, Holger Just  wrote:

> Hi Krishna,
>
> Krishna Kumar (Engineering) wrote:
> > Thanks for your response. However, I want to restrict the requests
> > per second either at the frontend or backend, not session rate. I
> > may have only 10 connections from clients, but the backends can
> > handle only 100 RPS. How do I deny or delay requests when they
> > cross a limit?
>
> A "session" in this context is equivalent to a request-response pair. It
> is not connected to a session of your application which might be
> represented by a session cookie.
>
> As such, to restrict the number of requests per second for a frontend,
> rate-limit sessions is exactly the option you are looking for. It does
> not limit the concurrent number of sessions (as maxconn would do) but
> the rate with which the requests are coming in.
>
> If there are more requests per second than the configured value, haproxy
> waits until the session rate drops below the configured value. Once the
> socket's backlog is full, requests will not be accepted by the
> kernel anymore until it clears.
>
> If you want to deny requests with a custom http error instead, you could
> use a custom `http-request deny` rule and use the fe_sess_rate or
> be_sess_rate values.
>
> Cheers,
> Holger
>


Re: Restricting RPS to a service

2017-04-19 Thread Holger Just
Hi Krishna,

Krishna Kumar (Engineering) wrote:
> Thanks for your response. However, I want to restrict the requests
> per second either at the frontend or backend, not session rate. I
> may have only 10 connections from clients, but the backends can
> handle only 100 RPS. How do I deny or delay requests when they
> cross a limit?

A "session" in this context is equivalent to a request-response pair. It
is not connected to a session of your application which might be
represented by a session cookie.

As such, to restrict the number of requests per second for a frontend,
rate-limit sessions is exactly the option you are looking for. It does
not limit the concurrent number of sessions (as maxconn would do) but
the rate with which the requests are coming in.

If there are more requests per second than the configured value, haproxy
waits until the session rate drops below the configured value. Once the
socket's backlog is full, requests will not be accepted by the
kernel anymore until it clears.

If you want to deny requests with a custom http error instead, you could
use a custom `http-request deny` rule and use the fe_sess_rate or
be_sess_rate values.
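
As a minimal illustration (the frontend/backend names below are made up, and
the numbers are just examples), both approaches for capping a whole frontend
at roughly 1000 requests per second would look like:

frontend fe_web
    bind :80
    mode http
    # let haproxy pace accepts so the frontend stays below ~1000 new sessions/s
    rate-limit sessions 1000
    # or, alternatively, reject the excess outright once the measured rate is too high
    acl too_fast fe_sess_rate gt 1000
    http-request deny if too_fast
    default_backend be_app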

Cheers,
Holger



Re: Restricting RPS to a service

2017-04-19 Thread Holger Just
Hi Krishna,

Krishna Kumar (Engineering) wrote:
> What is the way to rate limit on the entire service, without caring 
> about which client is hitting it? Something like "All RPS should be <
> 1000/sec"?

You can set a rate limit per frontend [1] (in a frontend section):

rate-limit sessions 1000

or globally per process [2] (in the global section):

maxsessrate 1000

Cheers,
Holger

[1]
http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4.2-rate-limit%20sessions
[2]
http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#3.2-maxsessrate




Re: ModSecurity: First integration patches

2017-04-19 Thread thierry . fournier
Hi,

There is a new set of patches for the spoa/modsecurity contrib.

Thierry


On Wed, 19 Apr 2017 11:24:36 +0200
Thierry Fournier  wrote:

> 
> > On 19 Apr 2017, at 09:16, Aleksandar Lazic  wrote:
> > 
> > 
> > 
> > > On 19-04-2017 05:51, Willy Tarreau wrote:
> >> On Tue, Apr 18, 2017 at 11:55:46PM +0200, Aleksandar Lazic wrote:
> >>> Why not reuse the upcoming http/2 format.
> >>> HTTP/2 is *easy* to parse and the implementations of servers are growing?
> >> Are you kidding ? I mean you want everyone to have to implement HPACK etc ?
> > 
> > Well there are currently a lot of http/2 servers out there, but I 
> > understand your concerns.
> > 
> >>> I know this is still on the todo list but I think this is worth to go
> >>> instead to add another protocol.
> >>> Or we can take a look into grpc, which is on top of http/2 with 
> >>> protobuffer.
> >>> http://www.grpc.io/
> >>> As I take more time to think in this direction why not add a grpc filter
> >>> like the spoa filter?
> >>> Opinions?
> >> Code generators are always a pain to deal with. It reminds me the old days
> >> when I was using rpcgen. And grpc doesn't provide a C implementation so 
> >> that
> >> kills it :-)
> > 
> > Yeah I have also seen that they prefer c++, c#, go and other languages but 
> > not offer a c lib.
> > This is strange just because the core is written in c, but I don't need to 
> > understand everything ;-)
> > 
> >> Thierry told me he's going to implement a simple TLV-like list which will
> >> be much easier to implement and easy to code on both sides.
> > 
> > @Thierry:
> > Ah something like netstrings ?
> > https://cr.yp.to/proto/netstrings.txt
> 
> Hi thanks for the link. I do not like this encoding because it is not easy to
> parse. I must read the ascii representation of numbers and convert it. I prefer
> simple binary format, with length (encoding in the same way than the SPOE
> protocol) and value.
> 
> Hi, the main goal is providing:
>  - a format easy to decode
>  - not adding lot of dependencies to haproxy
> 
> So, after brainstorm, I propose:
> 
>  - remove all data already accessible via existing sample-fetches from the new
>sample-fetch introduced by my previous patch. These data are:
>method, path, query string, http-version and the body. All of these data 
> are
>available in some sample fetches and it will be great to reuse it. Note 
> that
>the only data which keep are the headers.
> 
>  - For the headers I propose two format: The first format is for general use. 
> It
>is simply the header block in the same format that we received (with the 
> final
>\r\n for detecting truncated header bloc)
> 
>  - For header I propose also a binary format like this:
>
>[
>   
>   
>]…
>This format is easy to decode because each length is encoded with the SPOE
>numeric values encoding method, so it is always embedded in SPOE client.
> 
> The SPOE message description will become
> 
>spoe-message check-request
>   uniqueid args method path query req.ver req.hdrs_bin req.body_size 
> req.body
> 
> uniqueid is a string which allow to do matching between the mod security logs 
> and
>  the haproxy logs. I propose the uniqueid, but this string can be 
> another
>  thing like a header containing a uniqueid previously generated.
> 
> req.hdrs_bin is the header block in binary format.
> 
> req.body_size is the advertised size of the body and it is used to determine 
> if the
>   body is full or truncated.
> 
> Thierry
> 
> >> I personally
> >> think that adding dependencies on tons of libs to implement simple stuff
> >> is more of a problem than an help even for implementors, because when you
> >> start to enter some dependency hell or version incompatibilities, it's a
> >> real pain, especially when it was done only to find how to implement
> >> just a small list. These things are useful if you want to implement heavy
> >> inter-application communications but it's not really the case here.
> > 
> > I got you.
> > 
> >> Cheers,
> >> Willy
> > 
> > Regards
> > Aleks
> 
> 
From 2759c1584f08f397ae11672fa11d260970033406 Mon Sep 17 00:00:00 2001
From: Thierry FOURNIER 
Date: Sun, 9 Apr 2017 05:41:27 +0200
Subject: [PATCH 1/5] BUG/MINOR: change header-declared function to static
 inline

When we include the header proto/spoe.h in other files in the same
project, the compiler complains that the symbol has multiple definitions:

   src/flt_spoe.o: In function `spoe_encode_varint':
   ~/git/haproxy/include/proto/spoe.h:45: multiple definition of `spoe_encode_varint'
   src/proto_http.o:~/git/haproxy/include/proto/spoe.h:45: first defined here
---
 include/proto/spoe.h |   16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/proto/spoe.h b/include/proto/spoe.h
index 06fb52d..da19db1 100644
--- a/include/proto/spoe.h
+++ b/include/proto/spoe.h
@@ -39,7 +39,7 @@
  *
  * On success, it returns the number of written bytes and

Re: Backport proposal, opinion needed

2017-04-19 Thread Willy Tarreau
Hi Pavlos,

On Wed, Apr 19, 2017 at 01:02:55PM +0200, Pavlos Parissis wrote:
> I also believe that it should be backported at least to 1.7 version[1].
> It makes the output more clear and squeaks only the relevant bad config lines.

Thanks for your feedback!

> [1] IMHO: Users of 1.5 version should upgrade to 1.7, I don't see
> any valid reason to stay on 1.5. From my personal experience I can tell
> that 1.7 version is a rock solid release.

It really depends on each user's level of expectation. Those willing to
"deploy and forget" will more easily go with 1.6 or 1.5 than 1.7 which
will still receive some important fixes for a while. However backporting
the above fix only to 1.7 adds another incentive for upgrading and at the
same time maintains the principle of least surprise for older versions so
that's probably a good tradeoff.

thanks!
Willy



Re: [RFC][PATCHES] seamless reload

2017-04-19 Thread Conrad Hoffmann


On 04/19/2017 11:22 AM, Olivier Houchard wrote:
[very long conversation snipped]
>> Joining this again a bit late, do you still want me to test it?
>> I would like to know if there are any performance impact, but I see that
>> Conrad Hoffmann has done all the hard work on this. So, we can conclude that 
>> we
>> don't expect any performance impact.
>>
>> Once again thanks a lot for your hard work on this.
>> @Conrad Hoffmann, thanks a lot for testing this and checking if there is any
>> performance impact.
>>
> 
> 
> Hi Pavlos,
> 
> More testing is always appreciated :)
> I don't expect any performance impact, but that wouldn't be the first time
> I say that and am proven wrong, though as you said with all the testing
> Conrad did, I'm fairly confident it should be OK.
> 
> Thanks !
> 
> Olivier

I also think more testing is always very welcome, especially as there are
so many different configurations that certain code paths might for example
never be executed with the configuration we are running here!

Cheers,
Conrad
-- 
Conrad Hoffmann
Traffic Engineer

SoundCloud Ltd. | Rheinsberger Str. 76/77, 10115 Berlin, Germany

Managing Director: Alexander Ljung | Incorporated in England & Wales
with Company No. 6343600 | Local Branch Office | AG Charlottenburg |
HRB 110657B



Re: Backport proposal, opinion needed

2017-04-19 Thread Pavlos Parissis
On 19/04/2017 12:13 μμ, Willy Tarreau wrote:
> Hi all,
> 
> Stephan (in Cc) reported me two nice segfaults in the config parser when
> feeding haproxy with some horribly fuzzed invalid configurations. To make
> it clear, it happens only when haproxy *fails* to start due to an error.
> But it's not a reason for failing the dirty way. Every time it was a
> problem in smp_resolve_args() which is used to resolve acl args.
> 
> The root cause of the issue is that there are certain types of errors
> where it's very tricky to unroll what has been started (eg: add multiple
> keywords to a list then you have to remove them and exactly them, taking
> care not to free a shared memory area if at least one remains because this
> one will be freed later), etc.
> 
> The first bug was a use-after-free causing all sorts of random things like
> memory corruption or an infinite loop when trying to exit, which can be a
> problem for those aggregating configs from customers. The second one was
> a "more conventional" null dereference. I could fix both of them but I
> realized that the deeper reason is that we try to perform all the cross-
> reference checks after we've met such errors, which doesn't make sense
> and even causes some absurd errors to be reported. So I wrote the simple
> patch below for 1.8 and I think it would make sense to backport this into
> earlier versions to make everyone's life easier. It would also make the
> parser much more robust against such issues in the future.
> 
> Now the question is : this is not a bug fix but a small improvement which
> may have some impact on those being used to read error reports, so does
> anyone have any objection against this being backported (if so, regarding
> a specific version maybe) ?
> 


I also believe that it should be backported at least to 1.7 version[1].
It makes the output more clear and squeaks only the relevant bad config lines.

Cheers,
Pavlos


[1] IMHO: Users of 1.5 version should upgrade to 1.7, I don't see
any valid reason to stay on 1.5. From my personal experience I can tell
that 1.7 version is a rock solid release.





Restricting RPS to a service

2017-04-19 Thread Krishna Kumar (Engineering)
Hi Willy, others,

I have seen documents that describe how to rate limit from a single client.
What is the way to rate limit on the entire service, without caring about
which client is hitting it? Something like "All RPS should be < 1000/sec"?

Thanks,
- Krishna


Re: [PATCH] Fix haproxy hangs on FreeBSD >= 11

2017-04-19 Thread Willy Tarreau
On Wed, Apr 19, 2017 at 11:52:22AM +0200, Olivier Houchard wrote:
> Hi guys,
> 
> Thanks to your help, we finally figured out what was happening on FreeBSD,
> and the attached patch should fix it.
> Problem was, haproxy relies on what is really undefined behavior in C, with
> signed integer overflows. gcc and earlier versions of clang behaved as we
> expected, but newer versions of clang do not, which is why it started to be
> a problem with FreeBSD 11 (and is probably on other BSDs using clang, and
> MacOS X).
> 
> Thanks again for giving us enough information to really figure that out, it
> would have been _tough_ otherwise :)

Thanks Olivier (and to all those giving enough feedback making it possible
to figure the real issue). I've now taken it and already backported to 1.7.
I'll backport to older branches as well since many people will progressively
get hit as they upgrade their compilers :-/

It's really sad that there is no way to get any single warning from Clang
when it purposely decides to "optimize" this check in a way that can be
disabled with specific options.
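
For illustration only (this is not haproxy's actual tick code), the class of
check involved looks roughly like the snippet below; without -fwrapv the
signed subtraction's overflow is undefined behaviour, so the compiler is
allowed to assume it never happens and may drop or rewrite the test:

#include <limits.h>
#include <stdio.h>

/* Illustrative only: a free-running millisecond counter compared through
 * signed subtraction, the pattern that relies on wraparound. */
static int tick_is_after(int a, int b)
{
	/* wraps when the counter crosses INT_MAX; well-defined only with
	 * -fwrapv, undefined behaviour otherwise */
	return a - b > 0;
}

int main(void)
{
	int before_wrap = INT_MAX - 10;
	/* counter value shortly after wrapping, built with unsigned arithmetic
	 * so the constant itself does not overflow (the conversion back to
	 * int is implementation-defined, not undefined) */
	int after_wrap = (int)((unsigned int)INT_MAX + 100u);

	/* expected to print 1 when built with -fwrapv */
	printf("%d\n", tick_is_after(after_wrap, before_wrap));
	return 0;
}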

Willy



Backport proposal, opinion needed

2017-04-19 Thread Willy Tarreau
Hi all,

Stephan (in Cc) reported me two nice segfaults in the config parser when
feeding haproxy with some horribly fuzzed invalid configurations. To make
it clear, it happens only when haproxy *fails* to start due to an error.
But it's not a reason for failing the dirty way. Every time it was a
problem in smp_resolve_args() which is used to resolve acl args.

The root cause of the issue is that there are certain types of errors
where it's very tricky to unroll what has been started (eg: add multiple
keywords to a list then you have to remove them and exactly them, taking
care not to free a shared memory area if at least one remains because this
one will be freed later), etc.

The first bug was a use-after-free causing all sorts of random things like
memory corruption or an infinite loop when trying to exit, which can be a
problem for those aggregating configs from customers. The second one was
a "more conventional" null dereference. I could fix both of them but I
realized that the deeper reason is that we try to perform all the cross-
reference checks after we've met such errors, which doesn't make sense
and even causes some absurd errors to be reported. So I wrote the simple
patch below for 1.8 and I think it would make sense to backport this into
earlier versions to make everyone's life easier. It would also make the
parser much more robust against such issues in the future.

Now the question is : this is not a bug fix but a small improvement which
may have some impact on those being used to read error reports, so does
anyone have any objection against this being backported (if so, regarding
a specific version maybe) ?

For reference, in the patch there's an example of config which produces
stupid errors right now, with their output and the same output after the
patch.

Please just let me know, I'd personally prefer to backport it.

Thanks,
Willy

--
From b83dc3d2ef5ffa882aed926ee4d6a82bd94024f0 Mon Sep 17 00:00:00 2001
From: Willy Tarreau 
Date: Wed, 19 Apr 2017 11:24:07 +0200
Subject: MEDIUM: config: don't check config validity when there are fatal
 errors

Overall we do have an issue with the severity of a number of errors. Most
fatal errors are reported with ERR_FATAL (which prevents startup) and not
ERR_ABORT (which stops parsing ASAP), but check_config_validity() is still
called on ERR_FATAL, and will most of the time report bogus errors. This
is what caused smp_resolve_args() to be called on a number of unparsable
ACLs, and it also is what reports incorrect ordering or unresolvable
section names when certain entries could not be properly parsed.

This patch stops this domino effect by simply aborting before trying to
further check and resolve the configuration when it's already known that
there are fatal errors.

A concrete example comes from this config :

  userlist users :
  user foo insecure-password bar

  listen foo
  bind :1234
  mode htttp
  timeout client 10S
  timeout server 10s
  timeout connect 10s
  stats uri /stats
  stats http-request auth unless { http_auth(users) }
  http-request redirect location /index.html if { path / }

It contains a colon after the userlist name, a typo in the client timeout value,
another one in "mode http" which cause some other configuration elements not to
be properly handled.

Previously it would confusingly report :

  [ALERT] 108/114851 (20224) : parsing [err-report.cfg:1] : 'userlist' cannot handle unexpected argument ':'.
  [ALERT] 108/114851 (20224) : parsing [err-report.cfg:6] : unknown proxy mode 'htttp'.
  [ALERT] 108/114851 (20224) : parsing [err-report.cfg:7] : unexpected character 'S' in 'timeout client'
  [ALERT] 108/114851 (20224) : Error(s) found in configuration file : err-report.cfg
  [ALERT] 108/114851 (20224) : parsing [err-report.cfg:11] : unable to find userlist 'users' referenced in arg 1 of ACL keyword 'http_auth' in proxy 'foo'.
  [WARNING] 108/114851 (20224) : config : missing timeouts for proxy 'foo'.
 | While not properly invalid, you will certainly encounter various problems
 | with such a configuration. To fix this, please ensure that all following
 | timeouts are set to a non-zero value: 'client', 'connect', 'server'.
  [WARNING] 108/114851 (20224) : config : 'stats' statement ignored for proxy 'foo' as it requires HTTP mode.
  [WARNING] 108/114851 (20224) : config : 'http-request' rules ignored for proxy 'foo' as they require HTTP mode.
  [ALERT] 108/114851 (20224) : Fatal errors found in configuration.

The "requires HTTP mode" errors are just pollution resulting from the
improper spelling of this mode earlier. The unresolved reference to the
userlist is caused by the extra colon on the declaration, and the warning
regarding the missing timeouts is caused by the wrong character.

Now it more accurately reports :

  [ALERT] 108/114900 (20225) : parsing [err-report.cfg:1] : 'userlist' cannot handle unexpected argument ':'.
  [ALERT] 108/114900 (20225) : parsing [err-re

[PATCH] Fix haproxy hangs on FreeBSD >= 11

2017-04-19 Thread Olivier Houchard
Hi guys,

Thanks to your help, we finally figured out what was happening on FreeBSD,
and the attached patch should fix it.
Problem was, haproxy relies on what is really undefined behavior in C, with
signed integer overflows. gcc and earlier versions of clang behaved as we
expected, but newer versions of clang do not, which is why it started to be
a problem with FreeBSD 11 (and is probably on other BSDs using clang, and
MacOS X).

Thanks again for giving us enough information to really figure that out, it
would have been _tough_ otherwise :)

Regards,

Olivier
From 163be439a8bc6e5aa1cf3fea0f086d518ddad0a9 Mon Sep 17 00:00:00 2001
From: Olivier Houchard 
Date: Wed, 19 Apr 2017 11:34:10 +0200
Subject: [PATCH] BUG/MAJOR: Use -fwrapv.

Haproxy relies on signed integer wraparound on overflow, however this is
really an undefined behavior, so the C compiler is allowed to do whatever
it wants, and clang does exactly that, and that causes problems when the
timer goes from <= INT_MAX to > INT_MAX, and explains the various hangs
reported on FreeBSD every 49.7 days. To make sure we get the intended
behavior, use -fwrapv for now. A proper fix is to switch everything to
unsigned, and it will happen later, but this is simpler, and more likely to
be backported to the stable branches.
Many thanks to David King, Mark S, Dave Cottlehuber, Slawa Olhovchenkov,
Piotr Pawel Stefaniak, and any other I may have forgotten for reporting that
 and investigating.
---
 Makefile | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index a87e1e2..63db432 100644
--- a/Makefile
+++ b/Makefile
@@ -131,7 +131,10 @@ DEBUG_CFLAGS = -g
 #### Compiler-specific flags that may be used to disable some negative over-
 # optimization or to silence some warnings. -fno-strict-aliasing is needed with
 # gcc >= 4.4.
-SPEC_CFLAGS = -fno-strict-aliasing -Wdeclaration-after-statement
+# We rely on signed integer wraparound on overflow, however clang think it
+# can do whatever it wants since it's an undefined behavior, so use -fwrapv
+# to be sure we get the intended behavior.
+SPEC_CFLAGS = -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv
 
 #### Memory usage tuning
 # If small memory footprint is required, you can reduce the buffer size. There
-- 
2.9.3



Re: ModSecurity: First integration patches

2017-04-19 Thread Thierry Fournier

> On 19 Apr 2017, at 09:16, Aleksandar Lazic  wrote:
> 
> 
> 
> On 19-04-2017 05:51, Willy Tarreau wrote:
>> On Tue, Apr 18, 2017 at 11:55:46PM +0200, Aleksandar Lazic wrote:
>>> Why not reuse the upcoming http/2 format.
>>> HTTP/2 is *easy* to parse and the implementations of servers are growing?
>> Are you kidding ? I mean you want everyone to have to implement HPACK etc ?
> 
> Well there are currently a lot of http/2 servers out there, but I understand 
> your concerns.
> 
>>> I know this is still on the todo list but I think this is worth to go
>>> instead to add another protocol.
>>> Or we can take a look into grpc, which is on top of http/2 with protobuffer.
>>> http://www.grpc.io/
>>> As I take more time to think in this direction why not add a grpc filter
>>> like the spoa filter?
>>> Opinions?
>> Code generators are always a pain to deal with. It reminds me the old days
>> when I was using rpcgen. And grpc doesn't provide a C implementation so that
>> kills it :-)
> 
> Yeah I have also seen that they prefer c++, c#, go and other languages but 
> not offer a c lib.
> This is strange just because the core is written in c, but I don't need to 
> understand everything ;-)
> 
>> Thierry told me he's going to implement a simple TLV-like list which will
>> be much easier to implement and easy to code on both sides.
> 
> @Thierry:
> Ah something like netstrings ?
> https://cr.yp.to/proto/netstrings.txt

Hi, thanks for the link. I do not like this encoding because it is not easy to
parse. I must read the ascii representation of numbers and convert it. I prefer
a simple binary format, with length (encoded in the same way as the SPOE
protocol) and value.

Hi, the main goal is providing:
 - a format easy to decode
 - not adding a lot of dependencies to haproxy

So, after brainstorming, I propose:

 - remove all data already accessible via existing sample-fetches from the new
   sample-fetch introduced by my previous patch. These data are:
   method, path, query string, http-version and the body. All of these data are
   available in some sample fetches and it will be great to reuse them. Note that
   the only data we keep are the headers.

 - For the headers I propose two formats: The first format is for general use. It
   is simply the header block in the same format that we received it (with the
   final \r\n for detecting a truncated header block).

 - For headers I also propose a binary format like this:
   
   [
  
  
   ]…
   This format is easy to decode because each length is encoded with the SPOE
   numeric values encoding method, so it is always embedded in the SPOE client.

The SPOE message description will become

   spoe-message check-request
  uniqueid args method path query req.ver req.hdrs_bin req.body_size req.body

uniqueid is a string which allows matching between the mod security logs and
 the haproxy logs. I propose the uniqueid, but this string can be another
 thing like a header containing a uniqueid previously generated.

req.hdrs_bin is the header block in binary format.

req.body_size is the advertised size of the body and it is used to determine
if the body is full or truncated.

Thierry

>> I personally
>> think that adding dependencies on tons of libs to implement simple stuff
>> is more of a problem than an help even for implementors, because when you
>> start to enter some dependency hell or version incompatibilities, it's a
>> real pain, especially when it was done only to find how to implement
>> just a small list. These things are useful if you want to implement heavy
>> inter-application communications but it's not really the case here.
> 
> I got you.
> 
>> Cheers,
>> Willy
> 
> Regards
> Aleks




Re: [RFC][PATCHES] seamless reload

2017-04-19 Thread Olivier Houchard
On Wed, Apr 19, 2017 at 09:58:27AM +0200, Pavlos Parissis wrote:
> On 13/04/2017 06:18 μμ, Olivier Houchard wrote:
> > On Thu, Apr 13, 2017 at 06:00:59PM +0200, Conrad Hoffmann wrote:
> >> On 04/13/2017 05:10 PM, Olivier Houchard wrote:
> >>> On Thu, Apr 13, 2017 at 04:59:26PM +0200, Conrad Hoffmann wrote:
>  Sure, here it is ;P
> 
>  I now get a segfault (on reload):
> 
>  *** Error in `/usr/sbin/haproxy': corrupted double-linked list:
>  0x05b511e0 ***
> 
>  Here is the backtrace, retrieved from the core file:
> 
>  (gdb) bt
>  #0  0x7f4c92801067 in __GI_raise (sig=sig@entry=6) at
>  ../nptl/sysdeps/unix/sysv/linux/raise.c:56
>  #1  0x7f4c92802448 in __GI_abort () at abort.c:89
>  #2  0x7f4c9283f1b4 in __libc_message (do_abort=do_abort@entry=1,
>  fmt=fmt@entry=0x7f4c92934210 "*** Error in `%s': %s: 0x%s ***\n") at
>  ../sysdeps/posix/libc_fatal.c:175
>  #3  0x7f4c9284498e in malloc_printerr (action=1, str=0x7f4c929302ec
>  "corrupted double-linked list", ptr=) at malloc.c:4996
>  #4  0x7f4c92845923 in _int_free (av=0x7f4c92b71620 ,
>  p=, have_lock=0) at malloc.c:3996
>  #5  0x00485850 in tcp_find_compatible_fd (l=0xaaed20) at
>  src/proto_tcp.c:812
>  #6  tcp_bind_listener (listener=0xaaed20, errmsg=0x7ffccc774e10 "",
>  errlen=100) at src/proto_tcp.c:878
>  #7  0x00493ce1 in start_proxies (verbose=0) at src/proxy.c:793
>  #8  0x004091ec in main (argc=21, argv=0x7ffccc775168) at
>  src/haproxy.c:1942
> >>>
> >>> Ok, yet another stupid mistake, hopefully the attached patch fixes this :)
> >>>
> >>> Thanks !
> >>>
> >>> Olivier
> >>
> >>
> >> It does indeed! Not only does it work now, the result is impressive! Not a
> >> single dropped request even when aggressively reloading under substantial 
> >> load!
> >>
> >> You certainly gave me an unexpected early easter present here :)
> >>
> >> I will now head out, but I am planning on installing a 1.8 version with
> >> your patches on our canary pool (which receives a small amount of
> >> production traffic to test changes) after the holidays. I will happily test
> >> anything else that might be helpful for you. I will also set up a proper
> >> load test inside our data center then, but I expect no surprises there (my
> >> current tests were done over a VPN link, somewhat limiting the achievable
> >> throughput).
> >>
> >> Once more, thank you so much! This will greatly simplify much of our
> >> operations. If there is anything else we can help test, let me know :)
> > 
> > Pfew, at least :) Thanks a lot for your patience, and doing all that 
> > testing !
> > 
> > Olivier
> > 
> 
> 
> Joining this again a bit late, do you still want me to test it?
> I would like to know if there are any performance impact, but I see that
> Conrad Hoffmann has done all the hard work on this. So, we can conclude that 
> we
> don't expect any performance impact.
> 
> Once again thanks a lot for your hard work on this.
> @Conrad Hoffmann, thanks a lot for testing this and checking if there is any
> performance impact.
> 


Hi Pavlos,

More testing is always appreciated :)
I don't expect any performance impact, but that wouldn't be the first time
I say that and am proven wrong, though as you said with all the testing
Conrad did, I'm fairly confident it should be OK.

Thanks !

Olivier





Re: [RFC][PATCHES] seamless reload

2017-04-19 Thread Pavlos Parissis
On 13/04/2017 06:18 μμ, Olivier Houchard wrote:
> On Thu, Apr 13, 2017 at 06:00:59PM +0200, Conrad Hoffmann wrote:
>> On 04/13/2017 05:10 PM, Olivier Houchard wrote:
>>> On Thu, Apr 13, 2017 at 04:59:26PM +0200, Conrad Hoffmann wrote:
 Sure, here it is ;P

 I now get a segfault (on reload):

 *** Error in `/usr/sbin/haproxy': corrupted double-linked list:
 0x05b511e0 ***

 Here is the backtrace, retrieved from the core file:

 (gdb) bt
 #0  0x7f4c92801067 in __GI_raise (sig=sig@entry=6) at
 ../nptl/sysdeps/unix/sysv/linux/raise.c:56
 #1  0x7f4c92802448 in __GI_abort () at abort.c:89
 #2  0x7f4c9283f1b4 in __libc_message (do_abort=do_abort@entry=1,
 fmt=fmt@entry=0x7f4c92934210 "*** Error in `%s': %s: 0x%s ***\n") at
 ../sysdeps/posix/libc_fatal.c:175
 #3  0x7f4c9284498e in malloc_printerr (action=1, str=0x7f4c929302ec
 "corrupted double-linked list", ptr=) at malloc.c:4996
 #4  0x7f4c92845923 in _int_free (av=0x7f4c92b71620 ,
 p=, have_lock=0) at malloc.c:3996
 #5  0x00485850 in tcp_find_compatible_fd (l=0xaaed20) at
 src/proto_tcp.c:812
 #6  tcp_bind_listener (listener=0xaaed20, errmsg=0x7ffccc774e10 "",
 errlen=100) at src/proto_tcp.c:878
 #7  0x00493ce1 in start_proxies (verbose=0) at src/proxy.c:793
 #8  0x004091ec in main (argc=21, argv=0x7ffccc775168) at
 src/haproxy.c:1942
>>>
>>> Ok, yet another stupid mistake, hopefully the attached patch fixes this :)
>>>
>>> Thanks !
>>>
>>> Olivier
>>
>>
>> It does indeed! Not only does it work now, the result is impressive! Not a
>> single dropped request even when aggressively reloading under substantial 
>> load!
>>
>> You certainly gave me an unexpected early easter present here :)
>>
>> I will now head out, but I am planning on installing a 1.8 version with
>> your patches on our canary pool (which receives a small amount of
>> production traffic to test changes) after the holidays. I will happily test
>> anything else that might be helpful for you. I will also set up a proper
>> load test inside our data center then, but I expect no surprises there (my
>> current tests were done over a VPN link, somewhat limiting the achievable
>> throughput).
>>
>> Once more, thank you so much! This will greatly simplify much of our
>> operations. If there is anything else we can help test, let me know :)
> 
> Pfew, at least :) Thanks a lot for your patience, and doing all that testing !
> 
> Olivier
> 


Joining this again a bit late, do you still want me to test it?
I would like to know if there are any performance impact, but I see that
Conrad Hoffmann has done all the hard work on this. So, we can conclude that we
don't expect any performance impact.

Once again thanks a lot for your hard work on this.
@Conrad Hoffmann, thanks a lot for testing this and checking if there is any
performance impact.

Cheers,
Pavlos





Re: ModSecurity: First integration patches

2017-04-19 Thread Aleksandar Lazic



On 19-04-2017 05:51, Willy Tarreau wrote:

On Tue, Apr 18, 2017 at 11:55:46PM +0200, Aleksandar Lazic wrote:

Why not reuse the upcoming http/2 format.
HTTP/2 is *easy* to parse and the implementations of servers are growing?


Are you kidding ? I mean you want everyone to have to implement HPACK etc ?


Well there are currently a lot of http/2 servers out there, but I
understand your concerns.


I know this is still on the todo list but I think this is worth to go
instead to add another protocol.

Or we can take a look into grpc, which is on top of http/2 with protobuffer.

http://www.grpc.io/

As I take more time to think in this direction why not add a grpc filter
like the spoa filter?
Opinions?


Code generators are always a pain to deal with. It reminds me the old days
when I was using rpcgen. And grpc doesn't provide a C implementation so that
kills it :-)


Yeah I have also seen that they prefer c++, c#, go and other languages
but do not offer a c lib.
This is strange just because the core is written in c, but I don't need
to understand everything ;-)


Thierry told me he's going to implement a simple TLV-like list which will
be much easier to implement and easy to code on both sides.


@Thierry:
Ah something like netstrings ?
https://cr.yp.to/proto/netstrings.txt


I personally
think that adding dependencies on tons of libs to implement simple stuff
is more of a problem than a help even for implementors, because when you
start to enter some dependency hell or version incompatibilities, it's a
real pain, especially when it was done only to find how to implement
just a small list. These things are useful if you want to implement heavy
inter-application communications but it's not really the case here.


I got you.


Cheers,
Willy


Regards
Aleks