Re: Deny with 413 request too large

2017-05-26 Thread Willy Tarreau

On Mon, May 22, 2017 at 07:08:13PM -0300, Joao Morais wrote:
> 
> > On May 17, 2017, at 19:34, Bryan Talbot 
> > wrote:
> > 
> > 
> >> On May 15, 2017, at 6:35 PM, Joao Morais 
> >> wrote:
> >> 
> >>   errorfile 413 /usr/local/etc/haproxy/errors/413.http
> >>   http-request deny deny_status 413 if { req.body_size gt 10485760 }
> >> 
> >> ... HAProxy complains with:
> >> 
> >>   [WARNING] 135/001448 (27) : parsing [/etc/haproxy/haproxy.cfg:15] : 
> >> status code 413 not handled by 'errorfile', error customization will be 
> >> ignored.
> >>   [WARNING] 135/001448 (27) : parsing [/etc/haproxy/haproxy.cfg:89] : 
> >> status code 413 not handled, using default code 403.
> >> 
> >> How should I configure HAProxy in order to deny with 413?
> > 
> > 
> > You've already found it. AFAIK, that's the only way.
> 
> 
> =(
> 
> In my understanding I should only use a 400badreq.http-like message on an
> errorfile 400 config line; otherwise, if HAProxy needs to issue a 400 status
> code, my 413 status code would be issued instead.
> 
> Is this a valid feature request, or are there technical reasons why it has
> been done this way?

Bryan has already listed the other solution (use_backend). In general
we'd all like to have a "return" directive to return some content, but
when you start to enumerate all the requirements, there are so many
different combinations that no work was ever started :-/
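For the record, the use_backend workaround mentioned above can be sketched roughly like this (frontend/backend names and paths are invented for illustration; it relies on errorfile contents being served verbatim, so the file mapped to 503 can contain a raw 413 response):

```
# sketch: route oversized requests to a dummy backend whose 503 error
# page is a file containing a full "HTTP/1.0 413 ..." raw response
frontend ft_main
    bind :80
    option http-buffer-request
    use_backend bk_413 if { req.body_size gt 10485760 }
    default_backend bk_app

backend bk_413
    # no server lines: every request hits the 503 error page below
    errorfile 503 /usr/local/etc/haproxy/errors/413.http
```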

Willy



Re: [PATCH] MEDIUM: ssl: disable SSLv3 per default for bind

2017-05-26 Thread Willy Tarreau
On Thu, May 25, 2017 at 06:36:32PM +0200, Lukas Tribus wrote:
> Hello,
> 
> 
> Am 23.05.2017 um 17:17 schrieb Emmanuel Hocdet:
> > Hi,
> >
> > I think it's time to disable SSLv3 on the bind line by default.
> > All configurations should have no-sslv3 (or use an SSL lib built without SSLv3).
> > SSLv3 can be re-enabled with "ssl-min-ver SSLv3".
> >
> > What do you think?
> 
> +1 agreed, no need to enable SSLv3 by default (for 1.8-dev).

+1 as well

Willy



Re: haproxy 1.7.5 segfault on cookie/header parsing.

2017-05-26 Thread Willy Tarreau
Hello Jean,

On Fri, May 26, 2017 at 01:00:17PM +, Jean LUBATTI wrote:
> Hello,
> 
> When using a vulnerability scanner on haproxy 1.7.5, we discovered a scenario
> under which the haproxy segfaults.
> 
> Unfortunately, this is a "bundled" scanner with no access to the exact
> requests, and haproxy terminates the SSL for https, so it's not easy to
> capture the actual traffic, but we managed to gather a core of haproxy.

You could run haproxy with "-d"; it will dump to stdout/stderr each header
line it receives. I even think we could get a complete capture of the whole
request like this:

   tcp-request content capture req.hdrs_bin len 2000

It will then appear in the core in s->req_cap.

> This happens only when using the cookie SERVERID for session stickiness:

We've had similar issues a very long time ago, I hoped they were fixed :-/

> The backtrace is as follows:
> 
> I believe the scanner injects a screwed-up header and/or cookie. The result 
> appears to be an incorrect memmove of the last header line:
> 
> (gdb) bt full
> #0  _wordcopy_fwd_dest_aligned (dstp=14237664, srcp=14237696, 
> len=2305843009213653720) at wordcopy.c:196
> a0 = <value optimized out>
> a1 = <value optimized out>
> a2 = 0
> a3 = 0
> sh_1 = 40
> sh_2 = 24
> #1  0x003727a838be in memmove (dest=0xd4575d, src=<value optimized out>, 
> len=18446744073709551439) at memmove.c:73
> dstp = <value optimized out>
> srcp = <value optimized out>
> #2  0x00411217 in buffer_replace2 (b=0xd456b0, pos=0xd4575d 
> "Accept-Encoding: gzip\r\n\r\n.32.31\r\nConnection: close\r\n\r\nre\r\n\r\n", 
> end=<value optimized out>, str=0x0, len=0) at src/buffer.c:90
> delta = -21
> #3  0x0045de02 in manage_client_side_cookies (s=0xcfa800, 
> req=0xcfa810) at src/proto_http.c:7976
> delta = <value optimized out>
> cur_hdr = 0xcfac6c
> val = <value optimized out>
> txn = <value optimized out>
> sess = 0xc1c930
> preserve_hdr = 0
> cur_idx = 3
> old_idx = 2
> hdr_beg = 0xd4575d "Accept-Encoding: 
> gzip\r\n\r\n.32.31\r\nConnection: close\r\n\r\nre\r\n\r\n"
> hdr_end = 0xd45770 "ip\r\n\r\n.32.31\r\nConnection: 
> close\r\n\r\nre\r\n\r\n"
> hdr_next = 0xd45772 "\r\n\r\n.32.31\r\nConnection: 
> close\r\n\r\nre\r\n\r\n"
> del_from = 0xd45763 "-Encoding: gzip\r\n\r\n.32.31\r\nConnection: 
> close\r\n\r\nre\r\n\r\n"
> prev = <value optimized out>
> att_beg = <value optimized out>
> att_end = <value optimized out>
> equal = <value optimized out>
> val_beg = <value optimized out>
> val_end = <value optimized out>
> next = <value optimized out>
> #4  0x00460a86 in http_process_req_common (s=0xcfa800, req=0xcfa810, 
> an_bit=256, px=0x86a650) at src/proto_http.c:4474
> sess = 0xc1c930
> txn = 0xcfab50
> msg = 0xcfabb0
> rule = <value optimized out>
> wl = <value optimized out>
> verdict = <value optimized out>
> deny_status = 2
> #5  0x00486d0e in process_stream (t=0xb8fa70) at src/stream.c:1798
> max_loops = 199
> ana_list = 2304
> ana_back = 2304
> flags = 2
> srv = <value optimized out>
> s = 0xcfa800
> sess = 0xc1c930
> rqf_last = <value optimized out>
> rpf_last = 2147745792
> rq_prod_last = <value optimized out>
> rq_cons_last = <value optimized out>
> rp_cons_last = 7
> rp_prod_last = 0
> req_ana_back = <value optimized out>
> req = 0xcfa810
> res = 0xcfa850
> si_f = 0xcfaa38
> si_b = 0xcfaa60
> #6  0x00415ac0 in process_runnable_tasks () at src/task.c:238
> t = <value optimized out>
> max_processed = <value optimized out>
> #7  0x00407028 in run_poll_loop () at src/haproxy.c:1724
> next = <value optimized out>
> #8  0x0040a308 in main (argc=<value optimized out>, argv=<value 
> optimized out>) at src/haproxy.c:2105
> err = <value optimized out>
> retry = <value optimized out>
> limit = {rlim_cur = 4091, rlim_max = 4091}
> errmsg = 
> "\000\347K\000\000\000\000\000\b\300n\000\000\000\000\000\020\340\377\377\377\177\000\000\000\300n\000\000\000\000\000\006\000\000\000\000\000\000\000H\341\377\377\377\177\000\000\200\341\377\377\377\177\000\000XpA\000\000\000\000\000\000\300n\000\000\000\000\000\226\070K\000\000\000\000\000\340\070\370/7\000\000\000\340\067K\000\000\000\000\000\000\000\000"
> 
> 
> Basically, the memmove in #1 is called with len=18446744073709551439 from
> buffer_replace2, which is a negative value cast to an unsigned type.
> 
> I am not sure exactly in which case this is possible (last line of the header
> incorrect or something), but bi_end(b) is less than end, so the length passed
> as the unsigned size_t expected by memmove is incorrect.

For sure there's a bug there. However I'd be very interested in understanding
in which case it can happen, as it will reveal the root cause of this bug.
We know that this part is tricky and I'd rather be sure we don't miss a
single case.

> I recompiled haproxy with the patch below and now it survives the
> vulnerability scanner, but it might not be in the proper place in the code
> (maybe the logic fault is better addressed higher up, in
> manage_client_side_cookies... ):

This obviously confirms that this is where it dies, but it doesn't explain
what the problem really is. Would you happen to have an exploitable core
with the associated executable? I 

Re: OpenSSL engine and async support

2017-05-26 Thread Willy Tarreau
Hi Emeric, Grant,

patch set now merged! Thank you both for this great work!

Willy



Re: [RFC][PATCHES] seamless reload

2017-05-26 Thread Willy Tarreau
Hi William,

On Sat, May 27, 2017 at 12:08:38AM +0200, William Lallemand wrote:
> The attached patches provide the "expose-fd listeners" stats socket option and
> remove the "no-unused-socket" global option.
> 
> It behaves exactly as Willy explained above, minus the master process :-)
> 
> ps: Master Worker patches are coming soon!

Cool, thanks!

I'm applying them right now.
Willy



Re: [RFC][PATCHES] seamless reload

2017-05-26 Thread William Lallemand
Hi guys,

On Fri, May 12, 2017 at 04:26:01PM +0200, Willy Tarreau wrote:
> In fact William is currently working on the master-worker model to get rid
> of the systemd-wrapper and found some corner cases between this and your
> patchset. Nothing particularly difficult, just the fact that he'll need
> to pass the path to the previous socket to the new processes during reloads.
> 
> During this investigation it was found that we'd need to be able to say
> that a process possibly has no stats socket and that the next one will not
> be able to retrieve the FDs. Such information cannot be passed from the
> command line since it's a consequence of the config parsing. Thus we thought
> it would make sense to have a per-socket option to say whether or not it
> would be usable for offering the listening file descriptors, just like we
> currently have an administrative level on them (I even seem to remember
> that Olivier first asked if we wouldn't need to do this). And suddenly a
> few benefits appear when doing this :
>   - security freaks not willing to expose FDs over the socket would simply
> not enable them ;
> 
>   - we could restrict the number of processes susceptible of exposing the
> FDs simply by playing with the "process" directive on the socket ; that
> could also save some system-wide FDs ;
> 
>   - the master process could reliably find the socket's path in the conf
> (the first one with this new directive enabled), even if it's changed
> between reloads ;
> 
>   - in the default case (no specific option) we wouldn't change the existing
> behaviour so it would not make existing reloads worse.
> 

The attached patches provide the "expose-fd listeners" stats socket option and
remove the "no-unused-socket" global option.

It behaves exactly as Willy explained above, minus the master process :-)

ps: Master Worker patches are coming soon!
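A minimal usage sketch of the new option (the socket path, mode, and reload invocation below are illustrative, not taken from the patches):

```
global
    # only this socket is allowed to hand its listening FDs over to the
    # next process during a reload
    stats socket /var/run/haproxy.sock mode 600 level admin expose-fd listeners
```

A reloading process would then fetch the FDs from the old one with something like `haproxy -f haproxy.cfg -x /var/run/haproxy.sock -sf $(cat haproxy.pid)`.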

-- 
William Lallemand

>From 5567d977f722e862c6ba56d65118e094ac28735c Mon Sep 17 00:00:00 2001
From: William Lallemand 
Date: Wed, 24 May 2017 00:57:40 +0200
Subject: [PATCH 1/3] cli: add ACCESS_LVL_MASK to store the access level

The current level variable uses only 2 bits to store the 3 access
levels (user, oper and admin).

This patch adds a bitmask which allows the remaining bits to be used
for other purposes.
---
 include/types/global.h |  2 ++
 src/cli.c  | 32 ++--
 src/stats.c|  2 +-
 src/stick_table.c  |  4 ++--
 4 files changed, 23 insertions(+), 17 deletions(-)

diff --git a/include/types/global.h b/include/types/global.h
index 57b969d..cd5fda3 100644
--- a/include/types/global.h
+++ b/include/types/global.h
@@ -69,6 +69,8 @@
 #define ACCESS_LVL_USER 1
 #define ACCESS_LVL_OPER 2
 #define ACCESS_LVL_ADMIN3
+#define ACCESS_LVL_MASK 0x3
+
 
 /* SSL server verify mode */
 enum {
diff --git a/src/cli.c b/src/cli.c
index 55baee3..cdbaf2b 100644
--- a/src/cli.c
+++ b/src/cli.c
@@ -217,7 +217,8 @@ static int stats_parse_global(char **args, int section_type, struct proxy *curpx
 		}
 
 		bind_conf = bind_conf_alloc(global.stats_fe, file, line, args[2], xprt_get(XPRT_RAW));
-		bind_conf->level = ACCESS_LVL_OPER; /* default access level */
+		bind_conf->level &= ~ACCESS_LVL_MASK;
+		bind_conf->level |= ACCESS_LVL_OPER; /* default access level */
 
 		if (!str2listener(args[2], global.stats_fe, bind_conf, file, line, err)) {
 			memprintf(err, "parsing [%s:%d] : '%s %s' : %s\n",
@@ -383,7 +384,7 @@ int cli_has_level(struct appctx *appctx, int level)
 	struct stream_interface *si = appctx->owner;
 	struct stream *s = si_strm(si);
 
-	if (strm_li(s)->bind_conf->level < level) {
+	if ((strm_li(s)->bind_conf->level & ACCESS_LVL_MASK) < level) {
 		appctx->ctx.cli.msg = stats_permission_denied_msg;
 		appctx->st0 = CLI_ST_PRINT;
 		return 0;
@@ -790,12 +791,12 @@ static int cli_io_handler_show_cli_sock(struct appctx *appctx)
 		} else
 			continue;
 
-		if (bind_conf->level == ACCESS_LVL_USER)
-			chunk_appendf(&trash, "user ");
-		else if (bind_conf->level == ACCESS_LVL_OPER)
-			chunk_appendf(&trash, "operator ");
-		else if (bind_conf->level == ACCESS_LVL_ADMIN)
+		if ((bind_conf->level & ACCESS_LVL_MASK) == ACCESS_LVL_ADMIN)
 			chunk_appendf(&trash, "admin ");
+		else if ((bind_conf->level & ACCESS_LVL_MASK) == ACCESS_LVL_OPER)
+			chunk_appendf(&trash, "operator ");
+		else if ((bind_conf->level & ACCESS_LVL_MASK) == ACCESS_LVL_USER)
+			chunk_appendf(&trash, "user ");
 		else
 			chunk_appendf(&trash, "  ");
 
@@ -1000,13 +1001,16 @@ static int bind_parse_level(char **args, int cur_arg, struct proxy *px, struct b
 		return ERR_ALERT | ERR_FATAL;
 	}
 
-	if (!strcmp(args[cur_arg+1], "user"))
-		conf->level = ACCESS_LVL_USER;
-	else if (!strcmp(args[cur_arg+1], "operator"))
-		conf->level = ACCESS_LVL_OPER;
-	else if (!strcmp(args[cur_arg+1], "admin"))
-		conf->level = ACCESS_LVL_ADMIN;
-	else {
+	if (!strcmp(args[cur_arg+1], "user")) {
+	

Re: New feature proposal: Add support for decompressing proxied gzipped requests

2017-05-26 Thread Aleksandar Lazic
Hi Vasileios Kyrillidis,

Vasileios Kyrillidis wrote on Fri, 26 May 2017 16:17:48 +0200:

> Hi Aleksandar,
> 
> No patches yet. We wanted confirmation that it has a chance of
> getting merged before spending the time implementing. We have other
> solutions that we could pursue instead.
> 
> If merging looks promising we will start working on patches.

Well, I'm not a decision maker, but if the patches look good and match
the event-based system of haproxy, I think there are good chances of
them becoming part of haproxy.

Regards
Aleks

> Cheers,
> Vasilis
> 
> On 24/05/17 23:08, Aleksandar Lazic wrote:
> > Hi Vasileios Kyrillidis.
> >
> > Vasileios Kyrillidis wrote on Tue, 23 May 2017 18:23:07 +0200:
> >> Hello,
> >>
> >> We would like to add support for decompressing proxied gzipped
> >> requests (i.e. those with "Content-Encoding: gzip") to HAProxy.
> >> Would there be interest in such a feature? I.e., is it likely it
> >> could be accepted into the main repo? Is anyone else already
> >> working on such a feature?
> >>
> >> Currently HAProxy supports compressing outgoing responses where the
> >> client has indicated it supports receiving them. It does not yet
> >> support receiving requests that are compressed and decompressing
> >> them before sending them to a backend. This is not something
> >> commonly seen with browsers but is not uncommon when dealing with
> >> web service calls, which is our use case.
> >>
> >> It is envisioned that this feature would be off by default and
> >> require explicit enabling.  
> > Sounds interesting.
> > Do you have any patches available yet or are you willing to provide
> > one? 
> >> Regards,
> >> Vasilis  
> > Regards
> > Aleks  
> 



Re: New feature proposal: Add support for decompressing proxied gzipped requests

2017-05-26 Thread Vasileios Kyrillidis

Hi Aleksandar,

No patches yet. We wanted confirmation that it has a chance of getting 
merged before spending the time implementing. We have other solutions 
that we could pursue instead.


If merging looks promising we will start working on patches.

Cheers,
Vasilis

On 24/05/17 23:08, Aleksandar Lazic wrote:

Hi Vasileios Kyrillidis.

Vasileios Kyrillidis wrote on Tue, 23 May 2017 18:23:07 +0200:


Hello,

We would like to add support for decompressing proxied gzipped
requests (i.e. those with "Content-Encoding: gzip") to HAProxy. Would
there be interest in such a feature? I.e., is it likely it could be
accepted into the main repo? Is anyone else already working on such a
feature?

Currently HAProxy supports compressing outgoing responses where the
client has indicated it supports receiving them. It does not yet
support receiving requests that are compressed and decompressing them
before sending them to a backend. This is not something commonly seen
with browsers but is not uncommon when dealing with web service
calls, which is our use case.

It is envisioned that this feature would be off by default and
require explicit enabling.

Sounds interesting.
Do you have any patches available yet or are you willing to provide one?


Regards,
Vasilis

Regards
Aleks


--
Vasileios Kyrillidis | Software Developer | Sociomantic Labs
www.sociomantic.com



haproxy 1.7.5 segfault on cookie/header parsing.

2017-05-26 Thread Jean LUBATTI
Hello,

When using a vulnerability scanner on haproxy 1.7.5, we discovered a scenario 
under which the haproxy segfaults.

Unfortunately, this is a "bundled" scanner with no access to the exact 
requests, and haproxy terminates the SSL for https, so it's not easy to capture 
the actual traffic, but we managed to gather a core of haproxy.

This happens only when using the cookie SERVERID for session stickiness:

The backtrace is as follows:

I believe the scanner injects a screwed-up header and/or cookie. The result 
appears to be an incorrect memmove of the last header line:

(gdb) bt full
#0  _wordcopy_fwd_dest_aligned (dstp=14237664, srcp=14237696, 
len=2305843009213653720) at wordcopy.c:196
a0 = <value optimized out>
a1 = <value optimized out>
a2 = 0
a3 = 0
sh_1 = 40
sh_2 = 24
#1  0x003727a838be in memmove (dest=0xd4575d, src=<value optimized out>, 
len=18446744073709551439) at memmove.c:73
dstp = <value optimized out>
srcp = <value optimized out>
#2  0x00411217 in buffer_replace2 (b=0xd456b0, pos=0xd4575d 
"Accept-Encoding: gzip\r\n\r\n.32.31\r\nConnection: close\r\n\r\nre\r\n\r\n", 
end=<value optimized out>, str=0x0, len=0) at src/buffer.c:90
delta = -21
#3  0x0045de02 in manage_client_side_cookies (s=0xcfa800, req=0xcfa810) 
at src/proto_http.c:7976
delta = <value optimized out>
cur_hdr = 0xcfac6c
val = <value optimized out>
txn = <value optimized out>
sess = 0xc1c930
preserve_hdr = 0
cur_idx = 3
old_idx = 2
hdr_beg = 0xd4575d "Accept-Encoding: gzip\r\n\r\n.32.31\r\nConnection: 
close\r\n\r\nre\r\n\r\n"
hdr_end = 0xd45770 "ip\r\n\r\n.32.31\r\nConnection: 
close\r\n\r\nre\r\n\r\n"
hdr_next = 0xd45772 "\r\n\r\n.32.31\r\nConnection: 
close\r\n\r\nre\r\n\r\n"
del_from = 0xd45763 "-Encoding: gzip\r\n\r\n.32.31\r\nConnection: 
close\r\n\r\nre\r\n\r\n"
prev = <value optimized out>
att_beg = <value optimized out>
att_end = <value optimized out>
equal = <value optimized out>
val_beg = <value optimized out>
val_end = <value optimized out>
next = <value optimized out>
#4  0x00460a86 in http_process_req_common (s=0xcfa800, req=0xcfa810, 
an_bit=256, px=0x86a650) at src/proto_http.c:4474
sess = 0xc1c930
txn = 0xcfab50
msg = 0xcfabb0
rule = <value optimized out>
wl = <value optimized out>
verdict = <value optimized out>
deny_status = 2
#5  0x00486d0e in process_stream (t=0xb8fa70) at src/stream.c:1798
max_loops = 199
ana_list = 2304
ana_back = 2304
flags = 2
srv = <value optimized out>
s = 0xcfa800
sess = 0xc1c930
rqf_last = <value optimized out>
rpf_last = 2147745792
rq_prod_last = <value optimized out>
rq_cons_last = <value optimized out>
rp_cons_last = 7
rp_prod_last = 0
req_ana_back = <value optimized out>
req = 0xcfa810
res = 0xcfa850
si_f = 0xcfaa38
si_b = 0xcfaa60
#6  0x00415ac0 in process_runnable_tasks () at src/task.c:238
t = <value optimized out>
max_processed = <value optimized out>
#7  0x00407028 in run_poll_loop () at src/haproxy.c:1724
next = <value optimized out>
#8  0x0040a308 in main (argc=<value optimized out>, argv=<value optimized out>) at src/haproxy.c:2105
err = <value optimized out>
retry = <value optimized out>
limit = {rlim_cur = 4091, rlim_max = 4091}
errmsg = 
"\000\347K\000\000\000\000\000\b\300n\000\000\000\000\000\020\340\377\377\377\177\000\000\000\300n\000\000\000\000\000\006\000\000\000\000\000\000\000H\341\377\377\377\177\000\000\200\341\377\377\377\177\000\000XpA\000\000\000\000\000\000\300n\000\000\000\000\000\226\070K\000\000\000\000\000\340\070\370/7\000\000\000\340\067K\000\000\000\000\000\000\000\000"


Basically, the memmove in #1 is called with len=18446744073709551439 from 
buffer_replace2, which is a negative value cast to an unsigned type.

I am not sure exactly in which case this is possible (last line of the header 
incorrect or something), but bi_end(b) is less than end, so the length passed 
as the unsigned size_t expected by memmove is incorrect.

I recompiled haproxy with the patch below and now it survives the vulnerability 
scanner, but it might not be in the proper place in the code (maybe the logic 
fault is better addressed higher up, in manage_client_side_cookies... ):

diff -uNr haproxy-1.7.5/src/buffer.c haproxy-1.7.5p/src/buffer.c
--- haproxy-1.7.5/src/buffer.c  2017-04-03 10:28:32.0 +0200
+++ haproxy-1.7.5p/src/buffer.c 2017-05-26 13:04:58.225311000 +0200
@@ -87,7 +87,8 @@
return 0;  /* no space left before wrapping data */

/* first, protect the end of the buffer */
-   memmove(end + delta, end, bi_end(b) - end);
+   if ( bi_end(b) > end )
+   memmove(end + delta, end, bi_end(b) - end);

/* now, copy str over pos */
if (len)


the configuration is as follows:



frontend ft_X_443
    bind 185.139.245.111:443 ssl crt /etc/haproxy/ssl/X.pem ciphers AES256+EECDH:AES256+EDH
    bind-process 2 3 4
    mode http
    option http-buffer-request

    http-response set-header Strict-Transport-Security "max-age=1600; includeSubDomains; preload;"
    http-request del-header Origin

    stick-table type ip size 1m expire 1m store gpc0,conn_rate(1s)

    tcp-request connection track-sc1