Re: [PATCH] BUG/MEDIUM: threads: Fix the max/min calculation because of name clashes

2018-04-10 Thread Christopher Faulet

On 09/04/2018 at 11:52, Christopher Faulet wrote:

Hi,

This patch fixes a bug affecting HAProxy compiled with gcc < 4.7 (with
threads). It must be merged in 1.8.



Sorry, I sent the patch I used for HAProxy 1.8. Here is the right patch
for upstream. But good news for you Willy, it will be easier to
backport it to 1.8 now :)


--
Christopher Faulet
From 8a4f5982f4bb8daaf264938df9fed6f5458f0b33 Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Mon, 9 Apr 2018 08:45:43 +0200
Subject: [PATCH] BUG/MEDIUM: threads: Fix the max/min calculation because of
 name clashes
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

With gcc < 4.7, when HAProxy is built with threads, the macros
HA_ATOMIC_CAS/XCHG/STORE rely on the legacy '__sync' builtins. These macros
are slightly more complicated than the versions relying on the '__atomic'
builtins. Internally, some local variables are defined, prefixed with '__' to
avoid name clashes with the caller.

On the other hand, the macros HA_ATOMIC_UPDATE_MIN/MAX call HA_ATOMIC_CAS. Some
local variables are also defined in these macros, following the same naming
rule as above. The problem is that the '__new' variable is used both in
HA_ATOMIC_UPDATE_MIN/MAX and in HA_ATOMIC_CAS. As a result, the behaviour is
undefined, because '__new' in HA_ATOMIC_CAS is left uninitialized. Unfortunately
gcc fails to detect this error.

To fix the problem, all variables internal to these macros are now suffixed
with the name of the macro to avoid clashes (for instance, '__new_cas' in
HA_ATOMIC_CAS).

This patch must be backported to 1.8.
---
 include/common/hathreads.h | 132 +++--
 1 file changed, 67 insertions(+), 65 deletions(-)

diff --git a/include/common/hathreads.h b/include/common/hathreads.h
index 15b8ce2c1..0f10b48ca 100644
--- a/include/common/hathreads.h
+++ b/include/common/hathreads.h
@@ -41,44 +41,44 @@ extern THREAD_LOCAL unsigned long tid_bit; /* The bit corresponding to the threa
 #define HA_ATOMIC_OR(val, flags) ({*(val) |= (flags);})
 #define HA_ATOMIC_XCHG(val, new)	\
 	({\
-		typeof(*(val)) __old = *(val);\
+		typeof(*(val)) __old_xchg = *(val);			\
 		*(val) = new;		\
-		__old;			\
+		__old_xchg;		\
 	})
 #define HA_ATOMIC_BTS(val, bit)		\
 	({\
-		typeof((val)) __p = (val);\
-		typeof(*__p)  __b = (1UL << (bit));			\
-		typeof(*__p)  __t = *__p & __b;\
-		if (!__t)		\
-			*__p |= __b;	\
-		__t;			\
+		typeof((val)) __p_bts = (val);\
+		typeof(*__p_bts)  __b_bts = (1UL << (bit));		\
+		typeof(*__p_bts)  __t_bts = *__p_bts & __b_bts;		\
+		if (!__t_bts)		\
+			*__p_bts |= __b_bts;\
+		__t_bts;		\
 	})
 #define HA_ATOMIC_BTR(val, bit)		\
 	({\
-		typeof((val)) __p = (val);\
-		typeof(*__p)  __b = (1UL << (bit));			\
-		typeof(*__p)  __t = *__p & __b;\
-		if (__t)		\
-			*__p &= ~__b;	\
-		__t;			\
+		typeof((val)) __p_btr = (val);\
+		typeof(*__p_btr)  __b_btr = (1UL << (bit));		\
+		typeof(*__p_btr)  __t_btr = *__p_btr & __b_btr;		\
+		if (__t_btr)		\
+			*__p_btr &= ~__b_btr;\
+		__t_btr;		\
 	})
 #define HA_ATOMIC_STORE(val, new)({*(val) = new;})
 #define HA_ATOMIC_UPDATE_MAX(val, new)	\
 	({\
-		typeof(*(val)) __new = (new);\
+		typeof(*(val)) __new_max = (new);			\
 	\
-		if (*(val) < __new)	\
-			*(val) = __new;	\
+		if (*(val) < __new_max)	\
+			*(val) = __new_max;\
 		*(val);			\
 	})
 
 #define HA_ATOMIC_UPDATE_MIN(val, new)	\
 	({\
-		typeof(*(val)) __new = (new);\
+		typeof(*(val)) __new_min = (new);			\
 	\
-		if (*(val) > __new)	\
-			*(val) = __new;	\
+		if (*(val) > __new_min)	\
+			*(val) = __new_min;\
 		*(val);			\
 	})
 
@@ -150,51 +150,51 @@ static inline void __ha_barrier_full(void)
  * but only if it differs from the expected one. If it's the same it's a race
  * thus we try again to avoid confusing a possibly sensitive caller.
  */
-#define HA_ATOMIC_CAS(val, old, new)	   \
-	({   \
-		typeof((val)) __val = (val);   \
-		typeof((old)) __oldp = (old);   \
-		typeof(*(old)) __oldv;	   \
-		typeof((new)) __new = (new);   \
-		int __ret;		   \
-		do {			   \
-			__oldv = *__val;   \
-			__ret = __sync_bool_compare_and_swap(__val, *__oldp, __new); \
-		} while (!__ret && *__oldp == __oldv);			   \
-		if (!__ret)		   \
-			*__oldp = __oldv;   \
-		__ret;			   \
+#define HA_ATOMIC_CAS(val, old, new)	\
+	({\
+		typeof((val)) __val_cas = (val);			\
+		typeof((old)) __oldp_cas = (old);			\
+		typeof(*(old)) __oldv_cas;\
+		typeof((new)) __new_cas = (new);			\
+		int __ret_cas;		\
+		do {			\
+			__oldv_cas = *__val_cas;			\
+			__ret_cas = __sync_bool_compare_and_swap(__val_cas, *__oldp_cas, __new_cas); \
+		} while (!__ret_cas && *__oldp_cas == __oldv_c

Re: Haproxy 1.8.4 crashing workers and increased memory usage

2018-04-10 Thread Cyril Bonté
Hi Robin,

> De: "Robin Geuze" 
> À: "Willy Tarreau" 
> Cc: haproxy@formilux.org
> Envoyé: Lundi 9 Avril 2018 10:24:43
> Objet: Re: Haproxy 1.8.4 crashing workers and increased memory usage
> 
> Hey Willy,
> 
> So I made a build this morning with libslz and re-enabled compression
> and within an hour we had the exit code 134 errors, so zlib does not
> seem to be the problem here.

I spent some time on this issue yesterday, without being able to
reproduce it.
I suspect something wrong with pending connections (without any clue, except 
there's an abort() in the path), but couldn't see anything wrong in the code.

There's still something missing in this thread (maybe I missed it), but can you
provide the output of "haproxy -vv"?
Also, are you 100% sure you're running the version you compiled? I prefer to
ask, as it sometimes happens ;-)

Thanks,
Cyril



Re: Rejected connections not getting counted in stats

2018-04-10 Thread Errikos Koen
Hey Moemen,

You are right, I was indeed looking at the wrong counter and had not checked
the socket output. I assumed it would be available in the stats page or in
the Metricbeat module, which I use to track stats.

Thanks for pointing it out!

On 4 April 2018 at 19:08, Moemen MHEDHBI  wrote:

> Hi Errikos,
>
> On 26/03/2018 13:03, Errikos Koen wrote:
>
> Hello,
>
> I have a frontend whitelisted by IP with the following rules:
>
> acl whitelist src -f /etc/haproxy/whitelist.lst
> tcp-request connection reject unless whitelist
>
> and while the documentation suggests I would be able to see the rejected
> connections counted in stats (quote: they are accounted separately for in
> the stats, as "denied connections"), those are stuck at 0.
>
> The whitelist appears to be working ok, making a request from a non
> whitelisted IP results in:
>
> $ curl -v http://hostname
> * About to connect() to hostname port 80 (#0)
> *   Trying xxx.xxx.xxx.xxx...
> * connected
> * Connected to hostname (xxx.xxx.xxx.xxx) port 80 (#0)
> > GET / HTTP/1.1
> > User-Agent: curl/7.26.0
> > Host: hostname
> > Accept: */*
> >
> * additional stuff not fine transfer.c:1037: 0 0
> * Recv failure: Connection reset by peer
> * Closing connection #0
> curl: (56) Recv failure: Connection reset by peer
>
> and whitelisted IPs work ok.
>
> I am running a self compiled haproxy 1.8.4 (with make options USE_PCRE=1
> TARGET=linux2628 USE_OPENSSL=1 USE_ZLIB=1 USE_SYSTEMD=1) on Debian 8 with
> 3.16.0-5-amd64 kernel.
>
> Any ideas?
>
> Thanks
> --
> Errikos Koen,
> Cloud Architect
> www.pamediakopes.gr
>
>
> It works for me using the same version and build options.
> Maybe you are looking at the wrong counter.
> The one in the stats page is about "denied requests" (this is about HTTP
> requests) while you should be looking for "denied connections"; you can
> find more about this here:
> https://cbonte.github.io/haproxy-dconv/1.8/management.html#9.1
> According to the doc, "denied connections" is the 81st field (counting
> from 0), so the following command will help track the counter:
> watch 'echo "show stat" | socat stdio < haproxy-socket-path > | cut -d
> "," -f 1-2,82 | column -s, -t'
>
> ++
>
> --
> Moemen MHEDHBI
>
>
>


-- 
Errikos Koen,
Cloud Architect
www.pamediakopes.gr


Re: [PATCH] BUG/MEDIUM: threads: Fix the max/min calculation because of name clashes

2018-04-10 Thread Willy Tarreau
On Tue, Apr 10, 2018 at 09:29:37AM +0200, Christopher Faulet wrote:
> Sorry, I sent the patch I used for HAProxy 1.8. Here is the right patch, for
> the upstream. But good news for you Willy, it will be easier to backport it
> in 1.8 now :)

Both now merged, thanks for the backport ;-)

Willy



[PATCH] epoll: the listener socket use EPOLLEXCLUSIVE flag

2018-04-10 Thread 龙红波
Hi, all,
 haproxy still has the thundering herd problem in multi-process
mode; the EPOLLEXCLUSIVE flag, available since Linux 4.5, can solve
this problem.


0001-epoll-the-listener-socket-use-EPOLLEXCLUSIVE-flag.patch
Description: Binary data


1.7.10 and 1.6.14 always compress response

2018-04-10 Thread Veiko Kukk

Hi,


Let's run a simple query against a host (real hostnames replaced).

curl https://testhost01.tld -o /dev/null -vvv

Request headers:

> GET / HTTP/1.1
> Host: testhost01.tld
> User-Agent: curl/7.58.0
> Accept: */*

Response headers:

< HTTP/1.1 200 OK
< Date: Tue, 10 Apr 2018 12:23:44 GMT
< Content-Encoding: gzip
< Content-Type: text/html;charset=utf-8
< Cache-Control: no-cache
< Date: Tue, 10 Apr 2018 12:23:44 GMT
< Accept-Ranges: bytes
< Server: Restlet-Framework/2.3.4
< Vary: Accept-Charset, Accept-Encoding, Accept-Language, Accept
< Connection: close
< Access-Control-Allow-Origin: *
< Strict-Transport-Security: max-age=15768000

This happens even though neither a compression algo nor a compression type is 
specified in the haproxy configuration file.


But let's say in the request that we don't want any compression:

curl https://testhost01.tld -H "Accept-Encoding: identity" -o /dev/null -vvv

Request headers:

> GET / HTTP/1.1
> Host: testhost01.tld
> User-Agent: curl/7.58.0
> Accept: */*
> Accept-Encoding: identity

Response headers:

< HTTP/1.1 200 OK
< Date: Tue, 10 Apr 2018 12:40:25 GMT
< Content-Encoding: gzip
< Content-Type: text/html;charset=utf-8
< Cache-Control: no-cache
< Date: Tue, 10 Apr 2018 12:40:25 GMT
< Accept-Ranges: bytes
< Server: Restlet-Framework/2.3.4
< Vary: Accept-Charset, Accept-Encoding, Accept-Language, Accept
< Connection: close
< Access-Control-Allow-Origin: *
< Strict-Transport-Security: max-age=15768000

Still, the response is gzipped.

HA-Proxy version 1.6.14-66af4a1 2018/01/02
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing 
-Wdeclaration-after-statement -fwrapv

  OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=1 USE_STATIC_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.3
Running on zlib version : 1.2.3
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")

Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 7.8 2008-09-05
Running on PCRE version : 7.8 2008-09-05
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with Lua version : Lua 5.3.4
Built with transparent proxy support using: IP_TRANSPARENT 
IPV6_TRANSPARENT IP_FREEBIND


Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.





Re: [PATCH] epoll: the listener socket use EPOLLEXCLUSIVE flag

2018-04-10 Thread Willy Tarreau
Hi,

On Tue, Apr 10, 2018 at 08:38:32PM +0800, 龙红波 wrote:
> Hi, all,
>  haproxy still have the thundering herd problem in the multi-process
> mode, the EPOLLEXCLUSIVE flag has been added since linux 4.5, which can
> solve this problem

Well, I disagree with this approach: it will instead degrade the situation.
Let me explain.

Right now when working in multi-process mode, it is strongly suggested to
use multiple "bind" lines each with its own process. When this is done,
the sockets are bound with SO_REUSEPORT, where the kernel performs some
round-robin load balancing between all the sockets, and each process
receiving connections will be woken up and will be able to accept at
once *all* pending connections for its listener.

With your approach what will happen is that a single process will be
woken up for multiple pending requests at once, it will suck them all
in a loop without leaving a chance to the other processes to take their
share. This creates a huge imbalance that is already visible when using
nbproc without the "process" directive on the bind lines. With SSL this
has an even worse impact since a process can steal a lot of traffic and
spend a lot of time in handshakes while the other ones are twiddling
thumbs.

Regards,
Willy



Re: 1.7.10 and 1.6.14 always compress response

2018-04-10 Thread William Lallemand
On Tue, Apr 10, 2018 at 03:43:12PM +0300, Veiko Kukk wrote:
> Hi,
> 

Hi,

> 
> This happens even when either compression algo nor compression type are 
> specified in haproxy configuration file.
>

If you didn't specify any compression keyword in the haproxy configuration
file, that's probably your backend server which is doing the compression.
 

-- 
William Lallemand



Site Proposal

2018-04-10 Thread Ruel Revales
Good day!

I came across your rockstar site, which is very informative.
Can I ask about adding my fantastic site, https://www.topvpncanada.com/, to
your page http://www.haproxy.org/they-use-it.html?

Hoping for your kind consideration.

Ruel Revales


Re: 1.7.10 and 1.6.14 always compress response

2018-04-10 Thread Veiko Kukk

On 04/10/2018 03:51 PM, William Lallemand wrote:

On Tue, Apr 10, 2018 at 03:43:12PM +0300, Veiko Kukk wrote:

Hi,



Hi,



This happens even when either compression algo nor compression type are
specified in haproxy configuration file.



If you didn't specify any compression keyword in the haproxy configuration
file, that's probably your backend server which is doing the compression.


Actually, you are right.
What is surprising is that, when requesting non-compressed content from 
haproxy, it still passes through compressed data.


Maybe that's what the standard specifies, I don't know.
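As a side note, if the intent is for haproxy itself to own compression and keep the backend from compressing, a configuration along these lines could apply: "compression offload" makes haproxy strip the Accept-Encoding header from requests before they reach the server (backend name and server address below are illustrative only):

```
backend app
    mode http
    compression algo gzip
    compression type text/html text/plain
    compression offload
    server srv1 192.168.0.10:80
```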

Thanks,
Veiko





Re: Logs full TCP incoming and outgoing packets

2018-04-10 Thread florent

Hello,


Thanks for the answer. Yes, I would prefer to say no as well, but I am not 
the CTO here ;) I thought about tcpdump as well, even if it will kill the 
performance!



Anyway, I found in the ML archives some relevant information, like this 
one:



https://www.mail-archive.com/haproxy@formilux.org/msg25964.html


but in my case, it logs nothing. Trying to log req.len gives a size 
of 0 for the buffer as well. I did something like this, in the frontend 
section:



frontend localnode
    mode    tcp
    #option  tcplog
    #log global
    bind    192.168.1.4:4300
    default_backend uxdaemon
    declare capture request len 80
    tcp-request inspect-delay 3s
    #tcp-request content capture dst len 15
    tcp-request content capture req.payload(0,80) len 80
    #tcp-request content capture req.len len 80
    log-format  "%[capture.req.hdr(0)]"

I tried with and without the

declare capture request len 80

just in case it was required to declare the buffer beforehand, but I got 
nothing but a dash in the logs :/ I also commented out "option tcplog" 
and "log global", but no changes.


Best regards,
Florent

On 2018-04-10 02:24, Jonathan Matthews wrote:

On 10 April 2018 at 00:04,   wrote:

Hello everybody,

For an application, I use haproxy in TCP mode, but I would need to log,
from the main load balancer machine, all the TCP transactions (incoming
packets sent to the node, then the answer that is sent back from the
node to the client through the haproxy load balancer machine).

Is it possible to do such a thing? I started to dig in the ML and found
little information about capturing the tcp-request, which does not work
for now... and I need the response as well... so I preferred to ask if
someone has experience doing this. Sure, it will have a performance
penalty, but exhaustive logging is more important than that, and it is
the best solution to avoid a lot of changes in the existing
infrastructure we just load-balanced.


I don't believe this is possible inside haproxy right now.

If I *had* to do this, I'd start by saying "no", and then I'd work out
how to run a tcpdump process on the machine with carefully tuned
filters and a -w parameter. Then I'd drink something strong.

J





Re: Can you help me

2018-04-10 Thread Aleksandar Lazic
Hi.

On 10.04.2018 at 22:14, Juan Carlos Real Guevara wrote:
> Hi and thanks for your response. I think that I've found the problem:
> in the first packets the ip.src is the IP of the balancer, and in the
> GET to download the archive the ip.src is the client's, and the
> response is forbidden.
> 
> In your experience, is it possible for the ip.src to always be the
> client's IP, and not the IP of the balancer?

Please keep the mailing list in the answer, thanks.
Please can you add the requested information below.

> Thanks

Regards
Aleks

> On 2018-04-05 13:41 GMT-05:00, Aleksandar Lazic wrote:
> 
> Hi Juan Carlos.
> 
> On 05.04.2018 at 19:18, Juan Carlos Real Guevara wrote:
> > Hi i have a problem with a config in haproxy, can you help me?
> 
> Well with such little information probably not.
> 
> Please can you tell us a little bit more about the setup.
> 
> OS:
> HAProxy version: haproxy -vv
> HAProxy config:
> 
> What's the backend server?
> What's the protocol which the server expect to get?
> What's in the haproxy log?
> 
> > The problem is that when i send a request throught haproxy with
> > "c.g.cloud.web.HttpSessionMonitor" the backend server response error 403
> > Forbidden.
> 
> Have you seen https://en.wikipedia.org/wiki/HTTP_403 ?
> 
> Do you receive the same error when you call the backend server directly
> with "c.g.cloud.web.HttpSessionMonitor"?
> 
> What's this "c.g.cloud.web.HttpSessionMonitor" ?
> Does the backend server expect some authentication?
> 
> > Thanks
> 
> Please keep the mailing list in the answer, thanks.
> 
> Best regards
> Aleks
> 



Re: DNS resolver and mixed case responses

2018-04-10 Thread Ben Draut
It's interesting that the default behavior of HAProxy resolvers can
conflict with the default behavior of BIND (if you're unlucky with
whatever BIND has cached).

By default, BIND uses case-insensitive compression, which can cause it to
use a different case in the ANSWER than in the QUESTION (see
'no-case-compress':
https://ftp.isc.org/isc/bind9/cur/9.9/doc/arm/Bv9ARM.ch06.html). We were
impacted by this recently.

Also interesting:
https://indico.dns-oarc.net/event/20/session/2/contribution/12/material/slides/0.pdf


On Mon, Apr 9, 2018 at 2:12 AM, Baptiste  wrote:

> So, it seems that responses that do not match the case should be dropped:
> https://twitter.com/PowerDNS_Bert/status/983254222694240257
>
> Baptiste
>


1.8.7 http-tunnel doesn't seem to work? (but default http-keep-alive does)

2018-04-10 Thread PiBa-NL

Hi Haproxy List,

I upgraded to 1.8.7 (coming from 1.8.3) and found I could no longer use 
one of our IIS websites. The login procedure that's using Windows 
authentication / NTLM seems to fail.
Removing option http-tunnel seems to fix this, though. AFAIK http-tunnel 
'should' switch to tunnel mode after the first request, and as such should 
have no issue sending the credentials to the server?


Below are:  config / haproxy -vv / tcpdump / sess all

Is it a known issue? Is there anything else I can provide?

Regards,

PiBa-NL (Pieter)

-
# Automaticaly generated, dont edit manually.
# Generated on: 2018-04-10 21:00
global
    maxconn            1000
    log            192.168.8.10    local1    info
    stats socket /tmp/haproxy.socket level admin
    gid            80
    nbproc            1
    nbthread            1
    hard-stop-after        15m
    chroot                /tmp/haproxy_chroot
    daemon
    tune.ssl.default-dh-param    2048
    defaults
    option log-health-checks


frontend site.domain.nl2
    bind            192.168.8.5:443 name 192.168.8.5:443  ssl  crt 
/var/etc/haproxy/site.domain.nl2.pem crt-list 
/var/etc/haproxy/site.domain.nl2.crt_list

    mode            http
    log            global
    option            httplog
    option            http-tunnel
    maxconn            100
    timeout client        1h
    option tcplog
    default_backend website-intern_http_ipvANY

backend site-intern_http_ipvANY
    mode            http
    log            global
    option            http-tunnel
    timeout connect        10s
    timeout server        1h
    retries            3
    server            site 192.168.13.44:443 ssl  weight 1.1 verify none

-
[2.4.3-RELEASE][root@pfsense_5.local]/root: haproxy -vv
HA-Proxy version 1.8.7 2018/04/07
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -pipe -fstack-protector -fno-strict-aliasing 
-fno-strict-aliasing -Wdeclaration-after-statement -fwrapv 
-fno-strict-overflow -Wno-address-of-packed-member -Wno-null-dereference 
-Wno-unused-label -DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_CPU_AFFINITY=1 
USE_ACCEPT4=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_STATIC_PCRE=1 
USE_PCRE_JIT=1


Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with network namespace support.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")

Built with PCRE version : 8.40 2017-01-11
Running on PCRE version : 8.40 2017-01-11
PCRE library supports JIT : yes
Built with multi-threading support.
Encrypted password support via crypt(3): yes
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
Built with Lua version : Lua 5.3.4
Built with OpenSSL version : OpenSSL 1.0.2m-freebsd  2 Nov 2017
Running on OpenSSL version : OpenSSL 1.0.2m-freebsd  2 Nov 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.

Available filters :
    [TRACE] trace
    [COMP] compression
    [SPOE] spoe
-
tcpdump of: Client 192.168.8.32 > Haproxy 192.168.8.5:

21:09:13.452118 IP 192.168.8.32.51658 > 192.168.8.5.443: Flags [S], seq 
1417754656, win 8192, options [mss 1260,nop,wscale 8,nop,nop,sackOK], 
length 0
21:09:13.452312 IP 192.168.8.5.443 > 192.168.8.32.51658: Flags [S.], seq 
1950703403, ack 1417754657, win 65228, options [mss 1260,nop,wscale 
7,sackOK,eol], length 0
21:09:13.453030 IP 192.168.8.32.51658 > 192.168.8.5.443: Flags [.], ack 
1, win 260, length 0
21:09:13.457740 IP 192.168.8.32.51658 > 192.168.8.5.443: Flags [P.], seq 
1:190, ack 1, win 260, length 189
21:09:13.457762 IP 192.168.8.5.443 > 192.168.8.32.51658: Flags [.], ack 
190, win 510, length 0
21:09:13.459503 IP 192.168.8.5.443 > 192.168.8.32.51658: Flags [.], seq 
1:1261, ack 190, win 511, length 1260
21:09:13.459516 IP 192.168.8.5.443 > 192.168.8.32.51658: Flags [.], seq 
1261:2521, ack 190, win 511, length 1260
21:09:13.459527 IP 192.168.8.5.443 > 192.168.8.32.51658: Flags [P.], seq 
2521:2686, ack 190, win 511, length 165
21:09:13.460342 IP 192.168.8.32.51658 > 192.168.8.5.443: Flags [.], ack 
2686, win 260, length 0
21:09:13.478984 IP 192.168.8.32.51658 > 192.168.8.5.443: Flags [P.], seq 
190:316, ack 2686, win 260, length 126
21:09:13.479038 IP 192.168.8.5.443 > 192.168.8.32.51658: Flags [.], ack 
316, win 510, length 0
21:09:13.480105 IP 192.168.8.5.443 > 192.168.8.32.51658: Flags [P.], seq 
2686:2737, ack 316, win 511, length 51
21:09:13.490136 IP 192.168.8.32.51658 > 192.168.8.5.443: Flags [P.

Re: resolvers - resolv.conf fallback

2018-04-10 Thread Ben Draut
I agree.

On Mon, Apr 9, 2018 at 1:35 AM, Baptiste  wrote:

>
>
> On Fri, Apr 6, 2018 at 4:54 PM, Willy Tarreau  wrote:
>
>> On Fri, Apr 06, 2018 at 04:50:54PM +0200, Lukas Tribus wrote:
>> > > Well, sometimes when you're debugging a configuration, it's nice to be
>> > > able to disable some elements. Same for those manipulating/building
>> > > configs by assembling elements and iteratively pass them through
>> > > "haproxy -c". That's exactly the reason why we relaxed a few checks in
>> > > the past, like accepting a frontend with no bind line or accepting a
>> > > backend with a "cookie" directive with no cookie on server lines. In
>> > > fact we could simply emit a warning when a resolvers section has no
>> > > resolver nor resolv.conf enabled, but at least accept to start.
>> >
>> > Understood; however in this specific case I would argue one would
>> > remove the "resolver" directive from the server-line(s), instead of
>> > dropping the nameservers from the global nameserver declaration.
>>
>> No, because in order to do this, you also have to remove all references
>> on all "server" lines, which is quite a pain, and error-prone when you
>> want to reactivate them.
>>
>> > Maybe a config warning would be a compromise for this case?
>>
>> Yes, that's what I mentioned above, I'm all in favor of this given that
>> we can't objectively find a valid use case for an empty resolvers section
>> in production.
>>
>> Cheers,
>> Willy
>>
>
>
> Ok, so just to summarize:
> - we should enable parsing of resolv.conf with a configuration statement
> in the resolvers section
> - only nameserver directives from resolv.conf will be parsed for now
> - parsing of resolv.conf can be used in conjunction with nameserver
> directives in the resolvers section
> - HAProxy should emit a warning message when parsing a configuration which
> has neither resolv.conf nor nameserver directives enabled
>
> Is that correct?
>
> Baptiste
>
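Sketched as configuration, the summarized proposal could translate to a resolvers section like the following (the 'parse-resolv-conf' keyword name is an assumption; nothing had been merged at this point, and the server address is illustrative):

```
resolvers mydns
    # hypothetical directive: also read "nameserver" lines from /etc/resolv.conf
    parse-resolv-conf
    # can be combined with explicit nameserver directives
    nameserver ns1 10.0.0.53:53
```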


Segfault in haproxy v1.8 with Lua

2018-04-10 Thread Hessam Mirsadeghi
Hi,

I have a simple Lua http-response action script that leads to segmentation
fault in haproxy. The Lua script is a simple call to txn.res:forward(0).
A sample haproxy config and the Lua script files are attached. The backend
is simply an nginx instance which responds with 204 No Content.

The commit that introduces this problem is:
commit 8a5949f2d74c3a3a6c6da25449992c312b183ef3
BUG/MEDIUM: http: Switch the HTTP response in tunnel mode as earlier as
possible

Any ideas?

Best,
Seyed


foo.lua
Description: Binary data


haproxy.cfg
Description: Binary data