Re: HAProxy makes backend unresponsive when handling multiple thousand connections per second

2017-06-21 Thread Igor Cicimov
Hi Lukas,

On 22 Jun 2017 3:02 am, "Lukas Tribus"  wrote:

Hello,


> Daniel, if using ssl to the backends shouldn't you use http mode?
> Per your config you are using tcp, which is the default one. Afaik tcp
> is for ssl passthrough.

For the record, this is not true. Just because you need TCP mode
for TLS passthrough, doesn't mean you have to use HTTP mode when
terminating TLS.

Actually, terminating TLS while using TCP mode is a quite common
configuration (for example with HTTP/2).


Thanks for clarifying this.




>> Try adding:
>> option httpclose
>> in the backend and see if that helps.
>
> Sorry, replace httpclose with  http-server-close

Actually, I would have suggested the opposite: making the whole
thing less expensive, by going full blown keep-alive with
http-reuse:

option http-keep-alive
option prefer-last-server
timeout http-keep-alive 30s
http-reuse safe


Keep-alive is on by default, hence my suggestion to try the opposite. Of
course, having keep-alive enabled is always better, especially in the case of ssl.




> global
>  ulimit-n 2

Why specify ulimit? Haproxy will do this for you; you are just
asking for trouble. I suggest you remove this.



Maybe something on your backend (conntrack or the application)
is rate-limiting per IP, or the aggressive client you are facing
keep-alives properly when talking to the backend directly, while it
doesn't when going through haproxy.


I would apply the keep-alive configuration above, and I would
also suggest that you check the CPU load on your backend server
while connections through haproxy become unresponsive, because that
CPU can be saturated by TLS negotiations as well.


That's what the haproxy log shows: the response time from the Tomcat
backend is high, suggesting something is wrong there. Maybe it is one of the
things you mentioned above (which makes sense), or some system settings; or,
if we can see the Tomcat connector settings (and possibly logs), maybe
something there is causing the issues.



Regards,
Lukas


Re: LoadBalance whole subnet

2017-06-21 Thread William Lallemand
On Wed, Jun 21, 2017 at 08:05:20AM +0200, Aleksandar Lazic wrote:
> > Hi Aleksandar,
> 
> > Don't worry that's a mistake, Sarunas put cont...@haproxy.com in copy to his
> > mail which lead to this.
> 
> > Please don't continue this thread on the mailing list, thanks.
> 
> 
> Well, I assume I understand you.
> 

To clarify, I think I wasn't clear enough. I meant it's not necessary to
continue the thread with Anamarija and contact@ on the mailing list :-)

-- 
William Lallemand



Re: HAProxy makes backend unresponsive when handling multiple thousand connections per second

2017-06-21 Thread Lukas Tribus
Hello,


> Daniel, if using ssl to the backends shouldn't you use http mode?
> Per your config you are using tcp, which is the default one. Afaik tcp
> is for ssl passthrough.

For the record, this is not true. Just because you need TCP mode
for TLS passthrough, doesn't mean you have to use HTTP mode when
terminating TLS.

Actually, terminating TLS while using TCP mode is a quite common
configuration (for example with HTTP/2).



>> Try adding:
>> option httpclose
>> in the backend and see if that helps.
>
> Sorry, replace httpclose with  http-server-close

Actually, I would have suggested the opposite: making the whole
thing less expensive, by going full blown keep-alive with
http-reuse:

option http-keep-alive
option prefer-last-server
timeout http-keep-alive 30s
http-reuse safe
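
For illustration, applied to the vakanz-backend from your posted config it
could look roughly like this (your server lines unchanged; the keep-alive
timeout is just an example value):

backend vakanz-backend
    option http-keep-alive
    option prefer-last-server
    timeout http-keep-alive 30s
    http-reuse safe
    server 10.2.8.28 10.2.8.28:8443 check ssl verify none minconn 500 maxconn 500
    server 10.2.8.40 10.2.8.40:8443 check ssl verify none minconn 500 maxconn 500
    server 10.2.8.41 10.2.8.41:8443 check ssl verify none minconn 500 maxconn 500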



> global
>  ulimit-n 2

Why specify ulimit? Haproxy will do this for you; you are just
asking for trouble. I suggest you remove this.
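
In other words, a minimal sketch of the global section without it (the
maxconn number below is only a placeholder, use whatever global limit you
actually need; haproxy derives the required file-descriptor limit from it
automatically):

global
    maxconn 50000
    # ulimit-n removed on purpose - haproxy computes it from maxconn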



Maybe something on your backend (conntrack or the application)
is rate-limiting per IP, or the aggressive client you are facing
keep-alives properly when talking to the backend directly, while it
doesn't when going through haproxy.


I would apply the keep-alive configuration above, and I would
also suggest that you check the CPU load on your backend server
while connections through haproxy become unresponsive, because that
CPU can be saturated by TLS negotiations as well.



Regards,
Lukas




Re: Trouble getting rid of Connection Keep-Alive header

2017-06-21 Thread Lukas Tribus
Hi Mats,


On 21.06.2017 at 14:30, Mats Eklund wrote:
>
> Hi,
>
>
> Thanks, here's the full config:
>

So for the record, what you are trying to achieve is to disable HTTP
keep-alive between haproxy and the browser?

In the default section, replace:
option http-server-close

with:
option httpclose
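
So, based on the config you posted, the defaults section would then start
like this (everything else unchanged):

defaults
    mode    http
    log global
    option  httplog
    option  dontlognull
    option  httpclose
    ...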



Regards,
Lukas






Re: haproxy does not capture the complete request header host sometimes

2017-06-21 Thread Willy Tarreau
On Wed, Jun 21, 2017 at 05:00:01PM +0200, Christopher Faulet wrote:
> I attached a patch to improve the configuration parsing and to update the
> documentation. It can be backported in 1.7, 1.6 and 1.5. I finally marked
> this patch as a bug fix.

Applied, thanks to both of you for killing this one.

willy



Re: haproxy does not capture the complete request header host sometimes

2017-06-21 Thread Christopher Faulet

On 13/06/2017 at 14:16, Christopher Faulet wrote:

On 13/06/2017 at 10:31, siclesang wrote:

haproxy balances by host, but it often captures only a part of the request
Host header (or null), and those requests are balanced to the default server.

How can I debug this?



Hi,

I'll try to help you. Can you share your configuration, please? It could
help to find a potential bug.

Could you also provide a tcpdump of a buggy request?

And finally, could you upgrade your HAProxy to the latest 1.6 version
(1.6.12), just to be sure?



Hi,

Just for the record: after some private exchanges with siclesang, we 
found the cause in the configuration, a too-high value for tune.http.maxhdr 
that the parser did not reject. Here is the explanation:


Well, I think I found the problem. This is not really a bug. There
is something I missed in your configuration: you set tune.http.maxhdr to
64000, and I guess you kept this parameter during all your tests. This is an
invalid value: it needs to be in the range [0, 32767]. This is mandatory
to avoid an integer overflow, because the array where header offsets
are stored uses signed short integers.

To be fair, there is no check on this value during the configuration
parsing. And the documentation does not specify any range for this
parameter. I will post a fix very quickly to avoid errors.

BTW, this is a really huge value. The default is 101. You can
legitimately increase this value, but there is no reason to have 64000
headers in an HTTP message. IMHO, 1000/2000 is already a very high limit.
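
So, if the default really is too low for this application, a sketch of a
more reasonable setting would be something like:

global
    # raise only as far as actually needed; default is 101, hard limit 32767
    tune.http.maxhdr 1000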

I attached a patch to improve the configuration parsing and to update 
the documentation. It can be backported in 1.7, 1.6 and 1.5. I finally 
marked this patch as a bug fix.


Thanks siclesang for your help,
--
Christopher Faulet
>From 5b00b9eeda0838a2ab86835c9e3b28b503889b24 Mon Sep 17 00:00:00 2001
From: Christopher Faulet 
Date: Wed, 21 Jun 2017 16:31:35 +0200
Subject: [PATCH] BUG/MINOR: cfgparse: Check if tune.http.maxhdr is in the
 range 1..32767

We cannot store more than 32K headers in the structure hdr_idx, because
internally we use signed short integers. To avoid any bugs (due to an integer
overflow), a check has been added on tune.http.maxhdr to be sure it is not set to a
value greater than 32767 or lower than 1 (because it makes no sense to set
this parameter to a value <= 0).

The documentation has been updated accordingly.

This patch can be backported in 1.7, 1.6 and 1.5.
---
 doc/configuration.txt | 6 +++---
 src/cfgparse.c        | 8 +++++++-
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 49bfd85..082b857 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -1374,9 +1374,9 @@ tune.http.maxhdr <number>
   are blocked with "502 Bad Gateway". The default value is 101, which is enough
   for all usages, considering that the widely deployed Apache server uses the
   same limit. It can be useful to push this limit further to temporarily allow
-  a buggy application to work by the time it gets fixed. Keep in mind that each
-  new header consumes 32bits of memory for each session, so don't push this
-  limit too high.
+  a buggy application to work by the time it gets fixed. The accepted range is
+  1..32767. Keep in mind that each new header consumes 32bits of memory for
+  each session, so don't push this limit too high.
 
 tune.idletimer <timeout>
   Sets the duration after which haproxy will consider that an empty buffer is
diff --git a/src/cfgparse.c b/src/cfgparse.c
index 261a0eb..3706bca 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -916,7 +916,13 @@ int cfg_parse_global(const char *file, int linenum, char **args, int kwm)
 			err_code |= ERR_ALERT | ERR_FATAL;
 			goto out;
 		}
-		global.tune.max_http_hdr = atol(args[1]);
+		global.tune.max_http_hdr = atoi(args[1]);
+		if (global.tune.max_http_hdr < 1 || global.tune.max_http_hdr > 32767) {
+			Alert("parsing [%s:%d] : '%s' expects a numeric value between 1 and 32767\n",
+			  file, linenum, args[0]);
+			err_code |= ERR_ALERT | ERR_FATAL;
+			goto out;
+		}
 	}
 	else if (!strcmp(args[0], "tune.comp.maxlevel")) {
 		if (alertif_too_many_args(1, file, linenum, args, _code))
-- 
2.9.4



Re: Trouble getting rid of Connection Keep-Alive header

2017-06-21 Thread Holger Just
Hi Mats,

Mats Eklund wrote:
> I am running a load balanced Tomcat application on Openshift Online
> v2, with HAProxy ver. 1.4.22 as load balancer.

With your current config, HAProxy will add a "Connection: close" header
to responses. However, since you mentioned you are running this in an
OpenShift environment, there might be (and probably is) another layer
of proxies involved between your HAProxy and your client.

Since you are speaking plain HTTP here, this other proxy might choose to
use keep-alive connections towards the client, similar to how HAProxy's
option http-server-close works. In that case, you would have to change
the configuration of this other proxy too.
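
A quick way to check where the keep-alive comes from is to compare the
response headers seen through the public route with those returned by your
HAProxy directly (the URL and address below are placeholders for your setup):

# through the OpenShift route (any extra proxy layer included)
curl -svo /dev/null https://your-app.example.com/ 2>&1 | grep -iE '^< (connection|keep-alive)'

# directly against the gear's HAProxy, if you can reach it
curl -svo /dev/null http://127.0.0.1:8080/ 2>&1 | grep -iE '^< (connection|keep-alive)'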

Best,
Holger

P.S. HAProxy 1.4 is OLD and receives only critical fixes now. You should
seriously consider upgrading to a newer version. The current stable
version is 1.7.6.

At the very least, you should upgrade to the latest 1.4 version: 1.4.27
has fixed 83 known bugs since 1.4.22. See
https://www.haproxy.org/bugs/bugs-1.4.22.html for details.



Re: Trouble getting rid of Connection Keep-Alive header

2017-06-21 Thread Mats Eklund
Hi,


Thanks, here's the full config:


global
maxconn 256
stats socket ...

defaults
mode    http
log global
option  httplog
option  dontlognull
option http-server-close
#option forwardfor   except 127.0.0.0/8
option  redispatch
retries 3
timeout http-request    10s
timeout queue   1m
timeout connect 10s
timeout client  1m
timeout server  1m
timeout http-keep-alive 10s
timeout check   10s
maxconn 128

listen stats 127:...
mode http
stats enable
stats uri /

listen express 127:...
cookie GEAR insert indirect nocache
option httpchk GET /
http-check expect rstatus 2..|3..|401
option httpclose
balance leastconn
server local-gear 127:... check fall 2 rise 3 inter 2000 cookie 
local-...

Thanks,
Mats


>

>

> On 21.06.2017 at 07:59, Mats Eklund wrote:
> >
> >
> > Hi,
> >
> >
> > I am running a load balanced Tomcat application on Openshift Online v2, 
> > with HAProxy ver. 1.4.22 as load balancer.
> >
> >
> > I would like to have HTTP connections closed after each response is 
> > returned. But am unable to make the response contain the corresponding 
> > response headers (i.e. "Connection: close").
> >
> >
> > I have tried adding "option httpclose" in the HAProxy config file, but 
> > still the response headers contains "Connection: keep-alive" and a 
> > "Keep-Alive: timeout=15, max=100".
> >
> >
> > I have even tried "rspdel ^Connection:\ .*" but still the response contains 
> > the above mentioned headers. (That I'm editing the right config file I have 
> > verified by successfully adding a rspadd instruction).
> >
> >
> > Any advice is much appreciated!
> >
>
> We are gonna need the full configuration.
>
> Is this http/1.1 or http/1.0 on the front and backend?
> Confirm that you are running in  http mode ("mode http") please.
>
>
> Regards,
> Lukas



Re: HAProxy makes backend unresponsive when handling multiple thousand connections per second

2017-06-21 Thread Igor Cicimov
Sorry, replace httpclose with  http-server-close

On 21 Jun 2017 7:55 pm, "Igor Cicimov" 
wrote:

> Yes saw it but too late. Anyway according to the timers the Tr:26040 means
> it took 26 seconds for the server to send the response. Any errors in the
> backend logs?
>
> client_ip:193.XX.XX.XXX client_port:18935 SSL_version:TLSv1.2
> SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:26150 Tq:106 Tw:0 Tc:3 Tr:26040
>
>
> Try adding:
>
> option httpclose
>
> in the backend and see if that helps.
>
> On 21 Jun 2017 7:48 pm, "Daniel Heitepriem" 
> wrote:
>
> Hi Igor,
>
> the config is set to "mode http" (see below); only the log output is set to
> "tcplog" to get a more detailed log output. Please correct me if I'm wrong,
> but according to the config, HTTP mode is (or at least should be) used.
>
>
> defaults
> log global
> option tcplog
> log-format %f\ %b/%s\ client_ip:%ci\ client_port:%cp\
> SSL_version:%sslv\ SSL_cypher:%sslc\ %ts\ Tt:%Tt\ Tq:%Tq\ Tw:%Tw\ Tc:%Tc\
> Tr:%Tr
> mode http
> timeout connect 5000
> timeout check 5000
> timeout client 3
> timeout server 3
> retries 3
>
> frontend ndc
> http-response set-header Strict-Transport-Security max-age=31536000;\
> includeSubdomains;\ preload
> http-response set-header X-Content-Type-Options nosniff
>
> bind *:443 ssl crt /opt/etc/haproxy/domain_com.pem force-tlsv12
> no-sslv3
> maxconn 2
>
> acl fare_availability path_beg /ndc/fare/v1/availability
> acl flight_availability path_beg /ndc/flight/v1/availability
> use_backend vakanz-backend if flight_availability or fare_availability
> default_backend booking-backend
>
> backend booking-backend
> server 10.2.8.28 10.2.8.23:8443 check ssl verify none minconn 500
> maxconn 500
>
> backend vakanz-backend
> server 10.2.8.28 10.2.8.28:8443 check ssl verify none minconn 500
> maxconn 500
> server 10.2.8.40 10.2.8.40:8443 check ssl verify none minconn 500
> maxconn 500
> server 10.2.8.41 10.2.8.41:8443 check ssl verify none minconn 500
> maxconn 500
>
> Regards,
> Daniel
>
> On 21.06.17 at 11:37, Igor Cicimov wrote:
>
>
>
> On 21 Jun 2017 6:34 pm, "Daniel Heitepriem" 
> wrote:
>
> Nothing special. No errors, no dropped connections just an increased
> server response time (Tr). An excerpt from low and high traffic times is
> below:
>
> Jun 20 18:05:29 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.28
> client_ip:193.XX.XX.XXX client_port:50876 SSL_version:TLSv1.2
> SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:157 Tq:95 Tw:0 Tc:2 Tr:60
> Jun 20 18:05:29 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.41
> client_ip:193.XX.XX.XXX client_port:32910 SSL_version:TLSv1.2
> SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:148 Tq:82 Tw:0 Tc:1 Tr:65
> Jun 20 18:05:30 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.40
> client_ip:193.XX.XX.XXX client_port:51077 SSL_version:TLSv1.2
> SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:525 Tq:312 Tw:0 Tc:2 Tr:211
>
> Jun 20 22:05:36 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.28
> client_ip:193.XX.XX.XXX client_port:48936 SSL_version:TLSv1.2
> SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:25368 Tq:101 Tw:0 Tc:3 Tr:25264
> Jun 20 22:05:36 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.41
> client_ip:193.XX.XX.XXX client_port:43030 SSL_version:TLSv1.2
> SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:23474 Tq:88 Tw:0 Tc:2 Tr:23383
> Jun 20 22:05:36 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.40
> client_ip:193.XX.XX.XXX client_port:18935 SSL_version:TLSv1.2
> SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:26150 Tq:106 Tw:0 Tc:3 Tr:26040
>
>
> On 21.06.17 at 10:21, Igor Cicimov wrote:
>
>
>
> On 21 Jun 2017 6:11 pm, "Daniel Heitepriem" 
> wrote:
>
> Hi Jarno,
>
> yes we are decrypting TLS on the frontend (official SSL-certificate) and
> re-encrypt it before sending it to the backend (company policy so not that
> easy to change it to an unencrypted connection). The CPU usage is not
> higher than 15-20% even during peak times and the memory usage is also
> quite low (200-800MB).
>
> Regards,
> Daniel
>
> On 21.06.17 at 10:00, Jarno Huuskonen wrote:
>
> Hi,
>>
>> On Wed, Jun 21, Daniel Heitepriem wrote:
>>
>>> we got a problem recently which we can't explain to ourself. We got
>>> a java application (Tomcat WAR-File) which has to handle several
>>> million of requests per day and several thousand requests per second
>>> during peak times. Due to this high amount we are splitting traffic
>>> using an ACL in "booking traffic" and "availability traffic".
>>> Booking traffic is negligible but the Availability traffic is
>>> load-balanced over several application servers. The problem that
>>> occurs is that our external partner "floods" the
>>> Availability-Frontend with several thousand requests per second and
>>> the backend becomes unresponsive. If we redirect 

Re: HAProxy makes backend unresponsive when handling multiple thousand connections per second

2017-06-21 Thread Igor Cicimov
Yes, saw it, but too late. Anyway, according to the timers, Tr:26040 means
it took 26 seconds for the server to send the response. Any errors in the
backend logs?

client_ip:193.XX.XX.XXX client_port:18935 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:26150 Tq:106 Tw:0 Tc:3 Tr:26040
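
For reference, the haproxy timers in that line add up roughly as
(Tq = time to receive the request, Tw = queue wait, Tc = connect time to
the server, Tr = server response time, Tt = total):

    Tt 26150 ~= Tq 106 + Tw 0 + Tc 3 + Tr 26040 (+ ~1 ms of data transfer)

so almost all of the time is spent waiting on the backend, not inside
haproxy.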


Try adding:

option httpclose

in the backend and see if that helps.

On 21 Jun 2017 7:48 pm, "Daniel Heitepriem" 
wrote:

Hi Igor,

the config is set to "mode http" (see below); only the log output is set to
"tcplog" to get a more detailed log output. Please correct me if I'm wrong,
but according to the config, HTTP mode is (or at least should be) used.


defaults
log global
option tcplog
log-format %f\ %b/%s\ client_ip:%ci\ client_port:%cp\
SSL_version:%sslv\ SSL_cypher:%sslc\ %ts\ Tt:%Tt\ Tq:%Tq\ Tw:%Tw\ Tc:%Tc\
Tr:%Tr
mode http
timeout connect 5000
timeout check 5000
timeout client 3
timeout server 3
retries 3

frontend ndc
http-response set-header Strict-Transport-Security max-age=31536000;\
includeSubdomains;\ preload
http-response set-header X-Content-Type-Options nosniff

bind *:443 ssl crt /opt/etc/haproxy/domain_com.pem force-tlsv12 no-sslv3
maxconn 2

acl fare_availability path_beg /ndc/fare/v1/availability
acl flight_availability path_beg /ndc/flight/v1/availability
use_backend vakanz-backend if flight_availability or fare_availability
default_backend booking-backend

backend booking-backend
server 10.2.8.28 10.2.8.23:8443 check ssl verify none minconn 500
maxconn 500

backend vakanz-backend
server 10.2.8.28 10.2.8.28:8443 check ssl verify none minconn 500
maxconn 500
server 10.2.8.40 10.2.8.40:8443 check ssl verify none minconn 500
maxconn 500
server 10.2.8.41 10.2.8.41:8443 check ssl verify none minconn 500
maxconn 500

Regards,
Daniel

On 21.06.17 at 11:37, Igor Cicimov wrote:



On 21 Jun 2017 6:34 pm, "Daniel Heitepriem" 
wrote:

Nothing special. No errors, no dropped connections just an increased server
response time (Tr). An excerpt from low and high traffic times is below:

Jun 20 18:05:29 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.28
client_ip:193.XX.XX.XXX client_port:50876 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:157 Tq:95 Tw:0 Tc:2 Tr:60
Jun 20 18:05:29 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.41
client_ip:193.XX.XX.XXX client_port:32910 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:148 Tq:82 Tw:0 Tc:1 Tr:65
Jun 20 18:05:30 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.40
client_ip:193.XX.XX.XXX client_port:51077 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:525 Tq:312 Tw:0 Tc:2 Tr:211

Jun 20 22:05:36 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.28
client_ip:193.XX.XX.XXX client_port:48936 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:25368 Tq:101 Tw:0 Tc:3 Tr:25264
Jun 20 22:05:36 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.41
client_ip:193.XX.XX.XXX client_port:43030 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:23474 Tq:88 Tw:0 Tc:2 Tr:23383
Jun 20 22:05:36 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.40
client_ip:193.XX.XX.XXX client_port:18935 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:26150 Tq:106 Tw:0 Tc:3 Tr:26040


On 21.06.17 at 10:21, Igor Cicimov wrote:



On 21 Jun 2017 6:11 pm, "Daniel Heitepriem" 
wrote:

Hi Jarno,

yes we are decrypting TLS on the frontend (official SSL-certificate) and
re-encrypt it before sending it to the backend (company policy so not that
easy to change it to an unencrypted connection). The CPU usage is not
higher than 15-20% even during peak times and the memory usage is also
quite low (200-800MB).

Regards,
Daniel

On 21.06.17 at 10:00, Jarno Huuskonen wrote:

Hi,
>
> On Wed, Jun 21, Daniel Heitepriem wrote:
>
>> we got a problem recently which we can't explain to ourself. We got
>> a java application (Tomcat WAR-File) which has to handle several
>> million of requests per day and several thousand requests per second
>> during peak times. Due to this high amount we are splitting traffic
>> using an ACL in "booking traffic" and "availability traffic".
>> Booking traffic is negligible but the Availability traffic is
>> load-balanced over several application servers. The problem that
>> occurs is that our external partner "floods" the
>> Availability-Frontend with several thousand requests per second and
>> the backend becomes unresponsive. If we redirect them directly to
>>
> Looks like you're decrypting tls/ssl on frontend and then
> re-encrypting on backend/server. Is one core(you're not using nbproc?)
> able to handle thousand ssl requests coming in and going out ?
> (is haproxy process using 100% cpu).
>
> -Jarno
>
>
What do you see in the haproxy log when the problem happens?

Daniel, if using 

Re: 1.7.6 redirect regression (commit 73d071ecc84e0f26ebe1b9576fffc1ed0357ef32)

2017-06-21 Thread William Lallemand
On Wed, Jun 21, 2017 at 12:30:47PM +0300, Jarno Huuskonen wrote:
> Hi Christopher,
> 
> On Wed, Jun 21, Christopher Faulet wrote:
> > This bug was fixed in 1.8 (see commit
> > 9f724edbd8d1cf595d4177c3612607f395b4380e "BUG/MEDIUM: http: Drop the
> > connection establishment when a redirect is performed"). I attached
> > the patch. Could you quickly check if it fixes your bug (it should
> > do so) ?
> > 
> > It was not backported in 1.7 because we thought it only affected the
> > 1.8. I will check with Willy.
> 
> Thanks, patch fixes the problem (with test config (I'll try to
> test with prod. config later)).
> 
> -Jarno
> 

Thanks for tests, I will backport it in the 1.7 branch.

-- 
William Lallemand



Re: HAProxy makes backend unresponsive when handling multiple thousand connections per second

2017-06-21 Thread Daniel Heitepriem

Hi Igor,

the config is set to "mode http" (see below); only the log output is set 
to "tcplog" to get a more detailed log output. Please correct me if I'm 
wrong, but according to the config, HTTP mode is (or at least should be) 
used.


defaults
log global
option tcplog
log-format %f\ %b/%s\ client_ip:%ci\ client_port:%cp\ 
SSL_version:%sslv\ SSL_cypher:%sslc\ %ts\ Tt:%Tt\ Tq:%Tq\ Tw:%Tw\ 
Tc:%Tc\ Tr:%Tr

mode http
timeout connect 5000
timeout check 5000
timeout client 3
timeout server 3
retries 3

frontend ndc
http-response set-header Strict-Transport-Security 
max-age=31536000;\ includeSubdomains;\ preload

http-response set-header X-Content-Type-Options nosniff

bind *:443 ssl crt /opt/etc/haproxy/domain_com.pem force-tlsv12 
no-sslv3

maxconn 2

acl fare_availability path_beg /ndc/fare/v1/availability
acl flight_availability path_beg /ndc/flight/v1/availability
use_backend vakanz-backend if flight_availability or fare_availability
default_backend booking-backend

backend booking-backend
server 10.2.8.28 10.2.8.23:8443 check ssl verify none minconn 500 
maxconn 500


backend vakanz-backend
server 10.2.8.28 10.2.8.28:8443 check ssl verify none minconn 500 
maxconn 500
server 10.2.8.40 10.2.8.40:8443 check ssl verify none minconn 500 
maxconn 500
server 10.2.8.41 10.2.8.41:8443 check ssl verify none minconn 500 
maxconn 500


Regards,
Daniel

On 21.06.17 at 11:37, Igor Cicimov wrote:



On 21 Jun 2017 6:34 pm, "Daniel Heitepriem" 
> 
wrote:


Nothing special. No errors, no dropped connections just an
increased server response time (Tr). An excerpt from low and high
traffic times is below:

Jun 20 18:05:29 localhost haproxy[13426]: ndc
vakanz-backend/10.2.8.28 
client_ip:193.XX.XX.XXX client_port:50876 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:157 Tq:95 Tw:0 Tc:2 Tr:60
Jun 20 18:05:29 localhost haproxy[13426]: ndc
vakanz-backend/10.2.8.41 
client_ip:193.XX.XX.XXX client_port:32910 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:148 Tq:82 Tw:0 Tc:1 Tr:65
Jun 20 18:05:30 localhost haproxy[13426]: ndc
vakanz-backend/10.2.8.40 
client_ip:193.XX.XX.XXX client_port:51077 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:525 Tq:312 Tw:0 Tc:2 Tr:211

Jun 20 22:05:36 localhost haproxy[13426]: ndc
vakanz-backend/10.2.8.28 
client_ip:193.XX.XX.XXX client_port:48936 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:25368 Tq:101 Tw:0 Tc:3
Tr:25264
Jun 20 22:05:36 localhost haproxy[13426]: ndc
vakanz-backend/10.2.8.41 
client_ip:193.XX.XX.XXX client_port:43030 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:23474 Tq:88 Tw:0 Tc:2
Tr:23383
Jun 20 22:05:36 localhost haproxy[13426]: ndc
vakanz-backend/10.2.8.40 
client_ip:193.XX.XX.XXX client_port:18935 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:26150 Tq:106 Tw:0 Tc:3
Tr:26040


On 21.06.17 at 10:21, Igor Cicimov wrote:



On 21 Jun 2017 6:11 pm, "Daniel Heitepriem"
> wrote:

Hi Jarno,

yes we are decrypting TLS on the frontend (official
SSL-certificate) and re-encrypt it before sending it to the
backend (company policy so not that easy to change it to an
unencrypted connection). The CPU usage is not higher than
15-20% even during peak times and the memory usage is also
quite low (200-800MB).

Regards,
Daniel

On 21.06.17 at 10:00, Jarno Huuskonen wrote:

Hi,

On Wed, Jun 21, Daniel Heitepriem wrote:

we got a problem recently which we can't explain to
ourself. We got
a java application (Tomcat WAR-File) which has to
handle several
million of requests per day and several thousand
requests per second
during peak times. Due to this high amount we are
splitting traffic
using an ACL in "booking traffic" and "availability
traffic".
Booking traffic is negligible but the Availability
traffic is
load-balanced over several application servers. The
problem that
occurs is that our external partner "floods" the
Availability-Frontend with several thousand requests
per second and
the backend becomes unresponsive. If we redirect them
directly to

Looks like you're decrypting tls/ssl on frontend and 

Re: HAProxy makes backend unresponsive when handling multiple thousand connections per second

2017-06-21 Thread Igor Cicimov
On 21 Jun 2017 6:34 pm, "Daniel Heitepriem" 
wrote:

Nothing special. No errors, no dropped connections just an increased server
response time (Tr). An excerpt from low and high traffic times is below:

Jun 20 18:05:29 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.28
client_ip:193.XX.XX.XXX client_port:50876 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:157 Tq:95 Tw:0 Tc:2 Tr:60
Jun 20 18:05:29 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.41
client_ip:193.XX.XX.XXX client_port:32910 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:148 Tq:82 Tw:0 Tc:1 Tr:65
Jun 20 18:05:30 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.40
client_ip:193.XX.XX.XXX client_port:51077 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:525 Tq:312 Tw:0 Tc:2 Tr:211

Jun 20 22:05:36 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.28
client_ip:193.XX.XX.XXX client_port:48936 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:25368 Tq:101 Tw:0 Tc:3 Tr:25264
Jun 20 22:05:36 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.41
client_ip:193.XX.XX.XXX client_port:43030 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:23474 Tq:88 Tw:0 Tc:2 Tr:23383
Jun 20 22:05:36 localhost haproxy[13426]: ndc vakanz-backend/10.2.8.40
client_ip:193.XX.XX.XXX client_port:18935 SSL_version:TLSv1.2
SSL_cypher:DHE-RSA-AES256-GCM-SHA384 -- Tt:26150 Tq:106 Tw:0 Tc:3 Tr:26040


On 21.06.17 at 10:21, Igor Cicimov wrote:



On 21 Jun 2017 6:11 pm, "Daniel Heitepriem" 
wrote:

Hi Jarno,

yes we are decrypting TLS on the frontend (official SSL-certificate) and
re-encrypt it before sending it to the backend (company policy so not that
easy to change it to an unencrypted connection). The CPU usage is not
higher than 15-20% even during peak times and the memory usage is also
quite low (200-800MB).

Regards,
Daniel

On 21.06.17 at 10:00, Jarno Huuskonen wrote:

Hi,
>
> On Wed, Jun 21, Daniel Heitepriem wrote:
>
>> we got a problem recently which we can't explain to ourself. We got
>> a java application (Tomcat WAR-File) which has to handle several
>> million of requests per day and several thousand requests per second
>> during peak times. Due to this high amount we are splitting traffic
>> using an ACL in "booking traffic" and "availability traffic".
>> Booking traffic is negligible but the Availability traffic is
>> load-balanced over several application servers. The problem that
>> occurs is that our external partner "floods" the
>> Availability-Frontend with several thousand requests per second and
>> the backend becomes unresponsive. If we redirect them directly to
>>
> Looks like you're decrypting tls/ssl on frontend and then
> re-encrypting on backend/server. Is one core(you're not using nbproc?)
> able to handle thousand ssl requests coming in and going out ?
> (is haproxy process using 100% cpu).
>
> -Jarno
>
>
What do you see in the haproxy log when the problem happens?


-- 
Mit freundlichen Gruessen / Best regards
Daniel Heitepriem

pribas GmbH

Valterweg 24-25
65817 Eppstein-Bremthal
Germany

Phone  +49 (0) 6198 57146400
Fax   +49 (0) 6198 57146433
eMail   daniel.heitepr...@pribas.com

Corporate Headquarters: Huenfelden-Dauborn Managing Director: Arnulf Pribas
Registration: Amtsgericht Limburg a. d. Lahn 7HRB874 Tax ID: DE113840457

This e-mail is confidential. Information in this e-mail is intended for the
exclusive use of the individual or entity named above and may constitute
information that is privileged or confidential or otherwise protected from
disclosure. The information in this e-mail may be read, published, copied
and/or forwarded only by the individual or entity named above.
Dissemination, distribution, forwarding or copying of this e-mail by anyone
other than the intended recipient is prohibited. If you have received this
e-mail in error, please notify us immediately by telephone or e-mail and
completely delete or destroy any and all disseminated, distributed,
forwarded electronic or other copies of the original message and any
attachments.

Daniel, if using ssl to the backends shouldn't you use http mode? Per your
config you are using tcp, which is the default one. Afaik tcp is for ssl
passthrough.


Re: 1.7.6 redirect regression (commit 73d071ecc84e0f26ebe1b9576fffc1ed0357ef32)

2017-06-21 Thread Jarno Huuskonen
Hi Christopher,

On Wed, Jun 21, Christopher Faulet wrote:
> This bug was fixed in 1.8 (see commit
> 9f724edbd8d1cf595d4177c3612607f395b4380e "BUG/MEDIUM: http: Drop the
> connection establishment when a redirect is performed"). I attached
> the patch. Could you quickly check if it fixes your bug (it should
> do so) ?
> 
> It was not backported in 1.7 because we thought it only affected the
> 1.8. I will check with Willy.

Thanks, patch fixes the problem (with test config (I'll try to
test with prod. config later)).

-Jarno

-- 
Jarno Huuskonen



Re: 1.7.6 redirect regression (commit 73d071ecc84e0f26ebe1b9576fffc1ed0357ef32)

2017-06-21 Thread Christopher Faulet

On 21/06/2017 at 07:27, Jarno Huuskonen wrote:

Hi,

1.7.6 gives me errors (in log) with redirect rules. Example config that
produces 503 errors in logs and curl -v complains:
< HTTP/1.1 301 Moved Permanently
< Content-length: 0
< Location: https://127.0.0.1:8080/
<
* Excess found in a non pipelined read: excess = 212 url = /
* (zero-length body)
* Connection #0 to host 127.0.0.1 left intact

Example config:
frontend test
 bind ipv4@127.0.0.1:8080

 redirect scheme https code 301 if { dst_port 8080 }
 # this also gives 503
 #http-request redirect scheme https code 301 if { dst_port 8080 }
 default_backend test_be2

backend test_be2
 server wp1 ip.add.re.ss:80 id 1

(I have quite a few redirect rules in prod. config and all seem to
produce 503 in logs).

git bisect gives this commit as "bad":

commit 73d071ecc84e0f26ebe1b9576fffc1ed0357ef32
BUG/MINOR: http: Fix conditions to clean up a txn and to handle the next req

If I revert this commit then the example config gives 301 in log and
curl doesn't complain about read.

-Jarno



Hi Jarno,

This bug was fixed in 1.8 (see commit 
9f724edbd8d1cf595d4177c3612607f395b4380e "BUG/MEDIUM: http: Drop the 
connection establishment when a redirect is performed"). I attached the 
patch. Could you quickly check if it fixes your bug (it should do so) ?


It was not backported in 1.7 because we thought it only affected the 
1.8. I will check with Willy.


Thanks.
--
Christopher Faulet
commit 9f724edbd8d1cf595d4177c3612607f395b4380e
Author: Christopher Faulet 
Date:   Thu Apr 20 14:16:13 2017 +0200

BUG/MEDIUM: http: Drop the connection establishment when a redirect is performed

This bug occurs when a redirect rule is applied during the request analysis on a
persistent connection, on a proxy without any server. This means, in a frontend
section or in a listen/backend section with no "server" line.

Because the transaction processing is shortened, no server can be selected to
perform the connection. So if we try to establish it, this fails and a 503 error
is returned, while a 3XX was already sent. So, in this case, HAProxy generates 2
replies and only the first one is expected.

Here is the configuration snippet to easily reproduce the problem:

listen www
bind :8080
mode http
timeout connect 5s
timeout client 3s
timeout server 6s
redirect location /

A simple HTTP/1.1 request without body will trigger the bug:

$ telnet 0 8080
Trying 0.0.0.0...
Connected to 0.
Escape character is '^]'.
GET / HTTP/1.1

HTTP/1.1 302 Found
Cache-Control: no-cache
Content-length: 0
Location: /

HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html

503 Service Unavailable
No server is available to handle this request.

Connection closed by foreign host.

[wt: only 1.8-dev is impacted though the bug is present in older ones]

diff --git a/src/proto_http.c b/src/proto_http.c
index 24d034a..6c940f1 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -4261,6 +4261,8 @@ static int http_apply_redirect_rule(struct redirect_rule *rule, struct stream *s
 		/* Trim any possible response */
 		res->chn->buf->i = 0;
 		res->next = res->sov = 0;
+		/* If not already done, don't perform any connection establishment */
+		channel_dont_connect(req->chn);
 	} else {
 		/* keep-alive not possible */
 		if (unlikely(txn->flags & TX_USE_PX_CONN)) {


Re: HAProxy makes backend unresponsive when handling multiple thousand connections per second

2017-06-21 Thread Igor Cicimov
On 21 Jun 2017 6:11 pm, "Daniel Heitepriem" 
wrote:

Hi Jarno,

yes we are decrypting TLS on the frontend (official SSL-certificate) and
re-encrypt it before sending it to the backend (company policy so not that
easy to change it to an unencrypted connection). The CPU usage is not
higher than 15-20% even during peak times and the memory usage is also
quite low (200-800MB).

Regards,
Daniel

On 21.06.17 at 10:00, Jarno Huuskonen wrote:

Hi,
>
> On Wed, Jun 21, Daniel Heitepriem wrote:
>
>> we got a problem recently which we can't explain to ourself. We got
>> a java application (Tomcat WAR-File) which has to handle several
>> million of requests per day and several thousand requests per second
>> during peak times. Due to this high amount we are splitting traffic
>> using an ACL in "booking traffic" and "availability traffic".
>> Booking traffic is negligible but the Availability traffic is
>> load-balanced over several application servers. The problem that
>> occurs is that our external partner "floods" the
>> Availability-Frontend with several thousand requests per second and
>> the backend becomes unresponsive. If we redirect them directly to
>>
> Looks like you're decrypting tls/ssl on frontend and then
> re-encrypting on backend/server. Is one core(you're not using nbproc?)
> able to handle thousand ssl requests coming in and going out ?
> (is haproxy process using 100% cpu).
>
> -Jarno
>
>
What do you see in the haproxy log when the problem happens?


Re: HAProxy makes backend unresponsive when handling multiple thousand connections per second

2017-06-21 Thread Daniel Heitepriem

Hi Jarno,

yes, we are decrypting TLS on the frontend (official SSL certificate) and 
re-encrypting it before sending it to the backend (company policy, so it is 
not that easy to change this to an unencrypted connection). The CPU usage is 
not higher than 15-20% even during peak times, and the memory usage is 
also quite low (200-800MB).


Regards,
Daniel

On 21.06.17 at 10:00, Jarno Huuskonen wrote:

Hi,

On Wed, Jun 21, Daniel Heitepriem wrote:

we got a problem recently which we can't explain to ourself. We got
a java application (Tomcat WAR-File) which has to handle several
million of requests per day and several thousand requests per second
during peak times. Due to this high amount we are splitting traffic
using an ACL in "booking traffic" and "availability traffic".
Booking traffic is negligible but the Availability traffic is
load-balanced over several application servers. The problem that
occurs is that our external partner "floods" the
Availability-Frontend with several thousand requests per second and
the backend becomes unresponsive. If we redirect them directly to

Looks like you're decrypting tls/ssl on frontend and then
re-encrypting on backend/server. Is one core(you're not using nbproc?)
able to handle thousand ssl requests coming in and going out ?
(is haproxy process using 100% cpu).

-Jarno





Re: HAProxy makes backend unresponsive when handling multiple thousand connections per second

2017-06-21 Thread Jarno Huuskonen
Hi,

On Wed, Jun 21, Daniel Heitepriem wrote:
> we got a problem recently which we can't explain to ourself. We got
> a java application (Tomcat WAR-File) which has to handle several
> million of requests per day and several thousand requests per second
> during peak times. Due to this high amount we are splitting traffic
> using an ACL in "booking traffic" and "availability traffic".
> Booking traffic is negligible but the Availability traffic is
> load-balanced over several application servers. The problem that
> occurs is that our external partner "floods" the
> Availability-Frontend with several thousand requests per second and
> the backend becomes unresponsive. If we redirect them directly to

Looks like you're decrypting tls/ssl on frontend and then
re-encrypting on backend/server. Is one core(you're not using nbproc?)
able to handle thousand ssl requests coming in and going out ?
(is haproxy process using 100% cpu).

-Jarno

-- 
Jarno Huuskonen



Re: HAProxy makes backend unresponsive when handling multiple thousand connections per second

2017-06-21 Thread Benjamin Lee

Sounds like ssl connections are not being reused between haproxy and tomcat. 
Can you send some netstat monitoring metrics showing TCP handshakes and 
TIME_WAIT / CLOSE_WAIT counts over time?
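
Something along these lines, run periodically, would already help (port
8443 assumed to be the backend port, adjust as needed):

# count backend-side TCP connection states towards the Tomcat servers
netstat -an | grep 8443 | awk '{print $NF}' | sort | uniq -c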

--
Benjamin Lee
+61 4 16 BEN LEE

> On 21 Jun 2017, at 17:15, Daniel Heitepriem  
> wrote:
> 
> Hi everyone,
> 
> we got a problem recently which we can't explain to ourself. We got a java 
> application (Tomcat WAR-File) which has to handle several million of requests 
> per day and several thousand requests per second during peak times. Due to 
> this high amount we are splitting traffic using an ACL in "booking traffic" 
> and "availability traffic". Booking traffic is negligible but the 
> Availability traffic is load-balanced over several application servers. The 
> problem that occurs is that our external partner "floods" the 
> Availability-Frontend with several thousand requests per second and the 
> backend becomes unresponsive. If we redirect them directly to our 
> Tomcat-Instance via Firewall-Rules without passing through HAProxy everything 
> is fine. The Tomcat instances have "maxThreads=1024" and "acceptCount=500" as 
> their main connector settings so this shouldn't interfere with the HAProxy 
> configuration.
> 
> Our HAProxy configuration running on Solaris 11 64-bit:
> 
> HA-Proxy version 1.7.5 2017/04/03
> Copyright 2000-2017 Willy Tarreau 
> 
> Build options :
>   TARGET  = solaris
>   CPU = generic
>   CC  = gcc
>   CFLAGS  = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing 
> -Wdeclaration-after-statement -fomit-frame-pointer -DFD_SETSIZE=65536 
> -D_REENTRANT
>   OPTIONS = USE_TPROXY=1 USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1
> 
> Default settings :
>   maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
> 
> Encrypted password support via crypt(3): yes
> Built with zlib version : 1.2.8-T4mods
> Running on zlib version : 1.2.11
> Compression algorithms supported : identity("identity"), deflate("deflate"), 
> raw-deflate("deflate"), gzip("gzip")
> Running on OpenSSL version : OpenSSL 1.0.2k  26 Jan 2017
> Running on OpenSSL version : OpenSSL 1.0.2k  26 Jan 2017
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports prefer-server-ciphers : yes
> Built with PCRE version : 8.39 2016-06-14
> Running on PCRE version : 8.39 2016-06-14
> PCRE library supports JIT : no (USE_PCRE_JIT not set)
> Built without Lua support
> 
> Available polling systems :
>poll : pref=200,  test result OK
>  select : pref=150,  test result OK
> Total: 2 (2 usable), will use poll.
> 
> Available filters :
> [SPOE] spoe
> [TRACE] trace
> [COMP] compression
> ---
> global
> log 127.0.0.1:514 local0 debug
> daemon
> maxconn 5
> stats socket /opt/etc/haproxy/haproxy.sock mode 600 level admin
> stats timeout 2m #Wait up to 2 minutes for input
> tune.ssl.default-dh-param 2048
> ulimit-n 2
> 
> 
> ssl-default-server-options no-sslv3 no-tls-tickets
> ssl-default-bind-ciphers 
> EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:AES128+EECDH:AES128+EDH
> 
> defaults
> log global
> option tcplog
> log-format %f\ %b/%s\ client_ip:%ci\ client_port:%cp\ SSL_version:%sslv\ 
> SSL_cypher:%sslc\ %ts\ Tt:%Tt\ Tq:%Tq\ Tw:%Tw\ Tc:%Tc\ Tr:%Tr
> mode http
> timeout connect 5000
> timeout check 5000
> timeout client 3
> timeout server 3
> retries 3
> 
> frontend ndc
> http-response set-header Strict-Transport-Security max-age=31536000;\ 
> includeSubdomains;\ preload
> http-response set-header X-Content-Type-Options nosniff
> 
> bind *:443 ssl crt /opt/etc/haproxy/domain_com.pem force-tlsv12 no-sslv3
> maxconn 2
> 
> acl fare_availability path_beg /ndc/fare/v1/availability
> acl flight_availability path_beg /ndc/flight/v1/availability
> use_backend vakanz-backend if flight_availability or fare_availability
> default_backend booking-backend
> 
> backend booking-backend
> server 10.2.8.28 10.2.8.23:8443 check ssl verify none minconn 500 maxconn 
> 500
> 
> backend vakanz-backend
> server 10.2.8.28 10.2.8.28:8443 check ssl verify none minconn 500 maxconn 
> 500
> server 10.2.8.40 10.2.8.40:8443 check ssl verify none minconn 500 maxconn 
> 500
> server 10.2.8.41 10.2.8.41:8443 check ssl verify none minconn 500 maxconn 
> 500
> 
> Hopefully somebody can shed some light if we got a bad configuration and how 
> we could troubleshoot this issue.
> 
> Thanks and regards,
> Daniel


HAProxy makes backend unresponsive when handling multiple thousand connections per second

2017-06-21 Thread Daniel Heitepriem

Hi everyone,

we got a problem recently which we can't explain to ourselves. We have a 
Java application (Tomcat WAR file) which has to handle several million 
requests per day and several thousand requests per second during peak 
times. Due to this high volume we are splitting traffic using an ACL into 
"booking traffic" and "availability traffic". Booking traffic is 
negligible, but the availability traffic is load-balanced over several 
application servers. The problem that occurs is that our external 
partner "floods" the Availability-Frontend with several thousand 
requests per second and the backend becomes unresponsive. If we redirect 
them directly to our Tomcat instance via firewall rules, without passing 
through HAProxy, everything is fine. The Tomcat instances have 
maxThreads=1024 and acceptCount=500 as their main connector 
settings, so this shouldn't interfere with the HAProxy configuration.


Our HAProxy configuration running on Solaris 11 64-bit:

HA-Proxy version 1.7.5 2017/04/03
Copyright 2000-2017 Willy Tarreau 

Build options :
  TARGET  = solaris
  CPU = generic
  CC  = gcc
  CFLAGS  = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing 
-Wdeclaration-after-statement -fomit-frame-pointer -DFD_SETSIZE=65536 
-D_REENTRANT

  OPTIONS = USE_TPROXY=1 USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.8-T4mods
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), 
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")

Running on OpenSSL version : OpenSSL 1.0.2k  26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k  26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.39 2016-06-14
Running on PCRE version : 8.39 2016-06-14
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built without Lua support

Available polling systems :
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 2 (2 usable), will use poll.

Available filters :
[SPOE] spoe
[TRACE] trace
[COMP] compression
---
global
log 127.0.0.1:514 local0 debug
daemon
maxconn 5
stats socket /opt/etc/haproxy/haproxy.sock mode 600 level admin
stats timeout 2m #Wait up to 2 minutes for input
tune.ssl.default-dh-param 2048
ulimit-n 2


ssl-default-server-options no-sslv3 no-tls-tickets
ssl-default-bind-ciphers 
EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:AES128+EECDH:AES128+EDH


defaults
log global
option tcplog
log-format %f\ %b/%s\ client_ip:%ci\ client_port:%cp\ 
SSL_version:%sslv\ SSL_cypher:%sslc\ %ts\ Tt:%Tt\ Tq:%Tq\ Tw:%Tw\ 
Tc:%Tc\ Tr:%Tr

mode http
timeout connect 5000
timeout check 5000
timeout client 3
timeout server 3
retries 3

frontend ndc
http-response set-header Strict-Transport-Security 
max-age=31536000;\ includeSubdomains;\ preload

http-response set-header X-Content-Type-Options nosniff

bind *:443 ssl crt /opt/etc/haproxy/domain_com.pem force-tlsv12 
no-sslv3

maxconn 2

acl fare_availability path_beg /ndc/fare/v1/availability
acl flight_availability path_beg /ndc/flight/v1/availability
use_backend vakanz-backend if flight_availability or fare_availability
default_backend booking-backend

backend booking-backend
server 10.2.8.28 10.2.8.23:8443 check ssl verify none minconn 500 
maxconn 500


backend vakanz-backend
server 10.2.8.28 10.2.8.28:8443 check ssl verify none minconn 500 
maxconn 500
server 10.2.8.40 10.2.8.40:8443 check ssl verify none minconn 500 
maxconn 500
server 10.2.8.41 10.2.8.41:8443 check ssl verify none minconn 500 
maxconn 500


Hopefully somebody can shed some light on whether we have a bad configuration 
and how we could troubleshoot this issue.


Thanks and regards,
Daniel


Re: Trouble getting rid of Connection Keep-Alive header

2017-06-21 Thread Lukas Tribus
Hello Mats,


On 21.06.2017 at 07:59, Mats Eklund wrote:
>
>
> Hi,
>
>
> I am running a load balanced Tomcat application on Openshift Online v2, with 
> HAProxy ver. 1.4.22 as load balancer.
>
>
> I would like to have HTTP connections closed after each response is returned. 
> But am unable to make the response contain the corresponding response headers 
> (i.e. "Connection: close").
>
>
> I have tried adding "option httpclose" in the HAProxy config file, but still 
> the response headers contains "Connection: keep-alive" and a "Keep-Alive: 
> timeout=15, max=100".
>
>
> I have even tried "rspdel ^Connection:\ .*" but still the response contains 
> the above mentioned headers. (That I'm editing the right config file I have 
> verified by successfully adding a rspadd instruction).
>
>
> Any advice is much appreciated!
>

We are gonna need the full configuration.

Is this http/1.1 or http/1.0 on the front and backend?
Confirm that you are running in  http mode ("mode http") please.


Regards,
Lukas




Re: LoadBalance whole subnet

2017-06-21 Thread Aleksandar Lazic
Hi William Lallemand,

William Lallemand wrote on 20.06.2017:

> On Tue, Jun 20, 2017 at 12:49:32PM +0200, Aleksandar Lazic wrote:
>> Hi Anamarija.
>> 
>> ?!
>> 
>> Do you plan to make the mailing list out of support?!
>> 
>> Best Regards
>> aleks
>> 

> Hi Aleksandar,

> Don't worry that's a mistake, Sarunas put cont...@haproxy.com in copy to his
> mail which lead to this.

> Please don't continue this thread on the mailing list, thanks.


Well, I assume I understand you.

-- 
Best Regards
Aleks




Trouble getting rid of Connection Keep-Alive header

2017-06-21 Thread Mats Eklund

Hi,


I am running a load balanced Tomcat application on Openshift Online v2, with 
HAProxy ver. 1.4.22 as load balancer.


I would like to have HTTP connections closed after each response is returned. 
But am unable to make the response contain the corresponding response headers 
(i.e. "Connection: close").


I have tried adding "option httpclose" in the HAProxy config file, but the 
response headers still contain "Connection: keep-alive" and "Keep-Alive: 
timeout=15, max=100".


I have even tried "rspdel ^Connection:\ .*" but the response still contains the 
above-mentioned headers. (I have verified that I'm editing the right config file 
by successfully adding a rspadd instruction.)


Any advice is much appreciated!


/ Mats