Re: redispatch still cause error response

2018-01-31 Thread flamesea12
Thanks for the reply. Is there any plan to support this requirement?

If a backend server gets killed while processing a request, can haproxy
re-forward the request to another backend server?

Thanks



- Original Message -
>From: Lukas Tribus 
>To: flamese...@yahoo.co.jp 
>Cc: "haproxy@formilux.org" 
>Date: 2018/1/31, Wed 17:24
>Subject: Re: redispatch still cause error response
> 
>Hello,
>
>On 31 January 2018 at 03:00,   wrote:
>> Hello,
>>
>> What exactly does option redispatch do?
>
>
>As per the documentation:
>http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-option%20redispatch
>
>"In HTTP mode, if a server designated by a cookie is down, clients may
>definitely stick to it because they cannot flush the cookie, so they
>will not be able to access the service anymore.
>
>Specifying "option redispatch" will allow the proxy to break their
>persistence and redistribute them to a working server."
>
>
>This is about breaking cookie persistence.
>
>
>
>> I have a haproxy in front of two web servers, if one web server get killed
>> when haproxy is forwarding a request to that server,
>>
>> will this request get re-forwarded to another server?
>
>No. Haproxy will never resend an HTTP request that has already been
>sent. If you break a backend with in-flight HTTP transactions, those
>transactions are supposed to break with it, and that's exactly what
>haproxy does.
>
>
>
>Regards,
>Lukas
>
>
>
>

Configuring HAproxy to Mbed tls implementation of TLS

2018-01-31 Thread Mariam Abboush
Hello dear HAProxy staff,


How can I configure HAProxy to use a specific TLS implementation, for
example "Mbed TLS", which is a security library dedicated to embedded
systems?

Thanks in advance


Mariam Abboush


Re: haproxy http2 benchmark

2018-01-31 Thread Willy Tarreau
Hi,

On Wed, Jan 31, 2018 at 10:41:44AM +0800, ??? wrote:
> hi all,
> recently we are ready to upgrade to haproxy 1.8; however, when testing
> HTTP/2, we found a drop in performance. Below is the test scenario:
(...)
> Using h2load, I tested HTTP/1.1 and HTTP/2 respectively, with three
> sets of data in total; haproxy reached 100% CPU.
> group 1:
> 
>   h2load -n1000000 -c20 -m5 https://10.172.144.113:1999/128
> 
>   starting benchmark...
>   spawning thread #0: 20 total client(s). 1000000 total requests
>   TLS Protocol: TLSv1.2
>   Cipher: ECDHE-RSA-AES256-GCM-SHA384
>   Application protocol: h2
>   ..
> 
>   finished in 86.23s, 11596.77 req/s, 2.90MB/s
(...)
>  group 2:
> 
>   h2load -n1000000 -c20 -m1 https://10.172.144.113:1999/128 --h1
>   starting benchmark...
>   spawning thread #0: 20 total client(s). 1000000 total requests
>   TLS Protocol: TLSv1.2
>   Cipher: ECDHE-RSA-AES256-GCM-SHA384
>   Application protocol: http/1.1
>   ..
> 
>   finished in 73.72s, 13564.36 req/s, 4.42MB/s
(...)
>   group 3:
> 
>h2load -n1000000 -c100 -m1 https://10.172.144.113:1999/128 --h1
>starting benchmark...
>spawning thread #0: 100 total client(s). 1000000 total requests
>TLS Protocol: TLSv1.2
>Cipher: ECDHE-RSA-AES256-GCM-SHA384
>Application protocol: http/1.1
>..
> 
>finished in 67.84s, 14739.69 req/s, 4.81MB/s
(...)
> Is this phenomenon normal? Or is my way of using it wrong?

"Normal" isn't the exact word, but I'd say it's reasonably expected.

The main difference between the H1 and H2 tests is that when H2 is used
on the frontend, we can't yet reuse the connection on the backend, so
you're working exactly in the same situation as if you were running with
"option http-server-close". You may be interested in doing this test by
the way, just to compare similar stuff.
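For that comparison, a minimal config sketch of the test Willy suggests (the bind address, certificate path and server address are placeholders, not taken from the original setup):

```
defaults
    mode http
    # Close the server-side connection after each response, which is the
    # behaviour the H2 frontend currently forces on the backend anyway
    option http-server-close
    timeout client  30s
    timeout connect 5s
    timeout server  30s

frontend fe
    bind *:1999 ssl crt /path/to/cert.pem alpn h2,http/1.1
    default_backend be

backend be
    server s1 192.0.2.10:8000 check
```

Running the H1 benchmark against a frontend configured this way should produce numbers directly comparable to the H2 results, since both then pay the cost of a fresh backend connection per request.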

Willy



Re: redispatch still cause error response

2018-01-31 Thread Lukas Tribus
Hello,

On 31 January 2018 at 03:00,   wrote:
> Hello,
>
> What exactly does option redispatch do?


As per the documentation:
http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-option%20redispatch

"In HTTP mode, if a server designated by a cookie is down, clients may
definitely stick to it because they cannot flush the cookie, so they
will not be able to access the service anymore.

Specifying "option redispatch" will allow the proxy to break their
persistence and redistribute them to a working server."


This is about breaking cookie persistence.
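As a concrete illustration of the case the documentation describes, a minimal sketch of a backend combining cookie persistence with "option redispatch" (server names and addresses are made up):

```
backend app
    mode http
    balance roundrobin
    # Persist each client to one server via an inserted cookie
    cookie SRV insert indirect nocache
    # If the persisted server is down, break persistence and
    # redistribute the client to a working server
    option redispatch
    retries 3
    server s1 192.0.2.10:8000 check cookie s1
    server s2 192.0.2.11:8000 check cookie s2
```

Without "option redispatch", a client holding the SRV cookie for a dead server would keep being sent to it until the cookie expires.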



> I have a haproxy in front of two web servers, if one web server get killed
> when haproxy is forwarding a request to that server,
>
> will this request get re-forwarded to another server?

No. Haproxy will never resend an HTTP request that has already been
sent. If you break a backend with in-flight HTTP transactions, those
transactions are supposed to break with it, and that's exactly what
haproxy does.



Regards,
Lukas



Re: redispatch still cause error response

2018-01-31 Thread Cyril Bonté

Hi,

On 31/01/2018 at 03:00, flamese...@yahoo.co.jp wrote:

Hello,

What exactly does option redispatch do?


It gives a chance to retry on another server if the connection can't be 
established to the one chosen by the load-balancing algorithm. It's 
important to understand that it concerns only the connection 
establishment step. See below for details.


I have a haproxy in front of two web servers. If one web server gets 
killed while haproxy is forwarding a request to that server,

will this request get re-forwarded to another server?

So I did a test.

On one server, I run

ruby -run -ehttpd . -p8001

and

ruby -run -ehttpd . -p8000

and I start haproxy,

then run ab:

ab -n 2 -c 10 http://127.0.0.1:8080/

While ab was running, I killed the ruby server on port 8001, then 
started it again after some time.


here is the ab result:

Complete requests:  2
Failed requests:    5
    (Connect: 0, Receive: 0, Length: 5, Exceptions: 0)


And haproxy.log

grep -c GET haproxy.log
2

grep GET haproxy.log | grep -v 200
Jan 31 10:48:12 localhost haproxy[5948]: 127.0.0.1:45516 [31/Jan/2018:10:48:12.386] web1 app1/b 0/0/0/-1/5 -1 0 - - SD-- 10/10/8/3/0 0/0 "GET / HTTP/1.0"
Jan 31 10:48:12 localhost haproxy[5948]: 127.0.0.1:45538 [31/Jan/2018:10:48:12.391] web1 app1/b 0/0/1/-1/1 -1 0 - - SD-- 10/10/8/4/0 0/0 "GET / HTTP/1.0"
Jan 31 10:48:12 localhost haproxy[5948]: 127.0.0.1:45528 [31/Jan/2018:10:48:12.389] web1 app1/b 0/0/1/-1/3 -1 0 - - SD-- 9/9/8/3/0 0/0 "GET / HTTP/1.0"
Jan 31 10:48:12 localhost haproxy[5948]: 127.0.0.1:45524 [31/Jan/2018:10:48:12.388] web1 app1/b 0/0/1/-1/4 -1 0 - - SD-- 8/8/7/2/0 0/0 "GET / HTTP/1.0"
Jan 31 10:48:12 localhost haproxy[5948]: 127.0.0.1:45544 [31/Jan/2018:10:48:12.392] web1 app1/b 0/0/0/-1/1 -1 0 - - SD-- 7/7/6/0/0 0/0 "GET / HTTP/1.0"


Here, your logs indicate that the connection was already established and 
the request was also already sent to the server app1/b, but the 
connection was dropped in the middle (the server was stopped in the 
middle as you previously said). It's too late to redispatch a 
connection, so the behaviour is expected.
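To tally such failures across a larger log, the termination-state flags (the "SD--" field) can be counted directly; a sketch using the same grep-style tools as above, fed an inline sample so it is self-contained (field $15 matches the log format of the entries shown; on a real log you would run the awk command on haproxy.log instead):

```shell
# Count termination states (the 4-char flags field, $15 in this log format).
# On the real file: awk '{print $15}' haproxy.log | sort | uniq -c
awk '{print $15}' <<'EOF' | sort | uniq -c
Jan 31 10:48:12 localhost haproxy[5948]: 127.0.0.1:45516 [31/Jan/2018:10:48:12.386] web1 app1/b 0/0/0/-1/5 -1 0 - - SD-- 10/10/8/3/0 0/0 "GET / HTTP/1.0"
Jan 31 10:48:12 localhost haproxy[5948]: 127.0.0.1:45538 [31/Jan/2018:10:48:12.391] web1 app1/b 0/0/1/-1/1 -1 0 - - SD-- 10/10/8/4/0 0/0 "GET / HTTP/1.0"
EOF
```

"S" in the first position means the server aborted (or errored), and "D" in the second means it happened during the data phase, which is exactly the mid-transfer kill described above.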




Here is the config:

global
    maxconn 5000
    log 127.0.0.1 local2 debug
    daemon

defaults
    log global
    option httplog
    retries 3
    option redispatch
    timeout client  30s
    timeout connect 30s
    timeout server  30s
    timeout http-request 30s
    timeout http-keep-alive 30s

frontend web1
    bind *:8080
    mode http
    default_backend app1

backend app1
    balance roundrobin
    mode http
    server a 127.0.0.1:8000 check
    server b 127.0.0.1:8001 check
I have tried v1.8.3 and v1.7.10.

Am I missing something?

Thanks



--
Cyril Bonté