Re: redispatch still cause error response

2018-02-01 Thread flamesea12


Hello,

Thanks for the lucid explanation, I understand it now.


Anyway, I find that haproxy slows down requests to the live server when another 
server is down.

Using the same test as before:

I run ruby -run -ehttpd . -p8001 and ruby -run -ehttpd . -p8000, start haproxy, 
then run ab -n 2 -c 10 http://127.0.0.1:8080/

So both ruby servers start to log requests very fast (say 100 requests per 
second), meaning that haproxy forwards requests very fast.

Then I killed the ruby on 8001, and the strange thing is that ruby 8000's log 
becomes very slow (say 10 requests per second) for a while.

I understand that there will be some health checks against 8001, but why would 
that slow down requests to 8000?


Thanks


- Original Message -
>From: Lukas Tribus 
>To: flamese...@yahoo.co.jp 
>Cc: "haproxy@formilux.org" ; "cyril.bo...@free.fr" 
>; "lu...@ltri.eu" 
>Date: 2018/2/1, Thu 17:45
>Subject: Re: redispatch still cause error response
> 
>Hello,
>
>
>On 1 February 2018 at 04:43,   wrote:
>> Thanks for the reply. Are there any plans to support this requirement?
>>
>> If a backend server gets killed while processing a request, can haproxy
>> re-forward the request to another backend server?
>
>No, this is problematic for a number of reasons. First of all, this can
>only be done for idempotent methods, and even then it could still trigger
>application bugs. Then we would need to keep a copy of the request in
>memory until we know for sure the response is "valid". Then we would
>need to validate the response, where every user wants something
>different. Some may ask to resend the request to another server on a
>404 response ...
>
>So no, I don't see how this can easily be achieved.
>
>
>cheers,
>Lukas
>
>
>
>

Re: redispatch still cause error response

2018-02-01 Thread Lukas Tribus
Hello,


On 1 February 2018 at 04:43,   wrote:
> Thanks for the reply. Are there any plans to support this requirement?
>
> If a backend server gets killed while processing a request, can haproxy
> re-forward the request to another backend server?

No, this is problematic for a number of reasons. First of all, this can
only be done for idempotent methods, and even then it could still trigger
application bugs. Then we would need to keep a copy of the request in
memory until we know for sure the response is "valid". Then we would
need to validate the response, where every user wants something
different. Some may ask to resend the request to another server on a
404 response ...

So no, I don't see how this can easily be achieved.


cheers,
Lukas
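Lukas's three constraints above (idempotent method, a buffered copy of the request, and a response-validity policy) can be sketched outside haproxy. This is an illustrative model only, with made-up names; it is not anything haproxy implements:

```python
# Sketch of the replay-safety rules described above. All names here are
# invented for illustration; haproxy has no such feature.

# RFC 7231 idempotent methods: replaying them *should* not change the outcome.
IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

def may_replay(method: str, body_buffered: bool) -> bool:
    """A replay is only conceivable when the method is idempotent AND a
    full copy of the request is still buffered in memory."""
    return method.upper() in IDEMPOTENT_METHODS and body_buffered

# Even then, "is this response valid?" has no universal answer: one user
# may want to retry on a 404, another would consider that harmful.
```

POST is the classic counter-example: it carries side effects (e.g. "charge this card"), so replaying it on another server risks duplicating the action even if the first server died mid-request.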



Re: redispatch still cause error response

2018-01-31 Thread flamesea12
Thanks for the reply. Are there any plans to support this requirement?

If a backend server gets killed while processing a request, can haproxy 
re-forward the request to another backend server?

Thanks



- Original Message -
>From: Lukas Tribus 
>To: flamese...@yahoo.co.jp 
>Cc: "haproxy@formilux.org" 
>Date: 2018/1/31, Wed 17:24
>Subject: Re: redispatch still cause error response
> 
>Hello,
>
>On 31 January 2018 at 03:00,   wrote:
>> Hello,
>>
>> What exactly does option redispatch do?
>
>
>As per the documentation:
>http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-option%20redispatch
>
>"In HTTP mode, if a server designated by a cookie is down, clients may
>definitely stick to it because they cannot flush the cookie, so they
>will not be able to access the service anymore.
>
>Specifying "option redispatch" will allow the proxy to break their
>persistence and redistribute them to a working server."
>
>
>This is about breaking cookie persistence.
>
>
>
>> I have haproxy in front of two web servers. If one web server gets killed
>> while haproxy is forwarding a request to that server,
>>
>> will this request get re-forwarded to another server?
>
>No. Haproxy will never resend an HTTP request that has already been
>sent. If you break a backend with in-flight HTTP transactions, those
>transactions are supposed to break with it, and that's exactly what
>haproxy does.
>
>
>
>Regards,
>Lukas
>
>
>
>

Re: redispatch still cause error response

2018-01-31 Thread Lukas Tribus
Hello,

On 31 January 2018 at 03:00,   wrote:
> Hello,
>
> What exactly does option redispatch do?


As per the documentation:
http://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-option%20redispatch

"In HTTP mode, if a server designated by a cookie is down, clients may
definitely stick to it because they cannot flush the cookie, so they
will not be able to access the service anymore.

Specifying "option redispatch" will allow the proxy to break their
persistence and redistribute them to a working server."


This is about breaking cookie persistence.
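The documented case can be made concrete with a sketch of a backend using cookie persistence; the cookie name and servers below are invented for illustration:

```
backend app
    mode http
    balance roundrobin
    # Insert a persistence cookie named SRV so each client sticks to one server.
    cookie SRV insert indirect nocache
    # If the server named in a client's cookie is down, break persistence
    # and redistribute the client to a working server.
    option redispatch
    server a 127.0.0.1:8000 cookie a check
    server b 127.0.0.1:8001 cookie b check
```

Without "option redispatch", a client whose cookie names a dead server would keep being sent to it and see errors until the cookie expires.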



> I have haproxy in front of two web servers. If one web server gets killed
> while haproxy is forwarding a request to that server,
>
> will this request get re-forwarded to another server?

No. Haproxy will never resend an HTTP request that has already been
sent. If you break a backend with in-flight HTTP transactions, those
transactions are supposed to break with it, and that's exactly what
haproxy does.



Regards,
Lukas



Re: redispatch still cause error response

2018-01-31 Thread Cyril Bonté

Hi,

On 31/01/2018 at 03:00, flamese...@yahoo.co.jp wrote:

Hello,

What exactly does option redispatch do?


It gives haproxy a chance to retry on another server if the connection can't 
be established to the one chosen by the load-balancing algorithm. It's 
important to understand that it concerns only the connection 
establishment step. See below for details.


I have haproxy in front of two web servers. If one web server gets 
killed while haproxy is forwarding a request to that server,


will this request get re-forwarded to another server?

So I ran a test.

In one server, I run

ruby -run -ehttpd . -p8001

and

ruby -run -ehttpd . -p8000

and I start haproxy

then run ab:

ab -n 2 -c 10 http://127.0.0.1:8080/

while ab is running, I killed the ruby on 8001, then after some time I 
started it again.


here is the ab result:

Complete requests:  2
Failed requests:    5
    (Connect: 0, Receive: 0, Length: 5, Exceptions: 0)


And haproxy.log

grep -c GET haproxy.log
2

grep GET haproxy.log | grep -v 200
Jan 31 10:48:12 localhost haproxy[5948]: 127.0.0.1:45516 
[31/Jan/2018:10:48:12.386] web1 app1/b 0/0/0/-1/5 -1 0 - - SD-- 
10/10/8/3/0 0/0 "GET / HTTP/1.0"
Jan 31 10:48:12 localhost haproxy[5948]: 127.0.0.1:45538 
[31/Jan/2018:10:48:12.391] web1 app1/b 0/0/1/-1/1 -1 0 - - SD-- 
10/10/8/4/0 0/0 "GET / HTTP/1.0"
Jan 31 10:48:12 localhost haproxy[5948]: 127.0.0.1:45528 
[31/Jan/2018:10:48:12.389] web1 app1/b 0/0/1/-1/3 -1 0 - - SD-- 
9/9/8/3/0 0/0 "GET / HTTP/1.0"
Jan 31 10:48:12 localhost haproxy[5948]: 127.0.0.1:45524 
[31/Jan/2018:10:48:12.388] web1 app1/b 0/0/1/-1/4 -1 0 - - SD-- 
8/8/7/2/0 0/0 "GET / HTTP/1.0"
Jan 31 10:48:12 localhost haproxy[5948]: 127.0.0.1:45544 
[31/Jan/2018:10:48:12.392] web1 app1/b 0/0/0/-1/1 -1 0 - - SD-- 
7/7/6/0/0 0/0 "GET / HTTP/1.0"


Here, your logs indicate that the connection was already established and 
the request had already been sent to the server app1/b, but the 
connection was dropped in the middle (the server was stopped 
mid-request, as you previously said). It's too late to redispatch at 
that point, so the behaviour is expected.
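That diagnosis is visible in the log fields themselves: in the httplog format, the timers field is Tq/Tw/Tc/Tr/Tt and the four-letter termination state follows the status code and byte count. A small sketch (a hypothetical helper, not part of haproxy) that extracts the two telling fields from one of the lines above:

```python
# Sketch: decode the two telling fields from the haproxy httplog lines above.
# This helper is hypothetical and its parsing is simplified for exactly
# this log layout (syslog prefix + httplog fields).

def decode(line: str) -> dict:
    parts = line.split()
    # parts[9] holds the timers Tq/Tw/Tc/Tr/Tt, e.g. "0/0/0/-1/5".
    tq, tw, tc, tr, tt = (int(x) for x in parts[9].split("/"))
    # parts[14] holds the termination state, e.g. "SD--".
    flags = parts[14]
    return {
        # Tr == -1: a complete response was never received from the server.
        "response_received": tr != -1,
        # 'S': session aborted on the server side; 'D': during the DATA
        # phase, i.e. after the connection was made and the request sent.
        "server_abort_during_data": flags[:2] == "SD",
    }
```

For the first logged line, this yields response_received=False and server_abort_during_data=True: the server died after accepting the request, which is exactly the case redispatch cannot help with.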




Here is the config:

global
     maxconn 5000
log 127.0.0.1 local2 debug
     daemon
defaults
     log global
     option httplog
     retries 3
     option redispatch
     timeout client  30s
     timeout connect 30s
     timeout server  30s
     timeout http-request 30s
     timeout http-keep-alive 30s
frontend web1
     bind *:8080
     mode http
     default_backend app1
backend app1
     balance roundrobin
     mode http
     server a 127.0.0.1:8000 check
     server b 127.0.0.1:8001 check

I have tried v1.8.3 and v1.7.10

Am I missing something?

Thanks



--
Cyril Bonté



redispatch still cause error response

2018-01-30 Thread flamesea12
Hello, 


What exactly does option redispatch do?

I have haproxy in front of two web servers. If one web server gets killed 
while haproxy is forwarding a request to that server,

will this request get re-forwarded to another server?

So I ran a test.

In one server, I run

ruby -run -ehttpd . -p8001

and

ruby -run -ehttpd . -p8000

and I start haproxy

then run ab:

ab -n 2 -c 10 http://127.0.0.1:8080/

while ab is running, I killed the ruby on 8001, then after some time I started 
it again.

here is the ab result:

Complete requests:  2
Failed requests:    5
   (Connect: 0, Receive: 0, Length: 5, Exceptions: 0)


And haproxy.log

grep -c GET haproxy.log
2

grep GET haproxy.log | grep -v 200
Jan 31 10:48:12 localhost haproxy[5948]: 127.0.0.1:45516 
[31/Jan/2018:10:48:12.386] web1 app1/b 0/0/0/-1/5 -1 0 - - SD-- 10/10/8/3/0 0/0 
"GET / HTTP/1.0"
Jan 31 10:48:12 localhost haproxy[5948]: 127.0.0.1:45538 
[31/Jan/2018:10:48:12.391] web1 app1/b 0/0/1/-1/1 -1 0 - - SD-- 10/10/8/4/0 0/0 
"GET / HTTP/1.0"
Jan 31 10:48:12 localhost haproxy[5948]: 127.0.0.1:45528 
[31/Jan/2018:10:48:12.389] web1 app1/b 0/0/1/-1/3 -1 0 - - SD-- 9/9/8/3/0 0/0 
"GET / HTTP/1.0"
Jan 31 10:48:12 localhost haproxy[5948]: 127.0.0.1:45524 
[31/Jan/2018:10:48:12.388] web1 app1/b 0/0/1/-1/4 -1 0 - - SD-- 8/8/7/2/0 0/0 
"GET / HTTP/1.0"
Jan 31 10:48:12 localhost haproxy[5948]: 127.0.0.1:45544 
[31/Jan/2018:10:48:12.392] web1 app1/b 0/0/0/-1/1 -1 0 - - SD-- 7/7/6/0/0 0/0 
"GET / HTTP/1.0"

Here is the config:

global
    maxconn 5000
    log 127.0.0.1 local2 debug
    daemon
defaults
    log global
    option httplog
    retries 3
    option redispatch
    timeout client  30s
    timeout connect 30s
    timeout server  30s
    timeout http-request 30s
    timeout http-keep-alive 30s
frontend web1
    bind *:8080
    mode http
    default_backend app1
backend app1
    balance roundrobin
    mode http
    server a 127.0.0.1:8000 check
    server b 127.0.0.1:8001 check

I have tried v1.8.3 and v1.7.10

Am I missing something?

Thanks