Re: Question regarding haproxy backend behaviour

2018-04-18 Thread Ayush Goyal
Hi

Thanks Igor/Moemen for your response. I hadn't considered frontend queuing,
although I am not sure where to measure it. I have wound down the benchmark
infrastructure for the time being, and it would take me some time to
replicate it again to provide additional stats. In the meantime, I am
attaching a sample log of 200 lines from the benchmarks on one of the
haproxy servers.

Reading the logs, however, I could see that both srv_queue and backend_queue
are 0. One detail you may notice in the logs, which I had omitted earlier
for the sake of simplicity, is that the nginx_ssl_fe frontend is bound to 2
processes to split the cpu load. So instead of this:

frontend nginx_ssl_fe
bind *:8443 ssl 
maxconn 10
bind-process 2

it has:

bind-process 2 3
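
So the effective layout is roughly the following (a sketch: the global
section I shared earlier showed nbproc 2, so nbproc must in fact be at least
3 for bind-process 2 3 to take effect; the cpu-map values are illustrative):

```
global
nbproc 3   # bind-process 2 3 needs at least 3 processes
cpu-map 1 1
cpu-map 2 2
cpu-map 3 3

frontend ssl_sess_id_router
bind-process 1   # tcp-mode ssl router stays on process 1

frontend nginx_ssl_fe
bind-process 2 3   # ssl termination split across processes 2 and 3
```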

In these logs, the haproxy ssl_sess_id_router frontend is handling 21k
frontend connections, and the two processes of nginx_ssl_fe are each
handling approx 10k frontend connections, for a total of ~20k. This is just
one node; there are two more nodes like this (three in total), which puts
the frontend connections at ~63k for the ssl_sess_id_router frontends and
~60k across all nginx_ssl_fe frontends. Nginx is still handling only 32k
connections from nginx_backend.
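
To summarise the arithmetic:

```
per node:   ssl_sess_id_router  ~21k frontend connections
            nginx_ssl_fe        ~10k + ~10k = ~20k (two processes)
cluster:    3 x 21k ~= 63k  (ssl_sess_id_router)
            3 x 20k ~= 60k  (nginx_ssl_fe)
nginx:      ~32k connections via nginx_backend
```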

Please let me know if you need more info.

Thanks,
Ayush Goyal



On Tue, Apr 17, 2018 at 10:03 PM Moemen MHEDHBI <mmhed...@haproxy.com>
wrote:

> Hi
>
> On 16/04/2018 12:04, Igor Cicimov wrote:
>
>
>
> On Mon, 16 Apr 2018 6:09 pm Ayush Goyal <ay...@helpshift.com> wrote:
>
>> Hi Moemen,
>>
>> Thanks for your response. But I think I need to clarify a few things
>> here.
>>
>> On Mon, Apr 16, 2018 at 4:33 AM Moemen MHEDHBI <mmhed...@haproxy.com>
>> wrote:
>>
>>> Hi
>>>
>>> On 12/04/2018 19:16, Ayush Goyal wrote:
>>>
>>> Hi,
>>>
>>> I have a question regarding haproxy backend connection behaviour. We
>>> have the following setup:
>>>
>>>   +---------+      +-------+
>>>   | haproxy | ---> | nginx |
>>>   +---------+      +-------+
>>>
>>> We use a haproxy cluster for ssl off-loading and then load balance
>>> requests to the nginx cluster. We are currently benchmarking this setup
>>> with 3 nodes for the haproxy cluster and 1 nginx node. Each haproxy node
>>> has two frontend/backend pairs. The first frontend is a router for ssl
>>> connections which redistributes requests to the second frontend in the
>>> haproxy cluster. The second frontend is for the ssl handshake and for
>>> routing requests to the nginx servers. Our configuration is as follows:
>>>
>>> ```
>>> global
>>> maxconn 10
>>> user haproxy
>>> group haproxy
>>> nbproc 2
>>> cpu-map 1 1
>>> cpu-map 2 2
>>>
>>> defaults
>>> mode http
>>> option forwardfor
>>> timeout connect 5s
>>> timeout client 30s
>>> timeout server 30s
>>> timeout tunnel 30m
>>> timeout client-fin 5s
>>>
>>> frontend ssl_sess_id_router
>>> bind *:443
>>> bind-process 1
>>> mode tcp
>>> maxconn 10
>>> log global
>>> option tcp-smart-accept
>>> option splice-request
>>> option splice-response
>>> default_backend ssl_sess_id_router_backend
>>>
>>> backend ssl_sess_id_router_backend
>>> bind-process 1
>>> mode tcp
>>> fullconn 5
>>> balance roundrobin
>>> ..
>>> option tcp-smart-connect
>>> server lbtest01 :8443 weight 1 check send-proxy
>>> server lbtest02 :8443 weight 1 check send-proxy
>>> server lbtest03 :8443 weight 1 check send-proxy
>>>
>>> frontend nginx_ssl_fe
>>> bind *:8443 ssl 
>>> maxconn 10
>>> bind-process 2
>>> option tcp-smart-accept
>>> option splice-request
>>> option splice-response
>>> option forwardfor
>>> reqadd X-Forwarded-Proto:\ https
>>> timeout client-fin 5s
>>> timeout http-request 8s
>>> timeout http-keep-alive 30s
>>> default_backend nginx_backend
>>>
>>> backend nginx_backend
>>> bind-process 2
>>> balance roundrobin
>>> http-reuse safe
>>> option 

Re: Question regarding haproxy backend behaviour

2018-04-16 Thread Ayush Goyal
Hi Moemen,

Thanks for your response. But I think I need to clarify a few things here.

On Mon, Apr 16, 2018 at 4:33 AM Moemen MHEDHBI <mmhed...@haproxy.com> wrote:

> Hi
>
> On 12/04/2018 19:16, Ayush Goyal wrote:
>
> Hi,
>
> I have a question regarding haproxy backend connection behaviour. We have
> the following setup:
>
>   +---------+      +-------+
>   | haproxy | ---> | nginx |
>   +---------+      +-------+
>
> We use a haproxy cluster for ssl off-loading and then load balance
> requests to the nginx cluster. We are currently benchmarking this setup
> with 3 nodes for the haproxy cluster and 1 nginx node. Each haproxy node
> has two frontend/backend pairs. The first frontend is a router for ssl
> connections which redistributes requests to the second frontend in the
> haproxy cluster. The second frontend is for the ssl handshake and for
> routing requests to the nginx servers. Our configuration is as follows:
>
> ```
> global
> maxconn 10
> user haproxy
> group haproxy
> nbproc 2
> cpu-map 1 1
> cpu-map 2 2
>
> defaults
> mode http
> option forwardfor
> timeout connect 5s
> timeout client 30s
> timeout server 30s
> timeout tunnel 30m
> timeout client-fin 5s
>
> frontend ssl_sess_id_router
> bind *:443
> bind-process 1
> mode tcp
> maxconn 10
> log global
> option tcp-smart-accept
> option splice-request
> option splice-response
> default_backend ssl_sess_id_router_backend
>
> backend ssl_sess_id_router_backend
> bind-process 1
> mode tcp
> fullconn 5
> balance roundrobin
> ..
> option tcp-smart-connect
> server lbtest01 :8443 weight 1 check send-proxy
> server lbtest02 :8443 weight 1 check send-proxy
> server lbtest03 :8443 weight 1 check send-proxy
>
> frontend nginx_ssl_fe
> bind *:8443 ssl 
> maxconn 10
> bind-process 2
> option tcp-smart-accept
> option splice-request
> option splice-response
> option forwardfor
> reqadd X-Forwarded-Proto:\ https
> timeout client-fin 5s
> timeout http-request 8s
> timeout http-keep-alive 30s
> default_backend nginx_backend
>
> backend nginx_backend
> bind-process 2
> balance roundrobin
> http-reuse safe
> option tcp-smart-connect
> option splice-request
> option splice-response
> timeout tunnel 30m
> timeout http-request 8s
> timeout http-keep-alive 30s
> server testnginx :80  weight 1 check
> ```
>
> The nginx node has nginx with 4 workers and 8192 max clients, therefore
> the max number of connections it can accept is 32768.
>
> For the benchmark, we are generating ~3k new connections per second, where
> each connection makes 1 http request and then holds the connection for the
> next 30 seconds. This results in a high number of established connections
> on the first frontend, ssl_sess_id_router: ~25k per haproxy node (~77k
> total on 3 haproxy nodes). The second frontend (nginx_ssl_fe) receives the
> same number of connections. On the nginx node, we see active connections
> increase to ~32k.
>
> Our understanding is that haproxy should keep a 1:1 frontend/backend
> connection mapping for each new connection. But there is a connection
> count mismatch between haproxy and nginx (77k total connections on all 3
> haproxy nodes for both frontends vs 32k connections made to nginx via
> nginx_backend), yet we are still not seeing any major 5xx or connection
> errors. We assume this is happening because haproxy is terminating old
> idle ssl connections to serve the new ones. We have the following
> questions:
>
> 1. How are the nginx_backend connections being terminated to serve the new
> connections?
>
> Connections are usually terminated when the client receives the whole
> response. Closing the connection can be initiated by the client, the
> server, or HAProxy (timeouts, etc.).
>

Client connections are kept alive here for 30 seconds on the client side.
The various timeout values in both nginx and haproxy are sufficiently high,
on the order of 60 seconds. Still, what we are observing is that nginx
closes connections after 7-14 seconds to serve new client requests. I am
not sure why nginx or haproxy would close existing keep-alive connections
to serve new requests when the timeouts are sufficiently high.
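
For completeness, these are the nginx keep-alive knobs we still need to
double-check; the values below are the nginx defaults, not necessarily what
our node is running:

```
keepalive_timeout  75s;    # idle keep-alive connections closed after 75s
keepalive_requests 100;    # connection closed after 100 requests,
                           # regardless of the idle timeout
```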

> 2. Why is haproxy not terminating connections on the frontend to keep them
> at 32k for 1:1

Question regarding haproxy backend behaviour

2018-04-12 Thread Ayush Goyal
Hi,

I have a question regarding haproxy backend connection behaviour. We have
the following setup:

  +---------+      +-------+
  | haproxy | ---> | nginx |
  +---------+      +-------+

We use a haproxy cluster for ssl off-loading and then load balance requests
to the nginx cluster. We are currently benchmarking this setup with 3 nodes
for the haproxy cluster and 1 nginx node. Each haproxy node has two
frontend/backend pairs. The first frontend is a router for ssl connections
which redistributes requests to the second frontend in the haproxy cluster.
The second frontend is for the ssl handshake and for routing requests to
the nginx servers. Our configuration is as follows:

```
global
maxconn 10
user haproxy
group haproxy
nbproc 2
cpu-map 1 1
cpu-map 2 2

defaults
mode http
option forwardfor
timeout connect 5s
timeout client 30s
timeout server 30s
timeout tunnel 30m
timeout client-fin 5s

frontend ssl_sess_id_router
bind *:443
bind-process 1
mode tcp
maxconn 10
log global
option tcp-smart-accept
option splice-request
option splice-response
default_backend ssl_sess_id_router_backend

backend ssl_sess_id_router_backend
bind-process 1
mode tcp
fullconn 5
balance roundrobin
..
option tcp-smart-connect
server lbtest01 :8443 weight 1 check send-proxy
server lbtest02 :8443 weight 1 check send-proxy
server lbtest03 :8443 weight 1 check send-proxy

frontend nginx_ssl_fe
bind *:8443 ssl 
maxconn 10
bind-process 2
option tcp-smart-accept
option splice-request
option splice-response
option forwardfor
reqadd X-Forwarded-Proto:\ https
timeout client-fin 5s
timeout http-request 8s
timeout http-keep-alive 30s
default_backend nginx_backend

backend nginx_backend
bind-process 2
balance roundrobin
http-reuse safe
option tcp-smart-connect
option splice-request
option splice-response
timeout tunnel 30m
timeout http-request 8s
timeout http-keep-alive 30s
server testnginx :80  weight 1 check
```

The nginx node has nginx with 4 workers and 8192 max clients, therefore the
max number of connections it can accept is 32768.
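
In nginx configuration terms this corresponds to roughly the following
(a sketch; the directive values are inferred from the numbers above):

```
worker_processes 4;
events {
    worker_connections 8192;   # 4 workers x 8192 = 32768 connections max
}
```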

For the benchmark, we are generating ~3k new connections per second, where
each connection makes 1 http request and then holds the connection for the
next 30 seconds. This results in a high number of established connections
on the first frontend, ssl_sess_id_router: ~25k per haproxy node (~77k
total on 3 haproxy nodes). The second frontend (nginx_ssl_fe) receives the
same number of connections. On the nginx node, we see active connections
increase to ~32k.
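
As a sanity check on these numbers (Little's law, L = arrival rate x hold
time):

```
3,000 new connections/s x ~30 s hold ~= 90k concurrent at steady state
observed across haproxy frontends    ~= 77k (same order of magnitude)
nginx ceiling                         = 32k (4 workers x 8192)
```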

Our understanding is that haproxy should keep a 1:1 frontend/backend
connection mapping for each new connection. But there is a connection count
mismatch between haproxy and nginx (77k total connections on all 3 haproxy
nodes for both frontends vs 32k connections made to nginx via
nginx_backend), yet we are still not seeing any major 5xx or connection
errors. We assume this is happening because haproxy is terminating old idle
ssl connections to serve the new ones. We have the following questions:

1. How are the nginx_backend connections being terminated to serve the new
connections?
2. Why is haproxy not terminating connections on the frontend to keep them
at 32k for a 1:1 mapping?
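
One way to verify which side closes first would be to watch for FIN/RST on
the haproxy-to-nginx leg; a sketch (the interface name is a placeholder for
our setup):

```
# whichever address sends the first FIN initiated the close
tcpdump -ni eth0 'port 80 and (tcp[tcpflags] & (tcp-fin|tcp-rst) != 0)'
```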

Thanks
Ayush Goyal


Cannot enable a config "disabled" server via socket command

2015-09-14 Thread Ayush Goyal
Hi,

We are testing haproxy-1.6dev4. We have added a server in a backend as
disabled, but we are not able to bring it up using the socket command.

Our backend conf looks like this:

=cut
backend apiservers
server api101 localhost:1234   maxconn 128 weight 1 check
server api102 localhost:1235 disabled  maxconn 128 weight 1 check
server api103 localhost:1236 disabled  maxconn 128 weight 1 check
=cut

But when I run the "enable apiservers/api103" command, the server stays in
MAINT mode. Disabling and enabling non-"disabled" servers like api101 works
properly.
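
For reference, this is how we are issuing the commands (assuming a stats
socket configured with level admin; the socket path is a placeholder):

```
# full syntax is "enable server <backend>/<server>"
echo "enable server apiservers/api103" | socat stdio /var/run/haproxy.sock

# check the resulting state (available since 1.6)
echo "show servers state apiservers" | socat stdio /var/run/haproxy.sock
```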

Enabling a config-"disabled" server works correctly with haproxy 1.5. Can
you confirm whether it's a bug in 1.6-dev4?

Thanks,
Ayush Goyal