Thank you both for your replies.
>>>> Well, you've not configured any persistence option nor any load balancing 
>>>>algorithm
My application is totally stateless, so persistence is not a good option IMO.
I added "balance leastconn" and still saw haproxy closing server connections. 
But looking more closely at the wireshark capture, I realized that haproxy 
closes a connection immediately after receiving a new request from the client, 
and immediately before opening a connection to the other backend server.
That confirms what Cyril is saying: haproxy closes the server connection 
because it has made a load-balancing decision and has selected the other server.
I added "option prefer-last-server" and that solved the issue.
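For reference, here is roughly what my backend section looks like now with both directives added (server addresses and maxconn values as in the config quoted further down in this thread):

```
backend servers
        balance leastconn
        option prefer-last-server
        server server1 10.220.178.194:80 maxconn 32
        server server2 10.220.232.132:80 maxconn 32
```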
I have a couple of follow-up questions. For clarity and simplicity, let's call 
the requests that my client makes request A and request B, and let's assume 
those are the very first requests my client sends to haproxy.
When haproxy receives request A, it selects server1. After forwarding the 
response back to the client, haproxy keeps the connection to server1 open. Now 
haproxy receives request B. "balance leastconn" without "option 
prefer-last-server" causes haproxy to select server2. That is correct from a 
least-connection perspective, because server2 indeed has fewer connections than 
server1 (0 vs. 1). However, server1 is not processing any requests at this time 
- i.e., there are no outstanding requests pending on server1. In addition, 
haproxy already has an idle connection to server1, so server1 would be the 
better choice: it would avoid the overhead of opening a new connection to 
server2.
It seems to me that a load-balancing algorithm that chooses the server with 
the fewest pending requests - regardless of the number of connections - would 
be more appropriate in this case. The server with the fewest pending requests 
is most likely the least busy one. If two or more servers have the same number 
of pending requests, haproxy could automatically prefer the server - if any - 
to which it already has an idle connection, to avoid the overhead of opening a 
new connection.
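To make the proposal concrete, here is an illustrative sketch (plain Python, not HAProxy internals) of the selection rule I have in mind; the data structure and field names are made up for the example:

```python
# Sketch of the proposed selection rule: pick the server with the fewest
# in-flight ("pending") requests, and break ties in favor of a server to
# which an idle keep-alive connection already exists, to avoid the cost of
# opening a new connection. This is illustrative pseudologic, not HAProxy code.

def pick_server(servers):
    """servers: list of dicts with 'name', 'pending', 'has_idle_conn'."""
    least = min(s["pending"] for s in servers)
    candidates = [s for s in servers if s["pending"] == least]
    # Tie-break: prefer a candidate with a reusable idle connection.
    for s in candidates:
        if s["has_idle_conn"]:
            return s["name"]
    return candidates[0]["name"]

servers = [
    {"name": "server1", "pending": 0, "has_idle_conn": True},   # idle keep-alive conn
    {"name": "server2", "pending": 0, "has_idle_conn": False},
]
print(pick_server(servers))  # server1: same pending count, but reusable connection
```

In the request A / request B scenario above, both servers have zero pending requests when request B arrives, so the tie-break selects server1 and the idle connection is reused.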
Thoughts?
Finally, another question: Reading through the mailing list, it looks like you 
guys are working on adding true connection pooling to haproxy. Do you know when 
that feature will be available?
Thank you again,
Alessandro

    On Friday, July 27, 2018, 12:35:54 PM MDT, Baptiste <bed...@gmail.com> 
wrote:  
 
 In other words, you may want to enable "option prefer-last-server". But in 
such a case, you won't load-balance anymore (all requests should go to the 
same server).
Baptiste
On Fri, Jul 27, 2018 at 7:09 PM, Cyril Bonté <cyril.bo...@free.fr> wrote:

Hi Alessandro,

Le 27/07/2018 à 17:50, Alessandro Gherardi a écrit :

Hi,
I'm running haproxy 1.8.12 on Ubuntu 14.04. For some reason, haproxy does not 
reuse connections to backend servers. For testing purposes, I'm sending the 
same HTTP request multiple times over the same TCP connection.

The servers do not respond with Connection: close and do not close the 
connections. The wireshark capture shows haproxy RST-ing the connections a few 
hundred milliseconds after the servers reply. The servers send no FIN nor RST 
to haproxy.

I tried various settings (http-reuse always, option http-keep-alive, both at 
global and backend level), no luck.

The problem goes away if I have a single backend server, but obviously that's 
not a viable option in real life.

Here's my haproxy.cfg:

global
         #daemon
         maxconn 256

defaults
         mode http
         timeout connect 5000ms
         timeout client 50000ms
         timeout server 50000ms

         option http-keep-alive
         timeout http-keep-alive 30s
         http-reuse always

frontend http-in
         bind 10.220.178.236:80
         default_backend servers

backend servers
         server server1 10.220.178.194:80 maxconn 32
         server server2 10.220.232.132:80 maxconn 32

Any suggestions?


Well, you've not configured any persistence option nor any load balancing 
algorithm. So, the default is to do a roundrobin between the 2 backend servers. 
If there's no traffic, it's very likely that there's no connection to reuse 
when switching to the second server for the second request.



Thanks in advance,
Alessandro



-- 
Cyril Bonté



  
