Certainly, TIME_WAIT connections are not an issue when they describe
sockets toward clients, but when TIME_WAIT connections keep source ports
busy on the host where HAProxy is deployed, on the side facing the backend,
the limit can be reached - it's defined by ip_local_port_range.
Here is what I mean:
Client -(1)-> HAProxy -(2)-> Webserver
 / it doesn't matter whether the web server and haproxy are on the same machine. /
I) client connects to haproxy
a socket is taken - clientIP:random_port:haproxy_ip:haproxy_port

II) haproxy connects to the webserver
a socket is taken - haproxy_local_ip:random_port:backend_ip:backend_port

III) client closes the connection to haproxy (1) in the normal way -
FIN/FIN-ACK/ACK.
This way we have one connection that goes from the ESTABLISHED to the
TIME_WAIT state. We don't really care about this TIME_WAIT connection,
because the socket that is taken is between the client and haproxy
- clientIP:random_port:haproxy_ip:haproxy_port

IV) haproxy closes the connection to the backend (2) with FIN/FIN-ACK/ACK.
Now this ESTABLISHED connection goes to the TIME_WAIT state, and the socket
that is taken is between haproxy and the backend server.
It looks like haproxy_local_ip:random_port:backend_ip:backend_port.
If haproxy and the web server communicate over 127.0.0.1 and the web
server listens on port 8080, then a socket like this is taken:
127.0.0.1:RANDOM_PORT:127.0.0.1:8080
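To see whether such sockets are piling up, one can count them straight from
/proc/net/tcp - a rough sketch assuming Linux and the example port 8080
above (`ss -tan state time-wait` shows the same thing more readably):

```shell
# In /proc/net/tcp the state column ($4) is 06 for TIME_WAIT, and ports
# are hex: 8080 = 1F90. Count TIME_WAIT sockets whose remote endpoint
# ($3, rem_address) is port 8080 - i.e. the haproxy->backend side.
awk '$4 == "06" && $3 ~ /:1F90$/' /proc/net/tcp | wc -l
```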

This RANDOM_PORT lies in the range defined by the ip_local_port_range
sysctl. On CentOS such a connection is kept in TIME_WAIT for 60 seconds.
As you can see, on a loaded server this limit of open ports can be reached.
(Some math: by default we have about 30000 usable ports; holding each for
60 seconds allows about 500 new connections/second.)
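The arithmetic above can be sketched as:

```shell
# Back-of-the-envelope: ~30000 ephemeral ports, each held in TIME_WAIT
# for 60 seconds, caps the sustainable connection rate toward a single
# backend ip:port.
ports=30000
hold=60
echo "$((ports / hold)) new connections/second"   # → 500 new connections/second
```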

That is why it would be great to be able to configure haproxy to reset the
connection to the backend.
I believe the common architecture is that backend servers are physically
close to haproxy, on high-speed networks where no packet loss is expected.
So we don't really need the TIME_WAIT state here. It is certainly not
needed on localhost.
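As far as I can tell, the existing knob for this is "option nolinger"
(discussed in the reply below, with strong caveats). A minimal sketch of
where it would sit - the backend name and server address are made up:

```
backend webfarm
    # "option nolinger" makes haproxy close with an RST, so no TIME_WAIT
    # socket is left behind - but it risks truncated responses if the
    # last segments are lost (see the warning in the reply below).
    option nolinger
    server web1 127.0.0.1:8080
```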

All the best!



2011/11/29 Willy Tarreau <[email protected]>

> On Tue, Nov 29, 2011 at 09:41:30AM -0500, James Bardin wrote:
> > From looking into this, I don't see an option in HAProxy to RST all
> > closed connections on a backend, though the documentation makes it
> > sound like the nolinger options does do this. Hopefully one of the
> > devs (Willy?) can chime in with some advice.
>
> Indeed, nolinger does this but it's strongly advised not to use it,
> because it precisely kills the TCP connection (reason why there is no
> time_wait left), which causes truncated objects on the remote server
> if the last segments are lost. The reason is that these lost segments
> will not be retransmitted and the client will get an RST instead.
>
> TIME_WAIT sockets are not an issue on a server. The only trouble they're
> causing is that they pollute the "netstat -a" output. But that's all.
> These sockets are totally normal and expected. My record is 5 million
> on a heavily loaded server :-)
>
> There is absolutely no reason to worry about these sockets, they're
> closed and waiting for either the TCP timer, a SYN or an RST to expire.
>
> Best regards,
> Willy
>
>
