On 16/02/2026 at 4:56 PM, Luke Seelenbinder wrote:
Hi List,
After upgrading from 3.2.11 to 3.2.12, we're seeing TCP connections from
HAProxy's DNS resolver to nameservers accumulate in CLOSE_WAIT state
indefinitely. The nameservers send their FIN, but HAProxy never completes
the teardown on its side. The leak is continuous — we observed growth from
~1K to ~5K total connections in about 4 hours.
The issue appeared immediately after upgrading with no configuration
changes. It does not occur on 3.2.11.
We use TCP resolvers (tcp6@ prefix) with SRV record resolution via
server-template. The relevant config:
resolvers default
    nameserver g1      tcp6@[2001:4860:4860::8888]:53 source [<ipv6>]
    nameserver g2      tcp6@[2001:4860:4860::8844]:53 source [<ipv6>]
    nameserver opendns tcp6@[2620:0:ccc::2]:53 source [<ipv6>]
    accepted_payload_size 8192
    resolve_retries 4
    hold valid 60s
    hold obsolete 30s
    hold timeout 300s
    timeout resolve 20s
    timeout retry 1s

defaults
    default-server resolvers default inter 5s [...]
    option abortonclose

backend example
    server-template myserver 2 ipv6@_svc._tcp.example.com [...]
We have several backends using this pattern with SRV records.
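For context, with this setup the resolver's queries are plain SRV lookups carried over TCP, roughly equivalent to the following (the service name is a placeholder, as above):
$ # force TCP and IPv6, matching the tcp6@ resolver entries
$ dig +tcp -6 SRV _svc._tcp.example.com @2001:4860:4860::8888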
The CLOSE_WAIT connections are exclusively to port 53 on the configured
nameservers:
$ ss -tn state close-wait | awk '{print $5}' | \
sed 's/:[0-9]*$//' | sort | uniq -c | sort -rn
1100 [2620:0:ccc::2]
1097 [2001:4860:4860::8844]
1092 [2001:4860:4860::8888]
$ ss -tn state close-wait -o | head -5
Recv-Q Send-Q Local Address:Port              Peer Address:Port
0      0      [2600:3c03::xxxx]:49570         [2001:4860:4860::8844]:53
0      0      [2600:3c03::xxxx]:40414         [2001:4860:4860::8888]:53
0      0      [2600:3c03::xxxx]:38996         [2001:4860:4860::8888]:53
0      0      [2600:3c03::xxxx]:53882         [2620:0:ccc::2]:53
Meanwhile, regular client and backend connections remain healthy with no leakage.
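For anyone who wants to reproduce the measurement, a sampling loop along these lines works (it assumes the configured nameservers are the only port-53 peers on the box):
$ # one sample per minute: timestamp + count of CLOSE_WAIT sockets to port 53
$ while :; do echo "$(date -Is) $(ss -Htn state close-wait '( dport = :53 )' | wc -l)"; sleep 60; done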
Two commits in 3.2.12 seem like potential candidates since DNS resolver
TCP connections go through the raw socket path:
- rawsock: introduce CO_RFL_TRY_HARDER to detect closures on complete
reads (Willy)
- ssl: don't always process pending handshakes on closed connections
(Willy)
Note that "option abortonclose" is enabled in our defaults, which the
second commit explicitly interacts with.
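For reference, both commits are easy to locate between the two release tags (assuming a checkout of the haproxy git tree; the grep pattern simply matches the subjects quoted above):
$ git log --oneline v3.2.11..v3.2.12 | grep -Ei 'rawsock|handshakes'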
We've confirmed that reverting to 3.2.11 resolves the issue, though we'd
prefer to stay on 3.2.12 for the QUIC CVE fixes.
Happy to provide haproxy -vv output, full config, or any additional
debugging if helpful.
Hi Luke,
Could you test the attached patch? It should fix your issue.
I should push it soon; I just need to verify that it also fixes another issue
reported on the peer applet.
Once validated and pushed, a new 3.2 release will be issued.
Regards,
--
Christopher Faulet
From 4ec2f6d70b3735630c2588f8a598f85a79f29425 Mon Sep 17 00:00:00 2001
From: Christopher Faulet <[email protected]>
Date: Mon, 16 Feb 2026 19:14:57 +0100
Subject: [PATCH] BUG/MEDIUM: applet: Fix test on shut flags for legacy applets
(v2)
The previous fix was wrong. When the shut flags are tested for legacy
applets, to decide whether the I/O handler may be called, we must be sure
that both the read and the write shutdowns are set before skipping the
applet I/O handler.
This bug introduced regressions, at least for the peer applet and for the
DNS applet.
This patch must be backported with abc1947e1 ("BUG/MEDIUM: applet: Fix test
on shut flags for legacy applets"), so as far as 3.0.
---
src/applet.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/applet.c b/src/applet.c
index dffc73fde..7b08539a3 100644
--- a/src/applet.c
+++ b/src/applet.c
@@ -852,7 +852,7 @@ struct task *task_run_applet(struct task *t, void *context, unsigned int state)
/* Don't call I/O handler if the applet was shut (release callback was
* already called)
*/
- if (!se_fl_test(app->sedesc, SE_FL_SHR | SE_FL_SHW))
+ if (!se_fl_test(app->sedesc, SE_FL_SHR) || !se_fl_test(app->sedesc, SE_FL_SHW))
app->applet->fct(app);
TRACE_POINT(APPLET_EV_PROCESS, app);
--
2.52.0