Re: Suppression de l’extension [removing the .html extension]
Hi.

On 20/07/2018 11:16, -- wrote:
> Hello,
> In fact I just want the .html extension on my site to no longer be displayed. I use HAProxy with Nginx; I can do the URL rewrite in Nginx and that works well:
>
>   server {
>       rewrite ^(/.*)\.html(\?.*)?$ $1$2 permanent;

How about using this in the rewrite?

    rewrite ^(/.*)\.html(\?.*)?$ https://$host/$1$2 permanent;

>       index index.html;
>       try_files $uri.html $uri/ $uri =404;
>   }

Best regards
Aleks

> But since Nginx runs on port 8889 (on the same machine as HAProxy) it redirects me to this port and I lose the connection (HAProxy is listening on port 443). I wish to do this with HAProxy; is it possible? Example:
>
>   https://site.com/index.html -> https://site.com/index
>
> (the resource without the .html does not exist, but I want it to be displayed like this in the browser)
> Thank you
> Sent from my iPhone
>
> Le 19 juil. 2018 à 23:48, Aleksandar Lazic a écrit :
>> Hi.
>> On 19/07/2018 15:09, -- wrote:
>>> Hello,
>>> Je souhaite supprimer l’extension présentée par mon serveur nginx mais depuis Haproxy. Type A.com/index.html en A.com/index. Est-ce possible ?
>>> [I want to remove the extension presented by my Nginx server, but from HAProxy: e.g. A.com/index.html into A.com/index. Is this possible?]
>> Maybe, but please can you ask in English, thanks.
>> But let me try to interpret your question:
>>
>>   reqrep ^([^\ :]*)\ /index.html \1\ /index
>>
>> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#4.2-reqrep
>>> Merci [Thanks]
>>> Sent from my iPhone

Best regards
Aleks
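Since the redirect needs to happen at the HAProxy edge (port 443) rather than inside Nginx on port 8889, one possible sketch is to issue the redirect from the frontend itself. This is untested; the frontend/backend names and certificate path are hypothetical, and it assumes HAProxy 1.7+ for the regsub converter. Note that, unlike the Nginx rule, the `$` anchor here will not match a URL carrying a query string, so the regex would need adjusting for that case.

```
frontend fe_https
    bind :443 ssl crt /etc/haproxy/site.pem
    # redirect /foo.html -> /foo at the edge, so the browser never
    # sees the internal :8889 port of the Nginx backend
    http-request redirect code 301 location %[url,regsub(\.html$,)] if { path_end .html }
    default_backend be_nginx

backend be_nginx
    server nginx1 127.0.0.1:8889
```

With this in place the `try_files $uri.html $uri/ $uri =404;` rule on the Nginx side still serves the extensionless URL from the .html file on disk.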
Re: Regexp
On 07/20/2018 12:03 AM, Aleksandar Lazic wrote:
> Hi.
> On 18/07/2018 13:10, Haim Ari wrote:
>> Hello,
>> Trying to set a backend by regexp. This regexp works outside of haproxy.
>> String: /1.0/manage/bu/ca?token=68bf68bf68bf68bf68bf&segId=1212121212&partner=123456789
>> Regexp: ^\/1\.0\/manage\/bu\/ca\?token=.*.segId=.*=123456789
>> What is the right syntax for this in haproxy?
> I would use https://regex101.com/r/TjH7Ul/1/
>
>   ^\/1\.0\/manage\/bu\/ca\?token=(.*).segId=(.*).partner=123456789

AFAIK, even if this is correct, you do not have to escape the '/' characters to match them. You had to do that in your GUI because you selected a regex form with '/' as the delimiter character (/.../gm). haproxy uses POSIX regexes, with ^.[$()|*+?{\ as the list of characters which must be escaped if you want them to be interpreted as literal characters (see regex(7)). There is an explanation in your GUI which indicates exactly that: "\/ matches the character / literally (case sensitive)".

So, the regex above may be shortened as follows:

    ^/1\.0/manage/bu/ca\?token=(.*).segId=(.*).partner=123456789

which is a bit more readable.

Fred.
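As a quick sanity check, the shortened regex does match the sample URL and captures both values. This is a Python sketch; Python's `re` module is PCRE-flavoured rather than strictly POSIX, but this particular pattern behaves the same in both.

```python
import re

# The shortened pattern from above, with '/' left unescaped.
pattern = r'^/1\.0/manage/bu/ca\?token=(.*).segId=(.*).partner=123456789'
url = '/1.0/manage/bu/ca?token=68bf68bf68bf68bf68bf&segId=1212121212&partner=123456789'

m = re.match(pattern, url)
# Group 1 is the token value, group 2 the segId value; each bare '.'
# before segId/partner matches the literal '&' separator.
print(m.group(1))  # 68bf68bf68bf68bf68bf
print(m.group(2))  # 1212121212
```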
Re: Issue with TCP splicing
Hi Julien.

On 23/07/2018 13:59, Julien Semaan wrote:
> Doing it with the patch does the equivalent of disabling it with the option (I realized there was an option afterwards). We're more looking to know whether the haproxy team is interested in getting the issue addressed, rather than just getting the workaround.

From my experience with the haproxy team I would say yes. Such cases are not really easy to find, because the developer has to be able to reproduce the behaviour, which can take some time; and if it turns out to be specific to your environment, it will require some amount of data and time from you and your team.

As you can see in the mailing list archive, there are a lot of long-running threads and discussions to solve some specific and some common bugs ;-)

https://www.mail-archive.com/haproxy@formilux.org/

As I'm just a member of the community and not a regular developer, I can only invite you to help us solve this issue. I also want to tell you that I don't know how difficult or long-running this debug session will be, so please have some patience if it takes some time to debug and solve the issue.

For a start it helps to know the full version, the config, and some "bt full" output from core dumps of a debug-compiled build.

One question from enterprise vendor support experience: can you reproduce the behaviour with the latest version? ;-)

> Thanks!
> --
> Julien Semaan
> jsem...@inverse.ca :: +1 (866) 353-6153 *155 :: www.inverse.ca
> Inverse inc. :: Leaders behind SOGo (www.sogo.nu) and PacketFence (www.packetfence.org)

Best regards
Aleks

On 2018-07-23 11:25 AM, Aleksandar Lazic wrote: Hi Julien. On 23/07/2018 09:07, Julien Semaan wrote: Hi all, We're currently using haproxy in our project PacketFence (https://packetfence.org) and are currently experiencing an issue with haproxy segfaulting when TCP splicing is enabled.
>> We're currently running version 1.8.9 and are occasionally getting segfaults on this specific line in stream.c (line 2131):
>>
>>   (objt_cs(si_b->end) && __objt_cs(si_b->end)->conn->xprt && __objt_cs(si_b->end)->conn->xprt->snd_pipe) &&
>>
>> I wasn't too bright when I found it through gdb and forgot to copy the backtrace, so I'm hoping that the issue can be found with this limited information. After commenting out the code for TCP splicing with the patch attached to the email, the issue stopped happening.
> Have you tried to disable splice via config?
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#nosplice
>> Best Regards,
>> --
>> Julien Semaan
>> jsem...@inverse.ca :: +1 (866) 353-6153 *155 :: www.inverse.ca
>> Inverse inc. :: Leaders behind SOGo (www.sogo.nu) and PacketFence (www.packetfence.org)
> Best regards
> aleks
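For reference, the config-level equivalent of the attached patch is the option Aleks points to, set in the global section:

```
global
    # disable kernel socket splicing everywhere; this is the
    # config-file counterpart of starting haproxy with -dS
    nosplice
```

Unlike the patch, this requires no rebuild and can be dropped again once the underlying bug is fixed.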
Re: Connections stuck in CLOSE_WAIT state with h2
Hi Milan,

On Mon, Jul 23, 2018 at 08:41:03AM +0200, Milan Petruzelka wrote:
> After weekend CLOSE_WAIT connections are still there.

Ah, bad :-(

> What does cflg=0x80203300 in "show fd" mean?

These are the connection flags. You can decode them with contrib/debug/flags:

    $ ./flags 0x80203300 | grep ^conn
    conn->flags = CO_FL_XPRT_TRACKED | CO_FL_CONNECTED | CO_FL_ADDR_TO_SET | CO_FL_ADDR_FROM_SET | CO_FL_XPRT_READY | CO_FL_CTRL_READY

So basically it says that everything is configured on the connection and that there is no request for polling (CO_FL_{CURR,SOCK,XPRT}_{WR,RD}_ENA).

> FDs with cflg=0x80203300 are either CLOSE_WAIT or "sock - protocol: TCP" - see FDs 14, 15, 16, 18, 19 and 25 in the following dumps. And - sockets in lsof state "sock - protocol: TCP" can't be found in netstat.

That totally makes sense. If the connection is not monitored at all (but why, this is the question), and has no timeout, it will definitely wander forever. Do you *think* that you got fewer CLOSE_WAITs, or that the latest fixes didn't change anything? I suspect that for some reason you might be hit by several bugs, which is what has complicated the diagnostic, but that's just pure guess.

> SHOW FD 3300
> 14 : st=0x20(R:pra W:pRa) ev=0x00(heopi) [nlc] cache=0 owner=0x23d0340 iocb=0x4d4c90(conn_fd_handler) tmask=0x1 umask=0x0 cflg=0x80203300 fe=fe-http mux=H2 mux_ctx=0x2494cc0 (...)
If you run this with the latest 1.8 (not just the two patches above), some extra debug information is provided in show fd:

    $ echo show fd | socat - /tmp/sock1 | grep -i mux=H2
    19 : st=0x25(R:PrA W:pRa) ev=0x00(heopi) [nlc] cache=0 owner=0x77fd8e80 iocb=0x58cb44(conn_fd_handler) tmask=0x1 umask=0x0 cflg=0x80201306 fe=httpgw mux=H2 mux_ctx=0x77f8ff10 st0=2 flg=0x nbst=1 nbcs=1 fctl_cnt=0 send_cnt=0 tree_cnt=1 orph_cnt=0 dbuf=0/0 mbuf=0/0

In particular st0, the mux flags, the number of streams and the buffer states will give important information about what state the connection is in (and whether it still has streams attached or not).

Oh, I'm just seeing you already did that in the next e-mail. Thank you :-)

So we have this:

    25 : st=0x20(R:pra W:pRa) ev=0x00(heopi) [nlc] cache=0 owner=0x24f0a70 iocb=0x4d34c0(conn_fd_handler) tmask=0x1 umask=0x0 cflg=0x80203300 fe=fe-http mux=H2 mux_ctx=0x258a880 st0=7 flg=0x1000 nbst=8 nbcs=0 fctl_cnt=0 send_cnt=8 tree_cnt=8 orph_cnt=8 dbuf=0/0 mbuf=0/16384

- st0=7 => H2_CS_ERROR2 : an error was sent; either it succeeded, or it could not be sent and had to be aborted nonetheless;
- flg=1000 => H2_CF_GOAWAY_SENT : the GOAWAY frame was sent to the mux buffer;
- nbst=8 => 8 streams still attached;
- nbcs=0 => 0 conn_streams found (application layer detached or not attached yet);
- send_cnt=8 => 8 streams still in the send_list, waiting for the mux to pick their contents;
- tree_cnt=8 => 8 streams known in the tree (hence they are still valid from the H2 protocol perspective);
- orph_cnt=8 => 8 streams are orphaned: these streams have quit at the application layer (very likely a timeout);
- mbuf=0/16384 : the mux buffer is empty but allocated. It's not very common.

At this point what it indicates is that:

- 8 streams were active on this connection, and a response was sent (at least partially) and probably waited for the mux buffer to be empty due to data from other previous streams.
(I'm realising it would be nice to also report the highest stream index, to get an idea of the number of past streams on the connection.)

- an error happened (protocol error, network issue, etc., no more info at the moment) and caused haproxy to emit a GOAWAY frame. While doing so, the pending streams in the send_list were not destroyed;
- then, for an unknown reason, the situation doesn't move anymore.

I'm realising that one case I figured out in the past, with an error possibly blocking the connection, at least partially covers one point here: it causes the mux buffer to remain allocated, so this patch would have caused it to be released, but it's still incomplete.

Now I have some elements to dig through. I'll try to mentally reproduce the complex sequence of a blocked response with a GOAWAY being sent at the same time to see what happens.

Thank you very much for all this information!

Willy
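What contrib/debug/flags does can be sketched in a few lines: turn a flags bitmask into symbolic names. The bit values below are illustrative placeholders chosen so that the sample mask decodes to the six flags listed above; HAProxy's real CO_FL_* constants live in include/types/connection.h and change between versions, so always use the flags tool matching your source tree.

```python
# ILLUSTRATIVE bit values only -- not HAProxy's real CO_FL_* constants.
CONN_FLAGS = {
    0x00000100: "CO_FL_CTRL_READY",
    0x00000200: "CO_FL_XPRT_READY",
    0x00001000: "CO_FL_ADDR_FROM_SET",
    0x00002000: "CO_FL_ADDR_TO_SET",
    0x00200000: "CO_FL_CONNECTED",
    0x80000000: "CO_FL_XPRT_TRACKED",
}

def decode_conn_flags(mask):
    """Return the names of all flags set in mask, lowest bit first."""
    return [name for bit, name in sorted(CONN_FLAGS.items()) if mask & bit]

# The cflg value from the stuck CLOSE_WAIT connection:
print(decode_conn_flags(0x80203300))
```

Any flag name not in the table simply stays undecoded, which is why running the tool from a mismatched version can silently mislead.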
Re: [PATCH] BUG/MINOR: build: Fix compilation with debug mode enabled
Hi Cyril,

On Mon, Jul 23, 2018 at 10:04:34PM +0200, Cyril Bonté wrote:
> Some months ago, I began writing a compilation test script for haproxy, but as you may have noticed, I was not very available recently ;-)

Oh, it happens to all of us unfortunately :-/

> I should be more available now. I'll try to finish this little work, as it would have detected this type of error.

Great!

> The script parses the Makefile to find all USE_* settings and performs a compilation test for each one. I still have some work to do to prepare some compilations (dependencies like slz, deviceatlas, 51degrees, ...), but it looks to be already useful. I've now added DEBUG=-DDEBUG_FULL to the compilation options.
>
> The main issue is that it takes hours on the tiny Atom server I wanted to use for that job. But well, on my laptop it takes less than 2 minutes: that's acceptable. I've added it to the git hooks so it is executed each time I fetch commits from the repository.

Nice! A full build takes around 3-5 seconds on the build farm I have at the office (I extended the initial distcc farm with the load generators).

> Some ideas for future versions:
> - randomly mix USE_* options: for example, it would have triggered an error to indicate an incompatibility between USE_DEVICEATLAS and USE_PCRE2.

I tend to think that randomly mixing settings will not detect much in fact. If you have 20 settings, you have 1 million combinations. If a few of them are incompatible, you'll very rarely meet them. However you may face some which are expected to fail. Some very likely make sense and probably just need to be hard-coded, especially once they have been reported to occasionally fail. In the end you'll have fewer combinations, with a higher chance of detecting failures, by just iterating over all settings one at a time plus a selected set of combinations. At least that's how I see it :-)

> - use different SSL libs/versions

Good point!
I've done this a few times when we touched ssl_sock.c and that's definitely needed. Cheers, Willy
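The first step of the script Cyril describes, collecting the USE_* build options from the Makefile, can be sketched like this. The Makefile excerpt is invented for illustration; the real haproxy Makefile declares many more options.

```python
import re

def use_options(makefile_text):
    """Collect the distinct USE_* build options mentioned in a Makefile."""
    return sorted(set(re.findall(r'\bUSE_[A-Z0-9_]+\b', makefile_text)))

sample = """
# illustrative Makefile excerpt, not the real haproxy Makefile
ifneq ($(USE_OPENSSL),)
OPTIONS_CFLAGS += -DUSE_OPENSSL
endif
ifneq ($(USE_PCRE2),)
OPTIONS_CFLAGS += -DUSE_PCRE2
endif
"""

# One build per option -- Willy's "iterate over all settings one at
# a time" approach, to which a hand-picked list of combinations can
# then be appended.
for opt in use_options(sample):
    print(opt)
```

Note the `\b` boundaries keep the `-DUSE_*` compiler flags from being counted twice.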
RE: Missing SRV cookie
Epiphany! I was conflating the stick table with peers - thinking it was required in order to not lose a connection if one of the HAProxy servers failed.

As it turns out, I can't "stick on src", as the users in the client's data center will all present the identical NAT address to the HAProxy servers. So I have to use the cookies. I do find it weird that some machines would see the SRV cookie and some not.

If I delete the following lines, will my users lose their connection if one of the HAProxy servers fails (the HAProxy servers are protected by DNS failover)?

    stick-table type ip size 20k peers mypeers
    stick on src

Or does the peers section mitigate that?

    peers mypeers
        # include hap_servers-haproxy declarations
        peer ip-10-241-1-140 10.241.1.140:1024
        peer ip-10-241-1-237 10.241.1.237:1024

-Original Message-
From: Cyril Bonté
Sent: Monday, July 23, 2018 3:31 PM
To: Norman Branitsky
Cc: haproxy
Subject: Re: Missing SRV cookie

Hi Norman,

Le 23/07/2018 à 18:36, Norman Branitsky a écrit :
> My client's environment had 3 HAProxy servers.
> Due to a routing issue, my client's users could only see the old HAProxy 1.5 server when connecting from their data center. They could not see the 2 new HAProxy 1.7 servers.
> The routing issue was resolved last week and they could now see the 2 new HAProxy servers, as well as the old server.
> They started getting quick disconnects from their Java application - the SEVERE error indicated that they had arrived at the wrong server and had no current session.
> [...]
> New HAProxy servers configuration:
>
>     backend ssl_backend-vr
>         balance roundrobin
>         stick-table type ip size 20k peers mypeers
>         stick on src

Here you are using stick tables for session persistence.

> [...]
> cookie SRV insert indirect nocache httponly secure
> server i-067c94ded1c8e212c 10.241.1.138:9001 check cookie i-067c94ded1c8e212c
> server i-07035eca525e56235 10.241.1.133:9001 check cookie i-07035eca525e56235

But here, you are using cookies for the same purpose.

> I realized that the cookie mechanism was different, so I shut down the old HAProxy server and the problem appeared to be resolved.
> This morning that client is complaining that the problem has returned - disconnects resulting in the user being kicked out to the login screen.
> Checking with multiple browsers, I can see both the old JSESSIONID cookie (with the machine name appended) and the new SRV cookie.
> Checking with multiple browsers, my colleagues can *NOT* see the new SRV cookie from any browser in this office - but they can see the SRV cookie when browsing from a virtual PC in our Atlanta data center!
> Even more puzzling, though my client cannot see the SRV cookie (either in the F12 cookies sent list, or in the browser's cookies folder), he *never* experiences an unexpected disconnect.
> Suggestions, please?

You have to make a choice: either you use stick tables or you use cookies, but don't mix both, otherwise you'll have the situation you are describing.

--
Cyril Bonté
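Following that advice, a cookie-only version of the quoted backend would simply drop the two stick lines. This is a sketch assembled from the configuration quoted in this thread, not a tested config:

```
backend ssl_backend-vr
    balance roundrobin
    option httpchk GET /le5/about.txt
    http-check disable-on-404
    # persistence via the SRV cookie only -- no "stick on src",
    # which cannot distinguish users behind a shared NAT address
    cookie SRV insert indirect nocache httponly secure
    server i-067c94ded1c8e212c 10.241.1.138:9001 check cookie i-067c94ded1c8e212c
    server i-07035eca525e56235 10.241.1.133:9001 check cookie i-07035eca525e56235
```

Cookie persistence also survives a DNS failover between HAProxy nodes, since the cookie names the backend server directly rather than relying on state held in any one proxy.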
Re: Missing SRV cookie
Hi Norman,

Le 23/07/2018 à 18:36, Norman Branitsky a écrit :
> My client’s environment had 3 HAProxy servers.
> Due to a routing issue, my client’s users could only see the old HAProxy 1.5 server when connecting from their data center. They could not see the 2 new HAProxy 1.7 servers.
> The routing issue was resolved last week and they could now see the 2 new HAProxy servers, as well as the old server.
> They started getting quick disconnects from their Java application – the SEVERE error indicated that they had arrived at the wrong server and had no current session.
> [...]
> New HAProxy servers configuration:
>
>     backend ssl_backend-vr
>         balance roundrobin
>         stick-table type ip size 20k peers mypeers
>         stick on src

Here you are using stick tables for session persistence.

> [...]
> cookie SRV insert indirect nocache httponly secure
> server i-067c94ded1c8e212c 10.241.1.138:9001 check cookie i-067c94ded1c8e212c
> server i-07035eca525e56235 10.241.1.133:9001 check cookie i-07035eca525e56235

But here, you are using cookies for the same purpose.

> I realized that the cookie mechanism was different, so I shut down the old HAProxy server and the problem appeared to be resolved.
> This morning that client is complaining that the problem has returned – disconnects resulting in the user being kicked out to the login screen.
> Checking with multiple browsers, I can see both the old JSESSIONID cookie (with the machine name appended) and the new SRV cookie.
> Checking with multiple browsers, my colleagues can *NOT* see the new SRV cookie from any browser in this office – but they can see the SRV cookie when browsing from a virtual PC in our Atlanta data center!
> Even more puzzling, though my client cannot see the SRV cookie (either in the F12 cookies sent list, or in the browser’s cookies folder), he *never* experiences an unexpected disconnect.
> Suggestions, please?

You have to make a choice: either you use stick tables or you use cookies, but don't mix both, otherwise you'll have the situation you are describing.
-- Cyril Bonté
Re: Issue with TCP splicing
Doing it with the patch does the equivalent of disabling it with the option (realized there was an option afterwards). We're more looking to know if the haproxy team is interested in getting the issue addressed, more than just getting the workaround.

Thanks!
--
Julien Semaan
jsem...@inverse.ca :: +1 (866) 353-6153 *155 :: www.inverse.ca
Inverse inc. :: Leaders behind SOGo (www.sogo.nu) and PacketFence (www.packetfence.org)

On 2018-07-23 11:25 AM, Aleksandar Lazic wrote:
> Hi Julien.
> On 23/07/2018 09:07, Julien Semaan wrote:
>> Hi all,
>> We're currently using haproxy in our project PacketFence (https://packetfence.org) and are currently experiencing an issue with haproxy segfaulting when TCP splicing is enabled. We're currently running version 1.8.9 and are occasionally getting segfaults on this specific line in stream.c (line 2131):
>>
>>   (objt_cs(si_b->end) && __objt_cs(si_b->end)->conn->xprt && __objt_cs(si_b->end)->conn->xprt->snd_pipe) &&
>>
>> I wasn't too bright when I found it through gdb and forgot to copy the backtrace, so I'm hoping that the issue can be found with this limited information. After commenting out the code for TCP splicing with the patch attached to the email, the issue stopped happening.
> Have you tried to disable splice via config?
> https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#nosplice
>> Best Regards,
>> --
>> Julien Semaan
>> jsem...@inverse.ca :: +1 (866) 353-6153 *155 :: www.inverse.ca
>> Inverse inc. :: Leaders behind SOGo (www.sogo.nu) and PacketFence (www.packetfence.org)
> Best regards
> aleks
Missing SRV cookie
My client's environment had 3 HAProxy servers. Due to a routing issue, my client's users could only see the old HAProxy 1.5 server when connecting from their data center. They could not see the 2 new HAProxy 1.7 servers. The routing issue was resolved last week, and they could now see the 2 new HAProxy servers as well as the old server.

They started getting quick disconnects from their Java application - the SEVERE error indicated that they had arrived at the wrong server and had no current session.

Old HAProxy server configuration:

    backend ssl_backend-vr
        balance roundrobin
        option httpchk GET /le5/about.txt
        http-check disable-on-404
        http-request allow if { src -f /etc/CONFIG/haproxy/whitelist.lst } || { ssl_c_used }
        http-request deny
        appsession JSESSIONID len 52 timeout 3h
        acl path_root path /
        redirect location /le5/ if path_root
        # include ssl_servers-vr declarations
        server i-067c94ded1c8e212c 10.241.1.138:9001 check
        server i-07035eca525e56235 10.241.1.133:9001 check

New HAProxy servers configuration:

    backend ssl_backend-vr
        balance roundrobin
        stick-table type ip size 20k peers mypeers
        stick on src
        option httpchk GET /le5/about.txt
        http-check disable-on-404
        http-request allow if { src -f /etc/CONFIG/haproxy/whitelist.lst } || { ssl_c_used }
        http-request deny
        cookie SRV insert indirect nocache httponly secure
        acl path_root path /
        redirect location /le5/ if path_root
        # include ssl_servers-vr declarations
        server i-067c94ded1c8e212c 10.241.1.138:9001 check cookie i-067c94ded1c8e212c
        server i-07035eca525e56235 10.241.1.133:9001 check cookie i-07035eca525e56235

I realized that the cookie mechanism was different, so I shut down the old HAProxy server and the problem appeared to be resolved. This morning that client is complaining that the problem has returned - disconnects resulting in the user being kicked out to the login screen.

Checking with multiple browsers, I can see both the old JSESSIONID cookie (with the machine name appended) and the new SRV cookie.
Checking with multiple browsers, my colleagues can NOT see the new SRV cookie from any browser in this office - but they can see the SRV cookie when browsing from a virtual PC in our Atlanta data center! Even more puzzling, though my client cannot see the SRV cookie (either in the F12 cookies sent list, or in the browser's cookies folder), he never experiences an unexpected disconnect.

Suggestions, please?
[PATCH] MINOR: ssl: BoringSSL matches OpenSSL 1.1.0
Hi Willy,

This patch is necessary to build with current BoringSSL (SSL_SESSION is now opaque). BoringSSL has correctly matched OpenSSL 1.1.0 since 3b2ff028, for haproxy's needs. The patch reverts part of haproxy 019f9b10 (openssl-compat.h). This will not break openssl/libressl compatibility.

Can you consider it for 1.9?

Thanks,
Manu

0001-MINOR-ssl-BoringSSL-matches-OpenSSL-1.1.0.patch
Description: Binary data
Re: Issue with TCP splicing
Hi Julien.

On 23/07/2018 09:07, Julien Semaan wrote:
> Hi all,
> We're currently using haproxy in our project PacketFence (https://packetfence.org) and are currently experiencing an issue with haproxy segfaulting when TCP splicing is enabled. We're currently running version 1.8.9 and are occasionally getting segfaults on this specific line in stream.c (line 2131):
>
>   (objt_cs(si_b->end) && __objt_cs(si_b->end)->conn->xprt && __objt_cs(si_b->end)->conn->xprt->snd_pipe) &&
>
> I wasn't too bright when I found it through gdb and forgot to copy the backtrace, so I'm hoping that the issue can be found with this limited information. After commenting out the code for TCP splicing with the patch attached to the email, the issue stopped happening.

Have you tried to disable splice via config?

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#nosplice

> Best Regards,
> --
> Julien Semaan
> jsem...@inverse.ca :: +1 (866) 353-6153 *155 :: www.inverse.ca
> Inverse inc. :: Leaders behind SOGo (www.sogo.nu) and PacketFence (www.packetfence.org)

Best regards
aleks
looking for help with redirect + acl
I need help with a current ACL and redirect that looks like this:

    acl has_statistical_uri path_beg -i /statistical
    http-request redirect code 301 prefix https://statistical.example.com/statisticalinsight if has_statistical_uri

When a request like this comes in:

    https://statistical.example.com/statistical/example?key=value

it gets redirected to this:

    https://statistical.example.com/statisticalinsight/statistical/example?key=value

They would like it to be redirected to:

    https://statistical.example.com/statisticalinsight/example?key=value
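The `prefix` form of the redirect keeps the original path, which is why `/statistical` reappears after `/statisticalinsight`. One possible approach (an untested sketch, assuming HAProxy 1.7+ for the regsub converter) is to rewrite the path in a `location` redirect instead. The trailing slash in the ACL also keeps the rule from matching the already-redirected `/statisticalinsight` path:

```
acl has_statistical_uri path_beg -i /statistical/
# replace the leading /statistical/ segment rather than prefixing;
# the query string sits after the matched prefix, so it is preserved
http-request redirect code 301 location https://statistical.example.com%[url,regsub(^/statistical/,/statisticalinsight/)] if has_statistical_uri
```

With this rule, `/statistical/example?key=value` should become `/statisticalinsight/example?key=value`.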
Reload certificates file without downtime
Hi,

I'm running HAProxy (1.7 or 1.8) inside Docker containers, driven by systemd unit files on the host. I would like to force HAProxy to reload certificates (bind ssl crt) with minimal downtime whenever they are renewed on disk (by another process inside the container).

I tried to send HUP signals to HAProxy through Docker, but this doesn't seem to work. Of course, any other signal just kills the container, so there is a downtime of about 30s to 1min.

Any idea?
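HAProxy does not reload on SIGHUP the way nginx does, which would explain why the HUP attempt did nothing. With 1.8 one possible approach (a sketch, assuming the haproxy master process is PID 1 in the container; the container name is hypothetical) is master-worker mode, whose master re-executes itself and reloads the configuration, including certificates, on SIGUSR2 without dropping established connections:

```
global
    # run in master-worker mode (equivalent to starting haproxy
    # with -W); the master reloads config and certs on SIGUSR2
    master-worker
```

Then, from the host, once the certificates have been renewed inside the container:

    docker kill --signal=USR2 my-haproxy-container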
Issue with TCP splicing
Hi all,

We're currently using haproxy in our project PacketFence (https://packetfence.org) and are currently experiencing an issue with haproxy segfaulting when TCP splicing is enabled.

We're currently running version 1.8.9 and are occasionally getting segfaults on this specific line in stream.c (line 2131):

    (objt_cs(si_b->end) && __objt_cs(si_b->end)->conn->xprt && __objt_cs(si_b->end)->conn->xprt->snd_pipe) &&

I wasn't too bright when I found it through gdb and forgot to copy the backtrace, so I'm hoping that the issue can be found with this limited information. After commenting out the code for TCP splicing with the patch attached to the email, the issue stopped happening.

Best Regards,
--
Julien Semaan
jsem...@inverse.ca :: +1 (866) 353-6153 *155 :: www.inverse.ca
Inverse inc. :: Leaders behind SOGo (www.sogo.nu) and PacketFence (www.packetfence.org)

diff -ruN haproxy-1.8.9.orig/src/stream.c haproxy-1.8.9/src/stream.c
--- haproxy-1.8.9.orig/src/stream.c	2018-05-18 09:10:29.0 -0400
+++ haproxy-1.8.9/src/stream.c	2018-07-20 13:06:41.861913134 -0400
@@ -2122,8 +2122,9 @@
 		if (s->txn)
 			s->txn->req.sov = s->txn->req.eoh + s->txn->req.eol - req->buf->o;
 	}
-
-	/* check if it is wise to enable kernel splicing to forward request data */
+	/* DON'T ENABLE TCP SPLICING AT ALL BECAUSE OF OCCASIONNAL SEGFAULTS WE'VE SEEN
+	 * jsem...@inverse.ca
 	if (!(req->flags & (CF_KERN_SPLICING|CF_SHUTR)) &&
 	    req->to_forward &&
 	    (global.tune.options & GTUNE_USE_SPLICE) &&
@@ -2135,7 +2136,7 @@
 	    (req->flags & CF_STREAMER_FAST))) {
 		req->flags |= CF_KERN_SPLICING;
 	}
-
+	*/
 	/* reflect what the L7 analysers have seen last */
 	rqf_last = req->flags;
@@ -2306,6 +2307,8 @@
 	}
 	/* check if it is wise to enable kernel splicing to forward response data */
+	/* DON'T ENABLE TCP SPLICING AT ALL BECAUSE OF OCCASIONNAL SEGFAULTS WE'VE SEEN
+	 * jsem...@inverse.ca
 	if (!(res->flags & (CF_KERN_SPLICING|CF_SHUTR)) &&
 	    res->to_forward &&
 	    (global.tune.options & GTUNE_USE_SPLICE) &&
@@ -2318,6 +2321,7 @@
 		res->flags |= CF_KERN_SPLICING;
 	}
+	*/
 	/* reflect what the L7 analysers have seen last */
 	rpf_last = res->flags;
Re: Connections stuck in CLOSE_WAIT state with h2
Hi,

I've compiled the latest haproxy 1.8.12 from the Git repo (HAProxy version 1.8.12-5e100b-15, released 2018/07/20) with the latest h2 patches and extended h2 debug info. After some time I caught one CLOSE_WAIT connection. Here is the extended show fd debug:

    25 : st=0x20(R:pra W:pRa) ev=0x00(heopi) [nlc] cache=0 owner=0x24f0a70 iocb=0x4d34c0(conn_fd_handler) tmask=0x1 umask=0x0 cflg=0x80203300 fe=fe-http mux=H2 mux_ctx=0x258a880 st0=7 flg=0x1000 nbst=8 nbcs=0 fctl_cnt=0 send_cnt=8 tree_cnt=8 orph_cnt=8 dbuf=0/0 mbuf=0/16384

And the lsof CLOSE_WAIT entry:

    haproxy 26364 haproxy 25u IPv4 7140390 0t0 TCP ip:https->ip:50041 (CLOSE_WAIT)

Milan