Re: half-closed solved? (was Re: [PATCH] BUG/MAJOR: stream: fix tcp half connection expire causes cpu 100%)

2017-03-12 Thread Willy Tarreau
On Mon, Mar 13, 2017 at 11:22:57AM +0800, 龙红波 wrote:
> Hi Willy,
> I tested the patch you provided; it solves the problems I encountered.
> Thank you for your enthusiastic help. My name is Hongbo Long.

Great, thank you!

Willy



Re: half-closed solved? (was Re: [PATCH] BUG/MAJOR: stream: fix tcp half connection expire causes cpu 100%)

2017-03-12 Thread 龙红波
Hi Willy,
   I tested the patch you provided; it solves the problems I encountered.
   Thank you for your enthusiastic help. My name is Hongbo Long.

2017-03-11 1:52 GMT+08:00 Willy Tarreau :

> Guys,
>
> On Thu, Mar 09, 2017 at 05:06:43PM +0100, Willy Tarreau wrote:
> > So I thought about dealing with it inside shutw() itself, but at most
> > places we don't know what side to use nor if the tunnel timeout should
> > be used instead. I'm now starting to think that we should probably have
> > an "fto" for FIN timeout in each channel that's automatically set when
> > doing a half-way close, or maybe have it in the stream interface.
> >
> > That's definitely an ugly piece of work that I created myself 3 years ago
> > without spotting some possible corner cases, and I'm not proud of it :-(
> >
> > At the moment I have no idea how to *reliably* fix this. Your patch
> > addresses a part of it but I'd rather kill the bug as a whole :-/
>
> So I found a solution to this crap. I store the half-closed timeout in
> the stream interface so that wherever we perform a shutw(), we have all
> the elements we need to switch to it, and we do so only upon this call. As
> a result, the other update places were removed.
>
> For me it now works fine: 1) I can't reproduce the endless loop anymore,
> and 2) my FIN timeouts seem to always work. I'm attaching a candidate
> patch (it's an RFC; I'll put longhb's detailed analysis there).
>
> Longhb, please confirm that you don't have the problem anymore with it.
>
> Richard, if you continue to face this problem of client-fin not always
> working, I'd be interested in having you test this patch to confirm that
> the issue is now gone.
>
> I intend to merge it early next week.
>
> BTW longhb, since you did all the analysis and wrote the initial patch
> I'll set you as the patch's author (unless you disagree, of course). So
> there's still time to provide a real name and not just an e-mail address.
>
> Cheers,
> Willy
>


Re: half-closed solved? (was Re: [PATCH] BUG/MAJOR: stream: fix tcp half connection expire causes cpu 100%)

2017-03-12 Thread Richard Gray



On 2017-03-11 06:52, Willy Tarreau wrote:

Richard, if you continue to face this problem of client-fin not always
working, I'd be interested in having you test this patch to confirm that
the issue is now gone.


Thanks Willy,

I'll find some time to give this a try and let you know how it goes.

--
Richard




half-closed solved? (was Re: [PATCH] BUG/MAJOR: stream: fix tcp half connection expire causes cpu 100%)

2017-03-10 Thread Willy Tarreau
Guys,

On Thu, Mar 09, 2017 at 05:06:43PM +0100, Willy Tarreau wrote:
> So I thought about dealing with it inside shutw() itself, but at most
> places we don't know what side to use nor if the tunnel timeout should
> be used instead. I'm now starting to think that we should probably have
> an "fto" for FIN timeout in each channel that's automatically set when
> doing a half-way close, or maybe have it in the stream interface.
> 
> That's definitely an ugly piece of work that I created myself 3 years ago
> without spotting some possible corner cases, and I'm not proud of it :-(
> 
> At the moment I have no idea how to *reliably* fix this. Your patch
> addresses a part of it but I'd rather kill the bug as a whole :-/

So I found a solution to this crap. I store the half-closed timeout in
the stream interface so that wherever we perform a shutw(), we have all
the elements we need to switch to it, and we do so only upon this call. As
a result, the other update places were removed.
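
To illustrate the idea (a rough sketch only; the helper name below is
made up, and the real change is in the attached patch):

	/* sketch: when the write side of a stream interface is shut down,
	 * switch the read timeout of the channel it reads from to the
	 * stored half-closed timeout, if one was configured.
	 */
	static inline void si_switch_to_hcto(struct stream_interface *si)
	{
		struct channel *ic = si_ic(si);  /* channel this SI reads from */

		if (tick_isset(si->hcto)) {
			ic->rto = si->hcto;                  /* FIN timeout */
			ic->rex = tick_add(now_ms, ic->rto); /* re-arm expiration */
		}
	}

This keeps the clientfin/serverfin logic in a single place, at the moment
of the shutw() itself.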

For me it now works fine: 1) I can't reproduce the endless loop anymore,
and 2) my FIN timeouts seem to always work. I'm attaching a candidate
patch (it's an RFC; I'll put longhb's detailed analysis there).

Longhb, please confirm that you don't have the problem anymore with it.

Richard, if you continue to face this problem of client-fin not always
working, I'd be interested in having you test this patch to confirm that
the issue is now gone.

I intend to merge it early next week.

BTW longhb, since you did all the analysis and wrote the initial patch
I'll set you as the patch's author (unless you disagree, of course). So
there's still time to provide a real name and not just an e-mail address.

Cheers,
Willy
From 3c413a0cd1bea095a1db0feaac314432d4c16e60 Mon Sep 17 00:00:00 2001
From: Willy Tarreau 
Date: Fri, 10 Mar 2017 18:41:51 +0100
Subject: WIP/BUG/MEDIUM: stream: fix client-fin/server-fin handling

... longhb's description here...

reproduced with:
 $ tcploop  L W N20 A P100 F P1 & tcploop 127.0.0.1:1990 C S1000 F

Move the timeouts to the stream interface instead so that we
can enforce their application as soon as we perform a shutw().
Longhb's problem can no longer be reproduced.
---
 include/types/stream_interface.h |  1 +
 src/proto_http.c                 |  1 +
 src/proxy.c                      |  3 +++
 src/stream.c                     | 10 ++
 src/stream_interface.c           | 15 +++
 5 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/include/types/stream_interface.h b/include/types/stream_interface.h
index 51bb4d6..95cf47a 100644
--- a/include/types/stream_interface.h
+++ b/include/types/stream_interface.h
@@ -99,6 +99,7 @@ struct stream_interface {
	/* struct members below are the "remote" part, as seen from the buffer side */
unsigned int err_type;  /* first error detected, one of SI_ET_* */
int conn_retries;   /* number of connect retries left */
+   unsigned int hcto;  /* half-closed timeout (0 = unset) */
 };
 
 /* operations available on a stream-interface */
diff --git a/src/proto_http.c b/src/proto_http.c
index 2d567c1..1ddb3fc 100644
--- a/src/proto_http.c
+++ b/src/proto_http.c
@@ -9007,6 +9007,7 @@ void http_reset_txn(struct stream *s)
s->res.rex = TICK_ETERNITY;
s->res.wex = TICK_ETERNITY;
s->res.analyse_exp = TICK_ETERNITY;
+   s->si[1].hcto = TICK_ETERNITY;
 }
 
 void free_http_res_rules(struct list *r)
diff --git a/src/proxy.c b/src/proxy.c
index 41e40e6..48d7e83 100644
--- a/src/proxy.c
+++ b/src/proxy.c
@@ -1151,6 +1151,9 @@ int stream_set_backend(struct stream *s, struct proxy *be)
if (be->options2 & PR_O2_INDEPSTR)
s->si[1].flags |= SI_FL_INDEP_STR;
 
+   if (tick_isset(be->timeout.serverfin))
+   s->si[1].hcto = be->timeout.serverfin;
+
/* We want to enable the backend-specific analysers except those which
 * were already run as part of the frontend/listener. Note that it would
 * be more reliable to store the list of analysers that have been run,
diff --git a/src/stream.c b/src/stream.c
index 94f7e5a..78cc7ff 100644
--- a/src/stream.c
+++ b/src/stream.c
@@ -168,6 +168,7 @@ struct stream *stream_new(struct session *sess, struct task *t, enum obj_type *o
/* this part should be common with other protocols */
si_reset(&s->si[0]);
si_set_state(&s->si[0], SI_ST_EST);
+   s->si[0].hcto = sess->fe->timeout.clientfin;
 
/* attach the incoming connection to the stream interface now. */
if (conn)
@@ -182,6 +183,7 @@ struct stream *stream_new(struct session *sess, struct task *t, enum obj_type *o
 * callbacks will be initialized before attempting to connect.
 */
si_reset(&s->si[1]);
+   s->si[1].hcto = TICK_ETERNITY;
 
if (likely(sess->fe->options2 & PR_O2_INDEPSTR))
s->si[1].flags |= SI_FL_INDEP_STR;

Re: [PATCH] BUG/MAJOR: stream: fix tcp half connection expire causes cpu 100%

2017-03-10 Thread Willy Tarreau
On Fri, Mar 10, 2017 at 04:09:10PM +0100, Willy Tarreau wrote:
> Hi again,
> 
> I'm having an issue with your reproducer: it doesn't work at
> all for me, and I'm a bit surprised by this:
> 
> On Wed, Mar 08, 2017 at 10:09:25PM +0800, longhb wrote:
> >  [PATCH] BUG/MAJOR: stream: fix tcp half connection expire causes cpu 100%
> > 
> >  Reproduction conditions:
> >  haproxy config:
> >  global
> >      tune.bufsize 10485760
> >  defaults
> >      timeout server-fin 90s
> >      timeout client-fin 90s
> >  backend node2
> >      mode tcp
> >      timeout server 900s
> >      timeout connect 10s
> >      server def 127.0.0.1:
> >  frontend fe_api
> >      mode tcp
> >      timeout client 900s
> >      bind :1990
> >      use_backend node2
> > With timeout server-fin shorter than timeout server: the backend server
> > sends data, which remains in haproxy's buffer; the backend server then
> > sends a FIN packet, which haproxy receives. At this point the session
> > information is as follows:
> > 0x2373470: proto=tcpv4 src=127.0.0.1:39513 fe=fe_api be=node2
> > srv=def ts=08 age=1s calls=3 rq[f=848000h,i=0,an=00h,rx=14m58s,wx=,ax=]
> > rp[f=8004c020h,i=0,an=00h,rx=,wx=14m58s,ax=] s0=[7,0h,fd=6,ex=]
> > s1=[7,18h,fd=7,ex=] exp=14m58s
> > rp now has the CF_SHUTR flag set. Next, the client sends its FIN packet,
> > and the session information becomes:
> > 0x2373470: proto=tcpv4 src=127.0.0.1:39513 fe=fe_api be=node2
> > srv=def ts=08 age=38s calls=4 rq[f=84a020h,i=0,an=00h,rx=,wx=,ax=]
> > rp[f=8004c020h,i=0,an=00h,rx=1m11s,wx=14m21s,ax=] s0=[7,0h,fd=6,ex=]
> > s1=[9,10h,fd=7,ex=] exp=1m11s
> 
> Here, as you mentioned, both remotes have sent their FIN, so once the
> data are transferred the session should close. So I'm definitely missing
> something. Does the server (or the client) send more data than the buffer
> can store? Does one side or the other refrain from reading all the data?
> I've tried various such scenarios and I cannot reproduce your situation,
> unfortunately. I have an idea of how to definitely get rid of all this
> mess but I have no way to validate that it will work in your case. Any
> help would be much appreciated. BTW, if you want more detailed session
> dumps, you can type "show sess " or "show sess all"; you'll get many
> more details about the internals.
> 
> Also, could you tell me what version you are using?

OK, don't waste your time: I finally managed to get it to work by filling
the buffer from the client to the server and preventing the server from
reading the data. I did it with tcploop (I've also reduced the timeouts):

# config :

global
    tune.bufsize 10485760

defaults
    timeout server-fin 3s
    timeout client-fin 3s

backend node2
    mode tcp
    timeout server 90s
    timeout connect 1s
    server def 127.0.0.1:

frontend fe_api
    mode tcp
    timeout client 90s
    bind :1990
    use_backend node2

$ tcploop  L W N20 A P100 F P1 &
$ tcploop 127.0.0.1:1990 C S100 F

strace shows that epoll_wait() loops after 3 seconds.

I think we can address it centrally in the shutw() functions by setting
the clientfin/serverfin values into the stream interface, which will
allow us to remove all the incomplete tests that are spread all over
the code.
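
Concretely, that could look like this (a sketch only; "hcto" here stands
for a per-stream-interface half-closed timeout, stored once up front and
then used by shutw()):

	/* sketch: at stream creation, the frontend's client-fin timeout
	 * applies to the client-side stream interface; the backend is not
	 * known yet, so the server side gets no timeout for now.
	 */
	s->si[0].hcto = sess->fe->timeout.clientfin;
	s->si[1].hcto = TICK_ETERNITY;

	/* later, in stream_set_backend(), once the backend is known, its
	 * server-fin timeout applies to the server-side stream interface.
	 */
	if (tick_isset(be->timeout.serverfin))
		s->si[1].hcto = be->timeout.serverfin;

With the values stored on each stream interface, shutw() can arm the
proper read timeout itself and the scattered updates can go away.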

Willy



Re: [PATCH] BUG/MAJOR: stream: fix tcp half connection expire causes cpu 100%

2017-03-10 Thread Willy Tarreau
Hi again,

I'm having an issue with your reproducer: it doesn't work at
all for me, and I'm a bit surprised by this:

On Wed, Mar 08, 2017 at 10:09:25PM +0800, longhb wrote:
>  [PATCH] BUG/MAJOR: stream: fix tcp half connection expire causes cpu 100%
> 
>  Reproduction conditions:
>  haproxy config:
>  global
>      tune.bufsize 10485760
>  defaults
>      timeout server-fin 90s
>      timeout client-fin 90s
>  backend node2
>      mode tcp
>      timeout server 900s
>      timeout connect 10s
>      server def 127.0.0.1:
>  frontend fe_api
>      mode tcp
>      timeout client 900s
>      bind :1990
>      use_backend node2
> With timeout server-fin shorter than timeout server: the backend server
> sends data, which remains in haproxy's buffer; the backend server then
> sends a FIN packet, which haproxy receives. At this point the session
> information is as follows:
> 0x2373470: proto=tcpv4 src=127.0.0.1:39513 fe=fe_api be=node2
> srv=def ts=08 age=1s calls=3 rq[f=848000h,i=0,an=00h,rx=14m58s,wx=,ax=]
> rp[f=8004c020h,i=0,an=00h,rx=,wx=14m58s,ax=] s0=[7,0h,fd=6,ex=]
> s1=[7,18h,fd=7,ex=] exp=14m58s
> rp now has the CF_SHUTR flag set. Next, the client sends its FIN packet,
> and the session information becomes:
> 0x2373470: proto=tcpv4 src=127.0.0.1:39513 fe=fe_api be=node2
> srv=def ts=08 age=38s calls=4 rq[f=84a020h,i=0,an=00h,rx=,wx=,ax=]
> rp[f=8004c020h,i=0,an=00h,rx=1m11s,wx=14m21s,ax=] s0=[7,0h,fd=6,ex=]
> s1=[9,10h,fd=7,ex=] exp=1m11s

Here, as you mentioned, both remotes have sent their FIN, so once the
data are transferred the session should close. So I'm definitely missing
something. Does the server (or the client) send more data than the buffer
can store? Does one side or the other refrain from reading all the data?
I've tried various such scenarios and I cannot reproduce your situation,
unfortunately. I have an idea of how to definitely get rid of all this
mess but I have no way to validate that it will work in your case. Any
help would be much appreciated. BTW, if you want more detailed session
dumps, you can type "show sess " or "show sess all"; you'll get many
more details about the internals.

Also, could you tell me what version you are using?

Thanks!
Willy



Re: [PATCH] BUG/MAJOR: stream: fix tcp half connection expire causes cpu 100%

2017-03-09 Thread Willy Tarreau
Hi again,

CCing Richard who reported client-fin not always working in the past.

Updates below.

On Wed, Mar 08, 2017 at 09:21:37PM +0100, Willy Tarreau wrote:
> Hi,
> 
> On Wed, Mar 08, 2017 at 10:09:25PM +0800, longhb wrote:
> >  [PATCH] BUG/MAJOR: stream: fix tcp half connection expire causes cpu 100%
> > 
> >  Reproduction conditions:
> >  haproxy config:
> >  global
> >      tune.bufsize 10485760
> >  defaults
> >      timeout server-fin 90s
> >      timeout client-fin 90s
> >  backend node2
> >      mode tcp
> >      timeout server 900s
> >      timeout connect 10s
> >      server def 127.0.0.1:
> >  frontend fe_api
> >      mode tcp
> >      timeout client 900s
> >      bind :1990
> >      use_backend node2
> > With timeout server-fin shorter than timeout server: the backend server
> > sends data, which remains in haproxy's buffer; the backend server then
> > sends a FIN packet, which haproxy receives. At this point the session
> > information is as follows:
> > 0x2373470: proto=tcpv4 src=127.0.0.1:39513 fe=fe_api be=node2
> > srv=def ts=08 age=1s calls=3 rq[f=848000h,i=0,an=00h,rx=14m58s,wx=,ax=]
> > rp[f=8004c020h,i=0,an=00h,rx=,wx=14m58s,ax=] s0=[7,0h,fd=6,ex=]
> > s1=[7,18h,fd=7,ex=] exp=14m58s
> > rp now has the CF_SHUTR flag set. Next, the client sends its FIN packet,
> > and the session information becomes:
> > 0x2373470: proto=tcpv4 src=127.0.0.1:39513 fe=fe_api be=node2
> > srv=def ts=08 age=38s calls=4 rq[f=84a020h,i=0,an=00h,rx=,wx=,ax=]
> > rp[f=8004c020h,i=0,an=00h,rx=1m11s,wx=14m21s,ax=] s0=[7,0h,fd=6,ex=]
> > s1=[9,10h,fd=7,ex=] exp=1m11s
> > After waiting 90s, the session information is as follows:
> > 0x2373470: proto=tcpv4 src=127.0.0.1:39513 fe=fe_api be=node2
> > srv=def ts=04 age=4m11s calls=718074391 rq[f=84a020h,i=0,an=00h,rx=,wx=,ax=]
> > rp[f=8004c020h,i=0,an=00h,rx=?,wx=10m49s,ax=] s0=[7,0h,fd=6,ex=]
> > s1=[9,10h,fd=7,ex=] exp=? run(nice=0)
> > cpu information:
> > 6899 root  20   0  112224  21408   4260 R 100.0  0.7   3:04.96 haproxy
> > The large buffer size ensures that data remains in haproxy's buffer
> > while haproxy can still receive the FIN packet and set the CF_SHUTR
> > flag. Once CF_SHUTR is set, the following code never clears the read
> > timeout, causing 100% CPU usage:
> > stream.c:process_stream:
> > 	if (unlikely((res->flags & (CF_SHUTR|CF_READ_TIMEOUT)) == CF_READ_TIMEOUT)) {
> > 		if (si_b->flags & SI_FL_NOHALF)
> > 			si_b->flags |= SI_FL_NOLINGER;
> > 		si_shutr(si_b);
> > 	}
> > Once the read side has been closed, setting the read timeout makes no
> > sense. Yet the read timeout is set regardless of CF_SHUTR:
> > 	if (tick_isset(s->be->timeout.serverfin)) {
> > 		res->rto = s->be->timeout.serverfin;
> > 		res->rex = tick_add(now_ms, res->rto);
> > 	}
> > ---
> >  src/stream.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/src/stream.c b/src/stream.c
> > index b333dec..ab07505 100644
> > --- a/src/stream.c
> > +++ b/src/stream.c
> > @@ -2095,7 +2095,7 @@ struct task *process_stream(struct task *t)
> > if (req->flags & CF_READ_ERROR)
> > si_b->flags |= SI_FL_NOLINGER;
> > si_shutw(si_b);
> > -   if (tick_isset(s->be->timeout.serverfin)) {
> > +   if (tick_isset(s->be->timeout.serverfin) && !(res->flags & CF_SHUTR)) {
> > res->rto = s->be->timeout.serverfin;
> > res->rex = tick_add(now_ms, res->rto);
> > }
> > @@ -2278,7 +2278,7 @@ struct task *process_stream(struct task *t)
> > if (unlikely((res->flags & (CF_SHUTW|CF_SHUTW_NOW)) == CF_SHUTW_NOW &&
> >  channel_is_empty(res))) {
> > si_shutw(si_f);
> > -   if (tick_isset(sess->fe->timeout.clientfin)) {
> > +   if (tick_isset(sess->fe->timeout.clientfin) && !(req->flags & CF_SHUTR)) {
> >  

Re: [PATCH] BUG/MAJOR: stream: fix tcp half connection expire causes cpu 100%

2017-03-08 Thread Willy Tarreau
Hi,

On Wed, Mar 08, 2017 at 10:09:25PM +0800, longhb wrote:
>  [PATCH] BUG/MAJOR: stream: fix tcp half connection expire causes cpu 100%
> 
>  Reproduction conditions:
>  haproxy config:
>  global
>      tune.bufsize 10485760
>  defaults
>      timeout server-fin 90s
>      timeout client-fin 90s
>  backend node2
>      mode tcp
>      timeout server 900s
>      timeout connect 10s
>      server def 127.0.0.1:
>  frontend fe_api
>      mode tcp
>      timeout client 900s
>      bind :1990
>      use_backend node2
> With timeout server-fin shorter than timeout server: the backend server
> sends data, which remains in haproxy's buffer; the backend server then
> sends a FIN packet, which haproxy receives. At this point the session
> information is as follows:
> 0x2373470: proto=tcpv4 src=127.0.0.1:39513 fe=fe_api be=node2
> srv=def ts=08 age=1s calls=3 rq[f=848000h,i=0,an=00h,rx=14m58s,wx=,ax=]
> rp[f=8004c020h,i=0,an=00h,rx=,wx=14m58s,ax=] s0=[7,0h,fd=6,ex=]
> s1=[7,18h,fd=7,ex=] exp=14m58s
> rp now has the CF_SHUTR flag set. Next, the client sends its FIN packet,
> and the session information becomes:
> 0x2373470: proto=tcpv4 src=127.0.0.1:39513 fe=fe_api be=node2
> srv=def ts=08 age=38s calls=4 rq[f=84a020h,i=0,an=00h,rx=,wx=,ax=]
> rp[f=8004c020h,i=0,an=00h,rx=1m11s,wx=14m21s,ax=] s0=[7,0h,fd=6,ex=]
> s1=[9,10h,fd=7,ex=] exp=1m11s
> After waiting 90s, the session information is as follows:
> 0x2373470: proto=tcpv4 src=127.0.0.1:39513 fe=fe_api be=node2
> srv=def ts=04 age=4m11s calls=718074391 rq[f=84a020h,i=0,an=00h,rx=,wx=,ax=]
> rp[f=8004c020h,i=0,an=00h,rx=?,wx=10m49s,ax=] s0=[7,0h,fd=6,ex=]
> s1=[9,10h,fd=7,ex=] exp=? run(nice=0)
> cpu information:
> 6899 root  20   0  112224  21408   4260 R 100.0  0.7   3:04.96 haproxy
> The large buffer size ensures that data remains in haproxy's buffer
> while haproxy can still receive the FIN packet and set the CF_SHUTR
> flag. Once CF_SHUTR is set, the following code never clears the read
> timeout, causing 100% CPU usage:
> stream.c:process_stream:
> 	if (unlikely((res->flags & (CF_SHUTR|CF_READ_TIMEOUT)) == CF_READ_TIMEOUT)) {
> 		if (si_b->flags & SI_FL_NOHALF)
> 			si_b->flags |= SI_FL_NOLINGER;
> 		si_shutr(si_b);
> 	}
> Once the read side has been closed, setting the read timeout makes no
> sense. Yet the read timeout is set regardless of CF_SHUTR:
> 	if (tick_isset(s->be->timeout.serverfin)) {
> 		res->rto = s->be->timeout.serverfin;
> 		res->rex = tick_add(now_ms, res->rto);
> 	}
> ---
>  src/stream.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/src/stream.c b/src/stream.c
> index b333dec..ab07505 100644
> --- a/src/stream.c
> +++ b/src/stream.c
> @@ -2095,7 +2095,7 @@ struct task *process_stream(struct task *t)
>   if (req->flags & CF_READ_ERROR)
>   si_b->flags |= SI_FL_NOLINGER;
>   si_shutw(si_b);
> - if (tick_isset(s->be->timeout.serverfin)) {
> +	if (tick_isset(s->be->timeout.serverfin) && !(res->flags & CF_SHUTR)) {
>   res->rto = s->be->timeout.serverfin;
>   res->rex = tick_add(now_ms, res->rto);
>   }
> @@ -2278,7 +2278,7 @@ struct task *process_stream(struct task *t)
>   if (unlikely((res->flags & (CF_SHUTW|CF_SHUTW_NOW)) == CF_SHUTW_NOW &&
>channel_is_empty(res))) {
>   si_shutw(si_f);
> - if (tick_isset(sess->fe->timeout.clientfin)) {
> +	if (tick_isset(sess->fe->timeout.clientfin) && !(req->flags & CF_SHUTR)) {
>   req->rto = sess->fe->timeout.clientfin;
>   req->rex = tick_add(now_ms, req->rto);
>   }

I think the patch is fine, but I'm worried because I think we changed
this recently (in 1.6 or so), so I'd like to investigate the cause of
the change (if any) and see what the motives were, because I suspect
we possibly broke something else at the same time.

Thanks,
Willy



[PATCH] BUG/MAJOR: stream: fix tcp half connection expire causes cpu 100%

2017-03-08 Thread longhb
 [PATCH] BUG/MAJOR: stream: fix tcp half connection expire causes cpu 100%

 Reproduction conditions:
 haproxy config:
 global
     tune.bufsize 10485760
 defaults
     timeout server-fin 90s
     timeout client-fin 90s
 backend node2
     mode tcp
     timeout server 900s
     timeout connect 10s
     server def 127.0.0.1:
 frontend fe_api
     mode tcp
     timeout client 900s
     bind :1990
     use_backend node2
With timeout server-fin shorter than timeout server: the backend server
sends data, which remains in haproxy's buffer; the backend server then
sends a FIN packet, which haproxy receives. At this point the session
information is as follows:
0x2373470: proto=tcpv4 src=127.0.0.1:39513 fe=fe_api be=node2
srv=def ts=08 age=1s calls=3 rq[f=848000h,i=0,an=00h,rx=14m58s,wx=,ax=]
rp[f=8004c020h,i=0,an=00h,rx=,wx=14m58s,ax=] s0=[7,0h,fd=6,ex=]
s1=[7,18h,fd=7,ex=] exp=14m58s
rp now has the CF_SHUTR flag set. Next, the client sends its FIN packet,
and the session information becomes:
0x2373470: proto=tcpv4 src=127.0.0.1:39513 fe=fe_api be=node2
srv=def ts=08 age=38s calls=4 rq[f=84a020h,i=0,an=00h,rx=,wx=,ax=]
rp[f=8004c020h,i=0,an=00h,rx=1m11s,wx=14m21s,ax=] s0=[7,0h,fd=6,ex=]
s1=[9,10h,fd=7,ex=] exp=1m11s
After waiting 90s, the session information is as follows:
0x2373470: proto=tcpv4 src=127.0.0.1:39513 fe=fe_api be=node2
srv=def ts=04 age=4m11s calls=718074391 rq[f=84a020h,i=0,an=00h,rx=,wx=,ax=]
rp[f=8004c020h,i=0,an=00h,rx=?,wx=10m49s,ax=] s0=[7,0h,fd=6,ex=]
s1=[9,10h,fd=7,ex=] exp=? run(nice=0)
cpu information:
6899 root  20   0  112224  21408   4260 R 100.0  0.7   3:04.96 haproxy
The large buffer size ensures that data remains in haproxy's buffer while
haproxy can still receive the FIN packet and set the CF_SHUTR flag. Once
CF_SHUTR is set, the following code never clears the read timeout, causing
100% CPU usage:
stream.c:process_stream:
	if (unlikely((res->flags & (CF_SHUTR|CF_READ_TIMEOUT)) == CF_READ_TIMEOUT)) {
		if (si_b->flags & SI_FL_NOHALF)
			si_b->flags |= SI_FL_NOLINGER;
		si_shutr(si_b);
	}
Once the read side has been closed, setting the read timeout makes no
sense. Yet the read timeout is set regardless of CF_SHUTR:
	if (tick_isset(s->be->timeout.serverfin)) {
		res->rto = s->be->timeout.serverfin;
		res->rex = tick_add(now_ms, res->rto);
	}
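
For reference, the tick helpers involved behave roughly as follows (a
simplified sketch of haproxy's ticks.h, for illustration only):

	#define TICK_ETERNITY 0  /* 0 is reserved to mean "never expires" */

	/* a tick is "set" whenever it is not eternity */
	static inline int tick_isset(int expire)
	{
		return expire != 0;
	}

	/* add a timeout to <now>, skipping the reserved value 0 */
	static inline int tick_add(int now, int timeout)
	{
		now += timeout;
		if (!now)
			now++;  /* 0 would mean eternity, skip it */
		return now;
	}

Because res->rex is re-armed this way while CF_SHUTR is already set, the
read expiration keeps firing, the check above refuses to act on it, the
expiration is never cleared, and process_stream() is woken up again
immediately, in a loop.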
---
 src/stream.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/stream.c b/src/stream.c
index b333dec..ab07505 100644
--- a/src/stream.c
+++ b/src/stream.c
@@ -2095,7 +2095,7 @@ struct task *process_stream(struct task *t)
if (req->flags & CF_READ_ERROR)
si_b->flags |= SI_FL_NOLINGER;
si_shutw(si_b);
-   if (tick_isset(s->be->timeout.serverfin)) {
+   if (tick_isset(s->be->timeout.serverfin) && !(res->flags & CF_SHUTR)) {
res->rto = s->be->timeout.serverfin;
res->rex = tick_add(now_ms, res->rto);
}
@@ -2278,7 +2278,7 @@ struct task *process_stream(struct task *t)
if (unlikely((res->flags & (CF_SHUTW|CF_SHUTW_NOW)) == CF_SHUTW_NOW &&
 channel_is_empty(res))) {
si_shutw(si_f);
-   if (tick_isset(sess->fe->timeout.clientfin)) {
+   if (tick_isset(sess->fe->timeout.clientfin) && !(req->flags & CF_SHUTR)) {
req->rto = sess->fe->timeout.clientfin;
req->rex = tick_add(now_ms, req->rto);
}
-- 
1.9.1