[ 
https://issues.apache.org/jira/browse/FLINK-21884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-21884:
-----------------------------------
    Description: 
In Flink 1.13 (and older versions), TaskManager failures stall processing 
for a significant amount of time, even though the system receives indications of 
the failure almost immediately through network connection losses.

This is due to the high default heartbeat timeout of 50 seconds [1], chosen to 
accommodate GC pauses, transient network disruptions, and generally slow 
environments (otherwise, we would unregister healthy TaskManagers).
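
For reference, the detection time can already be tightened today via the 
{{heartbeat.interval}} / {{heartbeat.timeout}} options. A minimal sketch (the 
2s/10s values are only illustrative, and on a real cluster the same keys would 
go into flink-conf.yaml rather than being set programmatically):

{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.HeartbeatManagerOptions;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FastFailureDetectionExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Ping TaskManagers every 2s instead of the default 10s ("heartbeat.interval").
        conf.set(HeartbeatManagerOptions.HEARTBEAT_INTERVAL, 2_000L);
        // Declare an unresponsive TaskManager dead after 10s instead of the default 50s
        // ("heartbeat.timeout"). Too low a value risks unregistering healthy TaskManagers
        // during long GC pauses, which is exactly the trade-off described above.
        conf.set(HeartbeatManagerOptions.HEARTBEAT_TIMEOUT, 10_000L);

        // Local environment that picks up the tightened heartbeat settings.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironment(1, conf);
        // ... build and execute the job as usual ...
        env.fromElements(1, 2, 3).print();
        env.execute("fast-failure-detection-example");
    }
}
{code}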

Such a high timeout can lead to disruptions in processing (no processing 
for certain periods, high latencies, buildup of consumer lag, etc.). In Reactive 
Mode (FLINK-10407), the issue surfaces on scale-down events: the loss of 
a TaskManager is immediately visible in the logs, but the job is stuck in 
"FAILING" for quite a while until the TaskManager is actually deregistered. (Note 
that this issue is not that critical in an autoscaling setup, because Flink 
controls the scale-down events and can trigger them proactively.)

On this metrics dashboard, one can see that the job suffers significant throughput 
drops / consumer lag during scale-down (and also CPU usage spikes while 
processing the queued events, which in turn lead to incorrect scale-up events).

 !image-2021-03-19-20-10-40-324.png|thumbnail!

One idea to solve this problem is to:
- Score TaskManagers based on certain signals (number of exceptions reported, 
exception types (connection losses, Akka failures), failure frequencies, ...) and 
blacklist them accordingly; see the sketch below.
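
A very rough sketch of what such a scorer could look like (purely illustrative, 
not existing Flink API; the class name, signal weights, and threshold are all 
made up):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical sketch of the scoring idea above. Each reported signal against a
 * TaskManager adds a weight to its score; once the score crosses a threshold, the
 * TaskManager is treated as failed without waiting for the full heartbeat timeout.
 */
public class TaskManagerFailureScorer {

    // Signal weights and threshold are assumed values, purely for illustration.
    private static final double CONNECTION_LOSS_WEIGHT = 0.6;
    private static final double AKKA_FAILURE_WEIGHT = 0.3;
    private static final double GENERIC_EXCEPTION_WEIGHT = 0.1;
    private static final double BLACKLIST_THRESHOLD = 1.0;

    private final Map<String, Double> scores = new ConcurrentHashMap<>();

    public void reportConnectionLoss(String taskManagerId) {
        addScore(taskManagerId, CONNECTION_LOSS_WEIGHT);
    }

    public void reportAkkaFailure(String taskManagerId) {
        addScore(taskManagerId, AKKA_FAILURE_WEIGHT);
    }

    public void reportGenericException(String taskManagerId) {
        addScore(taskManagerId, GENERIC_EXCEPTION_WEIGHT);
    }

    /** True once the accumulated score says the TaskManager should be unregistered early. */
    public boolean shouldBlacklist(String taskManagerId) {
        return scores.getOrDefault(taskManagerId, 0.0) >= BLACKLIST_THRESHOLD;
    }

    private void addScore(String taskManagerId, double weight) {
        scores.merge(taskManagerId, weight, Double::sum);
    }
}
{code}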

[1] 
https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/config/#heartbeat-timeout




> Reduce TaskManager failure detection time
> -----------------------------------------
>
>                 Key: FLINK-21884
>                 URL: https://issues.apache.org/jira/browse/FLINK-21884
>             Project: Flink
>          Issue Type: Improvement
>          Components: Runtime / Coordination
>            Reporter: Robert Metzger
>            Priority: Critical
>             Fix For: 1.14.0
>
>         Attachments: image-2021-03-19-20-10-40-324.png
>


