[ https://issues.apache.org/jira/browse/FLINK-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15877984#comment-15877984 ]
ASF GitHub Bot commented on FLINK-5830:
---------------------------------------

Github user StephanEwen commented on the issue:

    https://github.com/apache/flink/pull/3360

    Looking at this from another angle: If any Runnable that is scheduled ever lets an exception bubble out, can we still assume that the JobManager is in a sane state? Or should we actually make every uncaught exception in the RPC executors a fatal error and send a `notifyFatalError` to the `RpcEndpoint`?

> OutOfMemoryError during notify final state in TaskExecutor may cause job stuck
> ------------------------------------------------------------------------------
>
>                 Key: FLINK-5830
>                 URL: https://issues.apache.org/jira/browse/FLINK-5830
>             Project: Flink
>          Issue Type: Bug
>            Reporter: zhijiang
>            Assignee: zhijiang
>
> The scenario is like this:
> The {{JobMaster}} tries to cancel all the executions when processing a failed execution, and the {{TaskExecutor}} has already acknowledged the cancel RPC message.
> When the {{TaskExecutor}} notifies the final state, an {{OutOfMemoryError}} occurs in the {{AkkaRpcActor}}. The error is caught and only logged, so the final state is never sent again.
> The {{JobMaster}} therefore cannot receive the final state and cannot trigger the restart strategy.
> One solution is to catch the {{OutOfMemoryError}} and rethrow it, which shuts down the {{ActorSystem}} and thereby exits the {{TaskExecutor}}. The {{JobMaster}} is then notified of the {{TaskExecutor}} failure and fails all the tasks, so the restart is triggered successfully.
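The following is a minimal, self-contained sketch of the two ideas above: rethrowing fatal JVM errors so the {{ActorSystem}} shuts down, and escalating every uncaught exception in the RPC executor via a fatal-error notification. The class, method, and handler names are illustrative only and are not taken from the actual {{AkkaRpcActor}} or {{RpcEndpoint}} code; in particular, the {{notifyFatalError}} callback is assumed from the comment above, not from the real API.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Illustrative sketch only; not the actual Flink RPC implementation.
 * Wraps actions scheduled on an RPC executor so that failures are not silently swallowed.
 */
public class FatalErrorAwareExecutor {

    private static final Logger LOG = LoggerFactory.getLogger(FatalErrorAwareExecutor.class);

    /** Hypothetical callback, modelled after the notifyFatalError(...) mentioned in the comment. */
    public interface FatalErrorHandler {
        void notifyFatalError(Throwable cause);
    }

    private final FatalErrorHandler fatalErrorHandler;

    public FatalErrorAwareExecutor(FatalErrorHandler fatalErrorHandler) {
        this.fatalErrorHandler = fatalErrorHandler;
    }

    /**
     * Runs a scheduled action (for example, the "notify final state" message handling).
     */
    public void runSafely(Runnable scheduledAction) {
        try {
            scheduledAction.run();
        } catch (OutOfMemoryError oom) {
            // Proposed fix from the issue description: rethrow instead of only logging, so the
            // ActorSystem shuts down, the TaskExecutor exits, and the JobMaster can fail the
            // tasks and trigger the restart strategy.
            throw oom;
        } catch (Throwable t) {
            // Variant suggested in the comment: treat every uncaught exception in the RPC
            // executor as fatal, because the endpoint can no longer be assumed to be in a
            // sane state, and notify the endpoint's fatal error handler.
            LOG.error("Uncaught exception in RPC executor.", t);
            fatalErrorHandler.notifyFatalError(t);
        }
    }
}
{code}

With such a wrapper around every Runnable handed to the RPC executor, an {{OutOfMemoryError}} during the final-state notification would terminate the {{TaskExecutor}} instead of being swallowed, so the {{JobMaster}} would observe the failure and trigger the restart.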