[ https://issues.apache.org/jira/browse/FLINK-19909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17224511#comment-17224511 ]

Till Rohrmann commented on FLINK-19909:
---------------------------------------

I think the behaviour should not differ depending on whether the job was 
canceled or failed terminally. In both cases, the HA data should be cleaned up 
and the cluster should shut down. Only if the system encounters a framework 
exception should we call the fatal error handler.
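
For illustration, a minimal Java sketch of that policy. This is not Flink's 
actual code; {{TerminationPolicy}}, {{HighAvailabilityServices#cleanUpAllData}} 
and the other names are hypothetical stand-ins:

{code:java}
/**
 * Hypothetical sketch of the termination policy described above: canceled and
 * terminally failed jobs are handled identically (clean up HA data, shut the
 * cluster down), and only framework exceptions reach the fatal error handler.
 */
public class TerminationPolicy {

    enum JobStatus { FINISHED, CANCELED, FAILED }

    interface HighAvailabilityServices { void cleanUpAllData(); }
    interface ClusterEntrypoint { void shutDown(); }
    interface FatalErrorHandler { void onFatalError(Throwable t); }

    private final HighAvailabilityServices haServices;
    private final ClusterEntrypoint cluster;
    private final FatalErrorHandler fatalErrorHandler;

    TerminationPolicy(HighAvailabilityServices haServices,
                      ClusterEntrypoint cluster,
                      FatalErrorHandler fatalErrorHandler) {
        this.haServices = haServices;
        this.cluster = cluster;
        this.fatalErrorHandler = fatalErrorHandler;
    }

    /** Called once the single application job reaches a terminal state. */
    void onJobReachedTerminalState(JobStatus status) {
        switch (status) {
            case FINISHED:
            case CANCELED:
            case FAILED:
                // Same path for cancellation and terminal failure:
                // clean up HA data, then shut the cluster down gracefully.
                haServices.cleanUpAllData();
                cluster.shutDown();
                break;
        }
    }

    /** Only framework-level exceptions escalate to the fatal error handler. */
    void onFrameworkException(Throwable t) {
        fatalErrorHandler.onFatalError(t);
    }
}
{code}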

> Flink application in attach mode could not terminate when the only job is 
> canceled
> ----------------------------------------------------------------------------------
>
>                 Key: FLINK-19909
>                 URL: https://issues.apache.org/jira/browse/FLINK-19909
>             Project: Flink
>          Issue Type: Bug
>          Components: Deployment / Kubernetes, Deployment / YARN, Runtime / 
> Coordination
>    Affects Versions: 1.12.0, 1.11.3
>            Reporter: Yang Wang
>            Assignee: Kostas Kloudas
>            Priority: Blocker
>             Fix For: 1.12.0, 1.11.3
>
>         Attachments: log.jm
>
>
> Currently, a Yarn or Kubernetes application in attach mode cannot terminate 
> the Flink cluster after the only job is canceled, because we throw 
> {{ApplicationExecutionException}} in 
> {{ApplicationDispatcherBootstrap#runApplicationEntryPoint}} but only check 
> for {{ApplicationFailureException}} in 
> {{runApplicationAndShutdownClusterAsync}}. As a result, we fall through to 
> the fatal error handler, which makes the jobmanager exit directly, without a 
> chance to deregister itself from the cluster manager (Yarn/Kubernetes). That 
> means the cluster manager will relaunch the jobmanager again and again until 
> it exhausts the retry attempts.
>  
> cc [~kkl0u], I am not sure whether this is an expected change. I think it 
> worked in 1.11.
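
For illustration, a simplified (non-Flink) sketch of the mismatch described in 
the report. The two exception classes stand in for the real Flink types, and 
the method bodies are hypothetical reductions of the actual logic:

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

public class BootstrapSketch {

    static class ApplicationExecutionException extends RuntimeException {
        ApplicationExecutionException(String message) { super(message); }
    }

    static class ApplicationFailureException extends RuntimeException {
        ApplicationFailureException(String message) { super(message); }
    }

    // The entry point fails with ApplicationExecutionException when the
    // only job is canceled.
    static CompletableFuture<Void> runApplicationEntryPoint() {
        return CompletableFuture.failedFuture(
                new ApplicationExecutionException("the only job was canceled"));
    }

    static void runApplicationAndShutdownClusterAsync(
            Runnable shutDownCluster, Runnable fatalErrorHandler) {
        runApplicationEntryPoint().whenComplete((ignored, throwable) -> {
            Throwable cause = throwable instanceof CompletionException
                    ? throwable.getCause() : throwable;
            // BUG: only ApplicationFailureException leads to a graceful
            // shutdown. The ApplicationExecutionException thrown on cancel
            // falls through to the fatal error handler, so the jobmanager
            // exits without deregistering from Yarn/Kubernetes.
            if (cause == null || cause instanceof ApplicationFailureException) {
                shutDownCluster.run();
            } else {
                fatalErrorHandler.run();
            }
        });
    }

    public static void main(String[] args) {
        runApplicationAndShutdownClusterAsync(
                () -> System.out.println("graceful cluster shutdown"),
                () -> System.out.println("fatal error handler -> JVM exit"));
    }
}
{code}

Running this sketch prints "fatal error handler -> JVM exit": the cancel path 
never reaches the graceful shutdown branch, matching the behaviour described 
above.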



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
