[ 
https://issues.apache.org/jira/browse/FLINK-27576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17534934#comment-17534934
 ] 

Aitozi edited comment on FLINK-27576 at 5/11/22 2:37 PM:
---------------------------------------------------------

Hi [~zhisheng], there is a ticket tracking this: 
https://issues.apache.org/jira/browse/FLINK-24713 I will open a PR for it 
soon 


was (Author: aitozi):
Hi [~zhisheng] there is a ticket tracking this 
[https://issues.apache.org/jira/projects/FLINK/issues/FLINK-24713] I will open 
a PR for this soon 

> Flink requests new TaskManager pods when the JobManager pod is deleted, but 
> removes them once the TaskExecutor exceeds the idle timeout 
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-27576
>                 URL: https://issues.apache.org/jira/browse/FLINK-27576
>             Project: Flink
>          Issue Type: Bug
>          Components: Deployment / Kubernetes
>    Affects Versions: 1.12.0
>            Reporter: zhisheng
>            Priority: Major
>         Attachments: image-2022-05-11-20-06-58-955.png, 
> image-2022-05-11-20-08-01-739.png, jobmanager_log.txt
>
>
> Flink 1.12.0 with HA (ZooKeeper) and checkpointing enabled: when I use 
> kubectl to delete the JM pod, the job requests a new JM pod and fails over 
> from the last checkpoint, which is fine. But it also requests new TM pods 
> that are never actually used; the new TM pods are closed once the 
> TaskExecutor exceeds the idle timeout. Since the job actually reuses the old 
> TMs, why does it need to request new TM pods? Will the job fail if the 
> cluster has no resources for the new TMs? Can we optimize this and reuse the 
> old TMs directly?
>  
> [^jobmanager_log.txt]
> ^!image-2022-05-11-20-06-58-955.png!^
> ^!image-2022-05-11-20-08-01-739.png!^
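
The idle-release behavior described above is governed by the resource
manager's TaskManager timeout. A minimal flink-conf.yaml sketch of the setup
in the report (the timeout value and ZooKeeper/storage addresses are
illustrative placeholders, not taken from the reporter's cluster):

```yaml
# ZooKeeper-based HA, as used in the report (quorum/storageDir are examples)
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-host:2181
high-availability.storageDir: hdfs:///flink/ha

# How long an idle TaskExecutor is kept before its pod is released (ms).
# This is the timeout after which the unused new TM pods are torn down.
resourcemanager.taskmanager-timeout: 30000
```

With these settings, after the JM pod is deleted the new JM recovers from the
last checkpoint via ZooKeeper, and any surplus TM pods it requested are
released once they sit idle past the timeout.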



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
