[
https://issues.apache.org/jira/browse/FLINK-24894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17443711#comment-17443711
]
Yangze Guo commented on FLINK-24894:
------------------------------------
I think what you need is:
- Stop your job with a savepoint.
- Edit the job configuration.
- Resume the job from the savepoint.
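The three steps above can be sketched with the Flink CLI roughly as follows (the job id, savepoint directory, parallelism, and jar name are placeholders, not values from this issue):

```shell
# 1. Stop the job, taking a savepoint first (the savepoint path is printed).
#    <job-id> and the target directory are hypothetical placeholders.
flink stop --savepointPath s3://my-bucket/savepoints <job-id>

# 2. Edit the job configuration (e.g. change the parallelism in
#    flink-conf.yaml or override it on the command line).

# 3. Resubmit the job from the savepoint so it picks up the new settings.
flink run -s s3://my-bucket/savepoints/savepoint-xxxx \
    -p 40 \
    my-job.jar
```

Resubmitting with -s starts from the savepoint state rather than the JobGraph stored under high-availability.storageDir, so the new configuration takes effect.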
> Flink on k8s, with HA mode enabled via KubernetesHaServicesFactory: when I
> deleted the job, the ConfigMap created by the HA mechanism was not deleted.
> -----------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: FLINK-24894
> URL: https://issues.apache.org/jira/browse/FLINK-24894
> Project: Flink
> Issue Type: Bug
> Components: Deployment / Kubernetes
> Environment: 1.13.2
> Reporter: john
> Priority: Major
>
> Flink on k8s, with HA mode enabled via KubernetesHaServicesFactory: when I
> deleted the job, the ConfigMap created by the HA mechanism was not deleted.
> This leads to a problem: if my last run used a parallelism of 100, changing it
> to 40 this time does not take effect. This is understandable, because the
> JobGraph is recovered from high-availability.storageDir and the client's
> settings are ignored.
> My question is: when a job is deleted, the ConfigMap created by the HA
> mechanism is not deleted. Is this the default behavior of HA, or is it a bug?
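For reference, leftover HA ConfigMaps can also be removed manually with a label selector, along the lines of Flink's Kubernetes HA documentation (the cluster-id value below is a placeholder):

```shell
# Delete only the HA-related ConfigMaps of a given Flink cluster.
# <cluster-id> is a hypothetical placeholder for your kubernetes.cluster-id.
kubectl delete configmaps \
    --selector='app=<cluster-id>,configmap-type=high-availability'
```

Note that deleting these ConfigMaps discards the HA metadata (recovered JobGraphs and checkpoint pointers), so it should only be done once the job is intentionally torn down.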
--
This message was sent by Atlassian Jira
(v8.20.1#820001)