Hi Enrique,
I think you are actually seeing a mixture of FLINK-20219 and FLINK-20695.
Once either of these issues is fixed, the problem should go away. Also
note that the K8s HA services won't clean up the ConfigMaps if you delete
the deployment, as documented here [1].
[1]
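For anyone hitting this in the meantime: the Flink docs describe removing the leftover HA ConfigMaps by label selector. A sketch, assuming your configured kubernetes.cluster-id is `my-flink-cluster` (a placeholder) and that no JobManager still needs the HA data:

```shell
# List the HA ConfigMaps left behind by the session cluster
# (the app label matches the configured kubernetes.cluster-id).
kubectl get configmaps -l app=my-flink-cluster,configmap-type=high-availability

# Delete them manually once the cluster is gone for good.
kubectl delete configmaps -l app=my-flink-cluster,configmap-type=high-availability
```

Note this also discards the job's HA metadata, so only do it when you don't intend to recover any jobs from that cluster.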
Hi Till,
I'm not using ZooKeeper HA, but the new native Kubernetes HA. I'm deploying
the Flink cluster using StatefulSets, one each for the JM and TM, with PVCs
to store HA metadata/checkpoints/savepoints. When I delete both StatefulSets
and the JM/TM pods terminate, the HA ConfigMaps are not deleted.
Hi Enrique,
I think it is related to FLINK-20219. Currently, the HA-related
ConfigMaps/ZNodes cannot be cleaned up properly.
The cleanup mechanism for the HA-related ConfigMaps in session mode could
be improved in the following two ways.
* Delete the jobmanager leader ConfigMap once the job reached to
Hi Enrique,
I think you are running into FLINK-20695 [1]. In a nutshell, Flink
currently only deletes the ConfigMaps when it shuts down. We want to
change this with the next release.
[1] https://issues.apache.org/jira/browse/FLINK-20695
Cheers,
Till
On Wed, May 5, 2021 at
Hi all,
I am deploying a Flink cluster in session mode using Kubernetes HA and have
seen it working with the different ConfigMaps for the dispatcher,
restserver, and resourcemanager. I have also configured storage for
checkpointing and HA metadata.
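For context, the setup described above roughly corresponds to a flink-conf.yaml fragment like the following sketch. The cluster id and paths are placeholders I chose for illustration; the storage dir can be a PVC mount as in my case, or an object store:

```yaml
kubernetes.cluster-id: my-flink-session  # placeholder, must be unique per cluster
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
high-availability.storageDir: file:///flink-data/ha  # e.g. a PVC mount path
state.checkpoints.dir: file:///flink-data/checkpoints
state.savepoints.dir: file:///flink-data/savepoints
```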
When I submit a job, I can see that a config