I second Matthias's suggestion. If you are using standalone Flink on K8s, then you need some external tooling (e.g. a K8s operator [1][2]) to help with lifecycle management.
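If you would rather not run an operator, the external tool can also do the cleanup itself once it sees the job finish. A minimal sketch, assuming the resource names from the standalone-on-K8s example manifests (namespace and names are placeholders for whatever your own manifests define; the script only prints the kubectl commands, since executing them needs a live cluster):

```shell
#!/bin/sh
# Sketch of the manual cleanup an external tool (e.g. Airflow) would run
# after the batch job completes. NAMESPACE and the resource names below
# are hypothetical placeholders -- adjust them to your manifests.
NAMESPACE="default"
CLEANUP_CMDS="kubectl -n ${NAMESPACE} delete job flink-jobmanager
kubectl -n ${NAMESPACE} delete deployment flink-taskmanager
kubectl -n ${NAMESPACE} delete service flink-jobmanager"
# Printed rather than executed, since deletion requires a reachable cluster.
echo "${CLEANUP_CMDS}"
```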
Also, we have the native Kubernetes integration: all the K8s resources will be cleaned up automatically when the Flink job finishes, fails, or is cancelled.

[1] https://github.com/GoogleCloudPlatform/flink-on-k8s-operator
[2] https://github.com/lyft/flinkk8soperator

Best,
Yang

Matthias Pohl <matth...@ververica.com> wrote on Thu, Oct 29, 2020 at 5:05 AM:

> Hi Ruben,
> thanks for reaching out to us. Flink's native Kubernetes Application mode
> [1] might be what you're looking for.
>
> Best,
> Matthias
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html#flink-kubernetes-application
>
> On Wed, Oct 28, 2020 at 11:50 AM Ruben Laguna <ruben.lag...@gmail.com> wrote:
>
>> Hi,
>>
>> First-time user here; I'm just evaluating Flink at the moment. I was reading
>> https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/kubernetes.html#deploy-job-cluster
>> and I don't fully understand whether a Job Cluster will auto-terminate
>> after the job completes (for a batch job).
>>
>> The examples look to me like the TaskManager pods will keep running,
>> since they are configured as a Deployment.
>>
>> So is there any way to achieve auto-termination, or am I supposed to
>> monitor the job status externally (e.g. from Airflow) and delete the
>> JobManager and TaskManager Kubernetes resources from there?
>>
>> --
>> /Rubén Laguna
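With the native integration, submitting in Application mode is a single CLI call along the lines of the Flink 1.11 docs. A sketch (the cluster id, image name, and jar path are hypothetical placeholders; the script only prints the command, since an actual submission needs a reachable Kubernetes cluster and a custom image containing the job jar):

```shell
#!/bin/sh
# Sketch: submit a Flink job in native Kubernetes Application mode.
# When the job finishes, fails, or is cancelled, Flink deletes the
# K8s resources it created -- no external monitor needed for cleanup.
CLUSTER_ID="my-batch-job"            # hypothetical cluster id
IMAGE="my-registry/flink-job:1.0"    # hypothetical image bundling the job jar
FLINK_CMD="./bin/flink run-application \
  --target kubernetes-application \
  -Dkubernetes.cluster-id=${CLUSTER_ID} \
  -Dkubernetes.container.image=${IMAGE} \
  local:///opt/flink/usrlib/my-job.jar"
# Printed rather than executed, since submission requires a live cluster.
echo "${FLINK_CMD}"
```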