Hello,
To ensure that at least one pod remains available for both the Flink
Kubernetes Operator and the JobManagers during Kubernetes operations that
use the Eviction API
<https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/#eviction-api>,
such as draining a node, we have deployed Pod Disruption Budgets
<https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets>
in the appropriate namespaces.
Here is the flink-kubernetes-operator PDB:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: flink-kubernetes-operator
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: flink-kubernetes-operator
The Flink Kubernetes Operator Deployment defines the matching
app: flink-kubernetes-operator label:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: flink-kubernetes-operator
Here is the jobmanager PDB (deployed alongside each FlinkDeployment):
apiVersion: policy/v1
kind: PodDisruptionBudget
spec:
  minAvailable: 1
  selector:
    matchLabels:
      name: jobmanager
The FlinkDeployment defines the matching name: jobmanager label on its
JobManager pod template:
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
spec:
  jobManager:
    podTemplate:
      metadata:
        labels:
          name: jobmanager
We were wondering whether it would make sense for the Flink Kubernetes
Operator to create these PDBs automatically, since they are a native
Kubernetes resource like the Ingress that the operator already creates.
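For illustration only, here is a rough sketch of how the operator could
build such a PDB with the fabric8 Kubernetes client it already uses. This
is not a proposal for the actual implementation; the helper names, the PDB
naming convention, and the selector labels are assumptions on my part:

import io.fabric8.kubernetes.api.model.IntOrString;
import io.fabric8.kubernetes.api.model.policy.v1.PodDisruptionBudget;
import io.fabric8.kubernetes.api.model.policy.v1.PodDisruptionBudgetBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;

public class JobManagerPdbSketch {

    // Hypothetical helper: build a PDB that keeps at least one JobManager pod
    // for the given FlinkDeployment available during voluntary disruptions.
    static PodDisruptionBudget buildJobManagerPdb(String deploymentName, String namespace) {
        return new PodDisruptionBudgetBuilder()
                .withNewMetadata()
                    .withName(deploymentName + "-jobmanager-pdb") // assumed naming convention
                    .withNamespace(namespace)
                .endMetadata()
                .withNewSpec()
                    .withMinAvailable(new IntOrString(1))
                    .withNewSelector()
                        // assumed selector; the operator would use whatever labels it
                        // already places on the JobManager pods
                        .addToMatchLabels("app", deploymentName)
                        .addToMatchLabels("component", "jobmanager")
                    .endSelector()
                .endSpec()
                .build();
    }

    // Apply the PDB as part of reconciling the FlinkDeployment.
    static void reconcilePdb(KubernetesClient client, String deploymentName, String namespace) {
        PodDisruptionBudget pdb = buildJobManagerPdb(deploymentName, namespace);
        client.policy().v1().podDisruptionBudget()
                .inNamespace(namespace)
                .createOrReplace(pdb);
    }
}

Presumably the same reconciliation logic that manages the Ingress could
create and clean up this resource alongside the JobManager deployment.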
Thanks,
Jeremy