Hi all,
I'm working at my company on a Spark-on-Kubernetes POC. During this work I've
built an operator for Spark on Kubernetes and am trying to contribute it to
Airflow (https://github.com/apache/airflow/pull/7163). In the process I
started thinking about:
1. Building hooks for managed Kubernetes engines on Amazon, GCP and
others (EKS, GKE...) that use the existing cloud connections: for example,
using the AWS connection to fetch a kubeconfig and then build a Kubernetes
API client from it.
2. Building more generalized Kubernetes operators. Alongside a
SparkK8sOperator that sends a SparkApplication CRD to a Kubernetes cluster, I
could build a KubernetesCrdOperator that creates any kind of CRD, plus a
sensor that is given the CRD field to check and the failure/success
keywords. On the same principle, I could build a KubernetesJobOperator that
creates and senses a Kubernetes Job (although that's very close to what the
KubernetesPodOperator already does).
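To make idea 1 concrete, here is a rough sketch of how such an EKS hook might derive a kubeconfig from the cluster metadata. The function name and shape are purely illustrative (not an existing Airflow API); it assumes the hook has already called boto3's `eks.describe_cluster` and uses the AWS CLI's `aws eks get-token` for exec-based auth, the same way kubectl does:

```python
def kubeconfig_from_cluster(name, endpoint, ca_data):
    """Build an in-memory kubeconfig dict from EKS describe_cluster fields.

    Hypothetical helper: `endpoint` and `ca_data` would come from the
    cluster.endpoint / cluster.certificateAuthority.data fields of the
    boto3 eks.describe_cluster response.
    """
    return {
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{
            "name": name,
            "cluster": {
                "server": endpoint,
                "certificate-authority-data": ca_data,
            },
        }],
        "users": [{
            "name": name,
            # Token is minted on demand via the AWS CLI, as kubectl does.
            "user": {"exec": {
                "apiVersion": "client.authentication.k8s.io/v1beta1",
                "command": "aws",
                "args": ["eks", "get-token", "--cluster-name", name],
            }},
        }],
        "contexts": [{
            "name": name,
            "context": {"cluster": name, "user": name},
        }],
        "current-context": name,
    }
```

The resulting dict could then be handed to the Kubernetes Python client (e.g. via its kubeconfig loading helpers) to produce an API client, so the DAG author never has to manage a kubeconfig file themselves.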

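For idea 2, the core of the generic CRD sensor could be a small poke function like the one below. All names are illustrative: it takes the CRD object as the dict returned by the Kubernetes CustomObjectsApi, a dotted path to the status field, and the success/failure keywords, and decides whether to keep poking, succeed, or fail:

```python
def check_crd_status(obj, field_path, success_keywords, failure_keywords):
    """Poke logic sketch for a generic CRD sensor (hypothetical names).

    Returns True on success, False to keep poking, None if the field is
    not populated yet (also keep poking); raises on a failure keyword.
    """
    value = obj
    for key in field_path.split("."):
        if not isinstance(value, dict) or key not in value:
            return None  # status not populated yet
        value = value[key]
    if value in failure_keywords:
        raise RuntimeError(f"CRD reached failed state: {value}")
    return value in success_keywords
```

For the SparkApplication CRD produced by the Spark operator, for example, the field path would be something like "status.applicationState.state" with "COMPLETED" as the success keyword and "FAILED" as the failure keyword, so the same sensor class would work unchanged for any CRD whose status exposes a comparable state string.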
Can you share your thoughts on this, and whether it would be useful for
Airflow?
Thank you,
Roi Teveth
