Thank you for the input, Jarek.
I'll try to start working on it this week and see how it goes.
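
To make the hook idea discussed below a bit more concrete, here is a rough sketch of a common interface where only authentication is provider-specific. All class and method names are hypothetical (this is not an existing Airflow or kubernetes-client API), and the API client is stubbed out for illustration:

```python
from abc import ABC, abstractmethod


class FakeClient:
    """Stand-in for an authenticated Kubernetes API client (illustration only)."""

    def __init__(self, provider):
        self.provider = provider

    def list_pods(self, namespace):
        # A real client would call the Kubernetes API here.
        return f"{self.provider}:{namespace}"


class BaseKubernetesHook(ABC):
    """Hypothetical base hook: authentication is provider-specific,
    everything after authentication is generic."""

    @abstractmethod
    def get_conn(self):
        """Return an authenticated Kubernetes API client."""

    def list_pods(self, namespace="default"):
        # Generic behaviour shared by all providers once authenticated.
        return self.get_conn().list_pods(namespace)


class EksHook(BaseKubernetesHook):
    def get_conn(self):
        # In practice: use the Airflow AWS connection to obtain an EKS
        # token / kubeconfig (e.g. via boto3) and build a real client.
        return FakeClient("eks")


class GkeHook(BaseKubernetesHook):
    def get_conn(self):
        # In practice: use the GCP connection / gcloud credentials.
        return FakeClient("gke")


print(EksHook().list_pods())         # eks:default
print(GkeHook().list_pods("spark"))  # gke:spark
```

Operators would then depend only on the base interface, so any authenticated cluster looks the same to them.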

On Mon, Jan 27, 2020 at 1:45 AM Jarek Potiuk <[email protected]>
wrote:

> I have a bit of a different point of view than Kamil: I think once we are
> authenticated, the Kubernetes API is pretty standard, so having hooks for
> different services that provide the same interface to the operators might
> make sense. Such hooks could authenticate in a "cloud-specific way" using
> the connection provided, but after authentication they should be
> "generic".
>
> I like the idea.
>
> J.
>
> On Sat, Jan 25, 2020 at 9:22 PM Kamil Breguła <[email protected]>
> wrote:
>
> > Hello,
> >
> > The issue of connection configuration for individual providers has not
> > been standardized in any way by the Kubernetes community. I'm afraid
> > we won't be able to create a new solution ourselves. Some groups are
> > trying to create their own solutions [1][2], but for now it is a
> > mess [3]. Google employees use gcloud to obtain cluster credentials [2].
> >
> > I am not sure we want to create generic operators, because they can
> > be very difficult to use in practice. An operator designed for a
> > specific purpose will be easier to use and will work better, because
> > we can improve it. Very generic solutions are very difficult to
> > improve and refactor.
> >
> > Best regards,
> > Kamil
> >
> > [1]
> >
> https://github.com/kubernetes/community/blob/master/contributors/design-proposals/auth/kubectl-exec-plugins.md
> > [2]
> >
> https://github.com/kubernetes-client/python-base/blob/a2d1024524de78b62e5f9aa72d34cb9ea9de2b97/config/exec_provider.py
> > [3]
> >
> https://github.com/kubernetes-client/python-base/blob/a2d1024524de78b62e5f9aa72d34cb9ea9de2b97/config/kube_config.py#L219-L224
> > [4]
> >
> https://github.com/apache/airflow/pull/3532/files#diff-3cfc2b387652665d77ae50581081560eR266-R270
> >
> > On Sat, Jan 25, 2020 at 7:51 PM Roi Teveth <[email protected]>
> wrote:
> > >
> > > Hi all,
> > > I'm working on a Spark-on-Kubernetes POC at my company. During this
> > > work I've built an operator for Spark on Kubernetes and I'm trying to
> > > contribute it to Airflow (https://github.com/apache/airflow/pull/7163).
> > > In the process I started thinking about:
> > > 1. building hooks for managed Kubernetes engines on Amazon, GCP and
> > > others (EKS, GKE, ...) using their own connections, for example, using
> > > an AWS connection to get a kubeconfig and then a Kubernetes API client.
> > > 2. building more generalized Kubernetes operators: alongside a
> > > SparkK8sOperator that sends a SparkApplication CRD to a Kubernetes
> > > cluster, I could build a KubernetesCrdOperator that creates any kind
> > > of CRD, plus a sensor that is told which field in the CRD to check and
> > > which keywords mean failure or success. On the same principle, I could
> > > build a KubernetesJobOperator that creates and senses a Kubernetes Job
> > > (although that is very close to what the KubernetesPodOperator does).
> > >
> > > Can you share your thoughts about this, and whether it would be
> > > useful for Airflow?
> > > Thank you,
> > > Roi Teveth
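
Roi's second idea above, a generic sensor that is told which status field of the CRD to watch and which keywords mean success or failure, could be sketched roughly like this. The function names and the SparkApplication-style status layout are illustrative assumptions only, not an actual Airflow or spark-operator API:

```python
def get_field(obj, path):
    """Walk a dotted path such as 'status.applicationState.state'."""
    for key in path.split("."):
        obj = obj[key]
    return obj


def poke(crd_object, field_path, success_keywords, failure_keywords):
    """Sensor-style check: True on success, False to keep waiting,
    an exception when the CRD has reached a failure state."""
    value = get_field(crd_object, field_path)
    if value in failure_keywords:
        raise RuntimeError(f"CRD reached failure state: {value!r}")
    return value in success_keywords


# Example: a SparkApplication-like object as the Kubernetes API might return it.
spark_app = {"status": {"applicationState": {"state": "COMPLETED"}}}
print(poke(spark_app, "status.applicationState.state",
           success_keywords={"COMPLETED"},
           failure_keywords={"FAILED"}))  # True
```

A KubernetesCrdOperator would submit the CRD body itself, and this check would run in the sensor's poke loop against the object fetched from the cluster.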
> >
>
>
> --
>
> Jarek Potiuk
> Polidea <https://www.polidea.com/> | Principal Software Engineer
>
> M: +48 660 796 129 <+48660796129>
>
