I think it is good to start simple.
I would start out with a single machine, using the LocalExecutor, running on
Docker with docker compose and the puckel image:
https://github.com/puckel/docker-airflow (or your own customization
thereof).
I would use a cloud database running postgres, e.g. on RD
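A minimal docker-compose sketch of that setup might look like the following (the image tag, credentials, and the `EXECUTOR`/`POSTGRES_*` environment variables are assumptions based on the puckel image's conventions):

```yaml
version: "3"
services:
  postgres:
    image: postgres:11
    environment:
      POSTGRES_USER: airflow
      POSTGRES_PASSWORD: airflow   # placeholder credentials for local use only
      POSTGRES_DB: airflow

  webserver:
    image: puckel/docker-airflow:latest
    depends_on:
      - postgres
    environment:
      EXECUTOR: Local              # LocalExecutor: tasks run on this one machine
      POSTGRES_HOST: postgres      # points at the service above; swap for a
                                   # managed cloud Postgres endpoint later
    ports:
      - "8080:8080"
    command: webserver
```

In production you would drop the bundled `postgres` service and point `POSTGRES_HOST` at the managed cloud database instead.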
Hi,
My name is Yair, I'm working at Matrix BI in Israel as a Big Data architect.
I tried to integrate Kubernetes and Airflow following the blog
https://kubernetes.io/blog/2018/06/28/airflow-on-kubernetes-part-1-a-different-kind-of-operator/
.
After a lot of effort, I think the repository
http
Yesterday we had another master breakage - this time from elasticsearch
releasing version 7.6, which broke our builds (note: since it was a MINOR
version it should have been compatible, but it was not for us). I fixed it
quickly yesterday by limiting it to < 7.6, but to me this makes it quite
clear that trying to rel
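The quick fix described above amounts to a version constraint on the dependency, e.g. in a requirements file or setup.py extra (the exact lower bound here is an assumption):

```
elasticsearch>=7.0.0,<7.6.0
```

The upper bound excludes the 7.6 minor release that broke the build while still allowing earlier 7.x patch releases.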
I think the main idea here was to delegate the authentication to what
connexion provides (it has various authentication plugins). And I agree
authorization should be addressed in the design, as it cannot be solved by
connexion's "standard" plugins nor by the OpenAPI definition - this is more
of application
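For context, a minimal sketch of the kind of authentication hook connexion's plugins delegate to - the function name, token values, and the wiring via `x-bearerInfoFunc` in the OpenAPI spec are assumptions for illustration, not the actual design:

```python
# Hypothetical token-info function that connexion's bearer-auth plugin
# would call for each request (referenced from the OpenAPI spec via
# `x-bearerInfoFunc`). The token and scopes below are made up.
def token_info(token: str):
    """Return token details if the token is valid, or None to reject."""
    valid_tokens = {
        "secret-token": {"sub": "admin", "scope": ["read", "write"]},
    }
    return valid_tokens.get(token)
```

This handles authentication (who the caller is); authorization (what the caller may do) would still need to be enforced by the application itself, which is the gap noted above.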
The structure is indeed big, and there are some cyclic dependencies -
especially classes/modules that have multiple responsibilities.
First of all, let me say what a lot of committers and contributors have
been doing recently to fight that - we are untangling it slowly, so
hopefully it will be easie
I feel some of the stuff, for instance Scheduler HA, could wait for a point
release of version 2 (although maybe this is a lot further along than I am
aware) - like you mentioned Spark did with K8s.
Also, does the new API need to be feature complete, or is just enough
functionality to warrant removing the