Hi Kamil,

Could you explain your use-case a little further? Is it that your k8s
cluster runs into issues launching 250 tasks at the same time, or that
Airflow itself runs into issues launching them? I'd love to know more so
I can try to address it in a future Airflow release.

Thanks!

Daniel

On Tue, Apr 16, 2019 at 3:32 AM Kamil Gałuszka <[email protected]> wrote:

> Hey,
>
> > We are quite interested in that Executor too, but my main concern is:
> > isn't it a waste of resources to start a whole pod to run something
> > like a DummyOperator, for example? We have a cap of 200 tasks at any
> > given time and we regularly hit this cap. We cope with that with 20
> > Celery workers, but with the KubernetesExecutor that would mean 200
> > pods. Does it really scale that easily?
> >
>
> Unfortunately no.
>
> We now have a problem with a DAG of 300 tasks that should all start in
> parallel, but only about 140 task instances actually get started. Setting
> parallelism to 256 didn't help, and the system struggles to push the
> number of running tasks that high.
>
> The biggest problem we have now is finding the bottleneck in the
> scheduler, but it's taking time to debug.
>
> We will definitely investigate this further and share our findings, but
> for now I wouldn't say it's "non-problematic", as some other people have
> stated.
>
> Thanks
> Kamil
>
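For context on the ceiling Kamil describes: in 1.10-era Airflow the number
of concurrently running tasks is capped not only by [core] parallelism but
also by the per-DAG dag_concurrency setting and, for tasks without an
explicit pool, by [core] non_pooled_task_slot_count (128 by default in
those versions), so raising parallelism alone may not lift the limit. Below
is a minimal sketch of a wide fan-out DAG that raises the per-DAG cap
explicitly; the dag_id and task ids are purely illustrative and not taken
from the thread.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.dummy_operator import DummyOperator

    # Per-DAG cap on simultaneously running tasks; without this the DAG
    # falls back to [core] dag_concurrency (16 by default), no matter how
    # high [core] parallelism is set.
    dag = DAG(
        dag_id="wide_fanout_example",    # illustrative name
        start_date=datetime(2019, 4, 1),
        schedule_interval=None,
        concurrency=300,
    )

    # 300 tasks with no upstream dependencies, so all of them are eligible
    # to run at once; whether they actually do is bounded by parallelism,
    # dag_concurrency and the (non-)pooled slot counts mentioned above.
    tasks = [
        DummyOperator(task_id="task_{}".format(i), dag=dag)
        for i in range(300)
    ]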
