Hey,

We are quite interested in that Executor too, but my main concern is: isn't it a waste of resources to start a whole pod to run something like a DummyOperator, for example? We have a cap of 200 tasks at any given time, and we regularly hit this cap; we cope with that using 20 Celery workers, but with the KubernetesExecutor that would mean 200 pods. Does it really scale that easily?
Unfortunately, no. We currently have a problem with a DAG of 300 tasks that should all start in parallel at once, yet only about 140 task instances actually get started. Setting parallelism to 256 didn't help; the system struggles to get the number of running tasks that high. Our biggest problem right now is finding the bottleneck in the scheduler, and that is taking time to debug. We will definitely investigate further and share our findings, but for now I wouldn't call it "non-problematic," as some other people have stated.

Thanks,
Kamil
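For anyone hitting the same wall: parallelism is not the only cap on concurrently running task instances. A rough sketch of the relevant airflow.cfg knobs is below (section and option names are from Airflow 1.10-era configs and may differ in your version, so treat the values as illustrative, not a recommendation):

```ini
[core]
# Global cap on running task instances across the whole installation.
parallelism = 256

# Per-DAG cap on concurrently running task instances. With 300 tasks
# in one DAG, leaving this at the default will silently limit the run
# even if parallelism is raised.
dag_concurrency = 300

# Maximum number of active DAG runs per DAG.
max_active_runs_per_dag = 16

[scheduler]
# Number of scheduler processes scanning and queuing tasks; a low
# value here can also throttle how fast tasks move to the running state.
max_threads = 2
```

Pool slots can impose a further limit: tasks that draw from a pool (or the default non-pooled slot count) will queue once that pool is exhausted, regardless of the settings above.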
