We are using Airflow version 1.9 with the Celery executor, and we are observing 
that the Airflow scheduler is not honouring the "non_pooled_task_slot_count" 
config. We are using the default setting of 128, but we are able to schedule 
and run more than 128 tasks concurrently.
From the code it seems that the scheduler re-initialises open_slots to 128 on 
every scheduling run instead of setting it to the remaining (left-over) slots.
In jobs.py:
        for pool, task_instances in pool_to_task_instances.items():
            if not pool:
                # Arbitrary:
                # If queued outside of a pool, trigger no more than
                # non_pooled_task_slot_count per run
                open_slots = conf.getint('core', 'non_pooled_task_slot_count')
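For illustration, this is roughly the accounting we would expect instead: a 
minimal, self-contained sketch (not the actual Airflow code; the function and 
variable names below are hypothetical) where the cap is reduced by the 
non-pooled task instances already running or queued, rather than reset to the 
configured value each loop.

        # Hypothetical sketch, not Airflow code: compute the slots still free
        # for non-pooled tasks by subtracting what is already occupied.
        def remaining_non_pooled_slots(slot_count, occupied_non_pooled):
            """slot_count: configured non_pooled_task_slot_count (e.g. 128)
            occupied_non_pooled: non-pooled task instances already running/queued
            """
            return max(slot_count - occupied_non_pooled, 0)

        # Example: with the default cap of 128 and 120 non-pooled tasks already
        # running, only 8 more should be scheduled in this run, not another 128.
        print(remaining_non_pooled_slots(128, 120))  # -> 8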
Thanks,
Raman Gupta