cheery550 commented on issue #45636:
URL: https://github.com/apache/airflow/issues/45636#issuecomment-2899798999

   @Nataneljpwd 
   > If we assume a dag with hundreds or thousands of tasks (could be mapped 
tasks), it is still possible to exit the loop after the proposed max iterations 
with less than 32 ti's set to running.
   
   Yes, this approach only helps identify ready tasks more efficiently (instead of picking `max_tis` ready tasks in a single pass). If we want to achieve that goal, filters (e.g., for starved pools/dags/tasks) should be applied before, or integrated into, `query.limit(max_tis)`.
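To illustrate the idea, here is a minimal sketch of "filter before LIMIT". The table layout, function name, and the use of plain `sqlite3` (standing in for Airflow's SQLAlchemy query in the critical section) are all hypothetical; the point is only that excluding starved pools/dags inside the query means `LIMIT max_tis` counts only candidates that can actually be queued.

```python
import sqlite3

def fetch_ready_tis(conn, max_tis, starved_pools=(), starved_dags=()):
    """Hypothetical sketch: select up to max_tis schedulable TIs, excluding
    starved pools/dags inside the query itself, so the LIMIT applies only
    to task instances that can actually be queued."""
    conditions = ["state = 'scheduled'"]
    params = []
    if starved_pools:
        conditions.append(
            "pool NOT IN (%s)" % ",".join("?" * len(starved_pools)))
        params += list(starved_pools)
    if starved_dags:
        conditions.append(
            "dag_id NOT IN (%s)" % ",".join("?" * len(starved_dags)))
        params += list(starved_dags)
    sql = ("SELECT task_id FROM task_instance WHERE "
           + " AND ".join(conditions) + " LIMIT ?")
    params.append(max_tis)
    return [row[0] for row in conn.execute(sql, params)]

# Toy data: t1 sits in a starved pool, t4 is already queued.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE task_instance "
             "(task_id TEXT, dag_id TEXT, pool TEXT, state TEXT)")
conn.executemany(
    "INSERT INTO task_instance VALUES (?, ?, ?, ?)",
    [("t1", "dag_a", "full_pool", "scheduled"),
     ("t2", "dag_b", "default", "scheduled"),
     ("t3", "dag_b", "default", "scheduled"),
     ("t4", "dag_b", "default", "queued")],
)
# Only t2 and t3 survive the filters, so they are what LIMIT sees.
print(fetch_ready_tis(conn, max_tis=32, starved_pools=("full_pool",)))
```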
   
   > if you have a lot of tasks which are not ready to be running, however, are 
queued, the issue may still persist,  in addition to the scheduler now being 
stuck in the critical section for longer (as it loops over the critical section)
   
   Does this mean that having many tasks in the SCHEDULED state (not QUEUED) that are not ready to run will keep the scheduler in the critical section longer? If so, the answer is yes. To avoid this, we need to:
   1. Apply filters (e.g., for starved pools/dags/tasks) before, or integrate them into, `query.limit(max_tis)`; or
   2. Balance the value of `max_loop_count`.
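The trade-off in option 2 can be sketched as a bounded loop: keep fetching candidate batches until either `max_tis` ready TIs are collected or `max_loop_count` iterations have run, so the critical section cannot be held indefinitely. The function and parameter names here are hypothetical, not Airflow's actual scheduler code.

```python
def collect_ready_tis(batches, is_ready, max_tis, max_loop_count):
    """Hypothetical sketch: gather ready task instances from successive
    candidate batches, stopping once max_tis are found or max_loop_count
    iterations have run, which bounds time spent in the critical section."""
    ready = []
    for iteration, batch in enumerate(batches):
        if iteration >= max_loop_count:
            # Give up early: bounds critical-section time, but may return
            # fewer than max_tis ready TIs (the concern raised above).
            break
        ready.extend(ti for ti in batch if is_ready(ti))
        if len(ready) >= max_tis:
            break
    return ready[:max_tis]

# Odd task ids are "ready"; two batches suffice to find two ready TIs.
print(collect_ready_tis([[1, 2], [3, 4], [5, 6]],
                        lambda ti: ti % 2 == 1,
                        max_tis=2, max_loop_count=3))
```

A larger `max_loop_count` makes it more likely that `max_tis` ready TIs are found, at the cost of a longer worst-case hold on the critical section; a smaller value does the opposite.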
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
