Asquator commented on issue #45636:
URL: https://github.com/apache/airflow/issues/45636#issuecomment-2902243730

   @cheery550 
   Yes, this was the initial solution proposed by @nevcohen above. The only 
issue I see with it is the possibly very long WHERE clause that filters out 
starved tasks and dags. Also, the max loop count makes it cumbersome to tune 
manually and may get large for big workloads, causing the critical section 
lock to be held for a while. Nonetheless, I like the concept of optimistic 
microbatching that comes with it. If we find a way to avoid the query 
explosion, the other issues can be fixed too. 
   
   What if we do limit(max_tis_d), where max_tis_d is a dynamically computed 
parameter based on recent scheduler heuristics, so that we pull N tasks where 
N is close to the real max_tis we will actually schedule? We either drop the 
extra tasks or schedule them too, depending on policy.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
