holdenk opened a new pull request #29367:
URL: https://github.com/apache/spark/pull/29367


   ### What changes were proposed in this pull request?
   
   If graceful decommissioning is enabled, Spark's dynamic scaling decommissions executors gracefully instead of killing them directly.
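   
   For context, a minimal sketch of the configuration this behavior keys off of (a hedged illustration, not something this PR documents; the exact property names and defaults should be checked against the docs for your Spark version):
   
   ```
   # Sketch: with both flags on, a dynamic-allocation scale-down request
   # should decommission executors rather than kill them outright.
   spark.dynamicAllocation.enabled    true
   spark.decommission.enabled         true
   ```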
   
   ### Why are the changes needed?
   
   When scaling Spark down, we should avoid triggering recomputes as much as possible; decommissioning an executor gracefully gives it a chance to finish or hand off its work before it is removed.
   
   ### Does this PR introduce _any_ user-facing change?
   
   Hopefully users' jobs run at the same speed or faster. It also enables experimental shuffle-service-free dynamic scaling when graceful decommissioning is enabled (using the same code path as shuffle-tracking dynamic scaling).
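   
   To make the experimental path concrete, here is a hedged sketch of how one might try shuffle-service-free dynamic scaling on top of this change (an assumption about how the pieces would be combined, not a documented recipe from this PR):
   
   ```
   # Sketch: dynamic allocation without an external shuffle service,
   # relying on shuffle tracking plus graceful decommissioning.
   spark.dynamicAllocation.enabled                   true
   spark.dynamicAllocation.shuffleTracking.enabled   true
   spark.decommission.enabled                        true
   spark.shuffle.service.enabled                     false
   ```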
   
   ### How was this patch tested?
   
   For now, I've extended ExecutorAllocationManagerSuite for both core and streaming.

