Github user tgravescs commented on the issue:

    https://github.com/apache/spark/pull/19881
  
    No, we don't strictly need it in the name; the reasoning behind it was to 
indicate that this is a divisor based on whether you have fully allocated executors 
for all the tasks and are running at full parallelism. 
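    For illustration only, here is a minimal sketch (not the actual 
ExecutorAllocationManager code) of how such a divisor could factor into the 
number of executors requested at full parallelism; the helper name and 
parameters below are hypothetical:

```scala
object DivisorSketch {
  /** Hypothetical helper: executors needed for full parallelism, scaled by the divisor. */
  def maxExecutorsNeeded(pendingAndRunningTasks: Int,
                         tasksPerExecutor: Int,
                         divisor: Double): Int = {
    // Executors needed if every pending/running task ran at once (full parallelism)...
    val fullParallelism = math.ceil(pendingAndRunningTasks.toDouble / tasksPerExecutor)
    // ...scaled down by the divisor, e.g. divisor = 2 requests half as many executors.
    math.ceil(fullParallelism / divisor).toInt
  }

  def main(args: Array[String]): Unit = {
    // 1000 tasks, 4 tasks per executor, divisor of 2 -> 125 executors instead of 250.
    println(maxExecutorsNeeded(1000, 4, 2.0))
  }
}
```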
    Are you suggesting we just use 
spark.dynamicAllocation.executorAllocationDivisor?  Other names thrown around were 
like maxExecutorAllocationDivisor.  One thing we were trying to avoid is 
confusing it with the maxExecutors config as well.  Opinions?

