GitHub user jiangxb1987 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19881#discussion_r177362439
  
    --- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
    @@ -116,9 +120,12 @@ private[spark] class ExecutorAllocationManager(
       // TODO: The default value of 1 for spark.executor.cores works right now because dynamic
       // allocation is only supported for YARN and the default number of cores per executor in YARN is
       // 1, but it might need to be attained differently for different cluster managers
    -  private val tasksPerExecutor =
    +  private val tasksPerExecutorForFullParallelism =
    --- End diff --
    
    I was originally thinking we might avoid introducing the concept of `tasksPerExecutorForFullParallelism` and instead keep only `executorCores` and `taskCPUs`, but I don't have a strong opinion on that.
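
    For illustration, a minimal sketch of the two options being weighed here, assuming the value is derived from `spark.executor.cores` and `spark.task.cpus` (the object and variable names below are hypothetical, not taken from the PR):

    ```scala
    import org.apache.spark.SparkConf

    // Hypothetical sketch (not the actual ExecutorAllocationManager code)
    // contrasting the named derived value from the PR with the alternative
    // suggested in this comment: keep only the two raw settings and compute
    // the ratio at each use site.
    object TasksPerExecutorSketch {
      def main(args: Array[String]): Unit = {
        // Skip loading system defaults so the sketch is self-contained.
        val conf = new SparkConf(false)
          .set("spark.executor.cores", "4")
          .set("spark.task.cpus", "2")

        val executorCores = conf.getInt("spark.executor.cores", 1)
        val taskCPUs = conf.getInt("spark.task.cpus", 1)

        // Option A (the PR): introduce the named concept once.
        val tasksPerExecutorForFullParallelism = executorCores / taskCPUs

        // Option B (this comment): no named concept; derive the ratio inline
        // wherever the maximum number of concurrent tasks per executor is needed.
        println(s"max concurrent tasks per executor: ${executorCores / taskCPUs}")
        assert(tasksPerExecutorForFullParallelism == executorCores / taskCPUs)
      }
    }
    ```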

