GitHub user holdenk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21977#discussion_r207596709
  
    --- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
    @@ -333,7 +340,7 @@ private[spark] class Client(
         val maxMem = newAppResponse.getMaximumResourceCapability().getMemory()
         logInfo("Verifying our application has not requested more than the 
maximum " +
           s"memory capability of the cluster ($maxMem MB per container)")
    -    val executorMem = executorMemory + executorMemoryOverhead
    +    val executorMem = executorMemory + executorMemoryOverhead + pysparkWorkerMemory
         if (executorMem > maxMem) {
           throw new IllegalArgumentException(s"Required executor memory ($executorMemory" +
             s"+$executorMemoryOverhead MB) is above the max threshold ($maxMem MB) of this cluster! " +
    --- End diff --
    
    Maybe just switch the message to report the total `$executorMem` instead? Right now it still prints only `$executorMemory` + `$executorMemoryOverhead`, so the new `pysparkWorkerMemory` term wouldn't show up in the reported total.
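
    For example, a rough sketch of what that could look like (illustration only; the memory figures below are stubbed-in placeholders, and the exact wording is up for debate):

    ```scala
    // Sketch: stand-alone illustration with the memory terms stubbed as MB values.
    val executorMemory = 2048          // executor heap
    val executorMemoryOverhead = 384   // off-heap overhead
    val pysparkWorkerMemory = 512      // new term added by this PR
    val maxMem = 2048                  // cluster max per container

    // Report the full total, so the PySpark worker memory is visible in the error.
    val executorMem = executorMemory + executorMemoryOverhead + pysparkWorkerMemory
    if (executorMem > maxMem) {
      throw new IllegalArgumentException(s"Required executor memory ($executorMem MB total: " +
        s"$executorMemory MB executor + $executorMemoryOverhead MB overhead + " +
        s"$pysparkWorkerMemory MB PySpark worker memory) is above the max threshold " +
        s"($maxMem MB) of this cluster!")
    }
    ```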


---
