Repository: spark
Updated Branches:
  refs/heads/branch-1.6 3471244f7 -> 7e17ce5b6


[SPARK-11771][YARN][TRIVIAL] maximum memory in yarn is controlled by two params have both in error msg

When the requested memory exceeds the max, tell users to check both params instead of just the one.
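For context, the check this patch touches can be sketched roughly as follows. This is a standalone approximation, not Spark's actual code: `verifyExecutorMemory` and its parameters are hypothetical names. The point is that YARN's effective per-container cap (`yarn.scheduler.maximum-allocation-mb`) is itself bounded by what each NodeManager offers (`yarn.nodemanager.resource.memory-mb`), so the error message should name both settings.

```scala
// Hypothetical standalone sketch of the validation logic in Client.scala.
// If executor memory plus overhead exceeds YARN's per-container cap, fail
// fast with a message naming both relevant YARN configuration keys.
def verifyExecutorMemory(executorMemoryMb: Int, overheadMb: Int, maxMemMb: Int): Unit = {
  val executorMem = executorMemoryMb + overheadMb
  if (executorMem > maxMemMb) {
    throw new IllegalArgumentException(
      s"Required executor memory ($executorMemoryMb+$overheadMb MB) is above " +
      s"the max threshold ($maxMemMb MB) of this cluster! Please check the values " +
      "of 'yarn.scheduler.maximum-allocation-mb' and/or " +
      "'yarn.nodemanager.resource.memory-mb'.")
  }
}
```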

Author: Holden Karau <hol...@us.ibm.com>

Closes #9758 from holdenk/SPARK-11771-maximum-memory-in-yarn-is-controlled-by-two-params-have-both-in-error-msg.

(cherry picked from commit 52c734b589277267be07e245c959199db92aa189)
Signed-off-by: Andrew Or <and...@databricks.com>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/7e17ce5b
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/7e17ce5b
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/7e17ce5b

Branch: refs/heads/branch-1.6
Commit: 7e17ce5b637f37ac1b5731bfd981efb6f787a9d9
Parents: 3471244
Author: Holden Karau <hol...@us.ibm.com>
Authored: Tue Nov 17 15:51:03 2015 -0800
Committer: Andrew Or <and...@databricks.com>
Committed: Tue Nov 17 15:51:10 2015 -0800

----------------------------------------------------------------------
 yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/7e17ce5b/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
----------------------------------------------------------------------
diff --git a/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala b/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
index a3f33d8..ba79988 100644
--- a/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
+++ b/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
@@ -258,7 +258,8 @@ private[spark] class Client(
     if (executorMem > maxMem) {
       throw new IllegalArgumentException(s"Required executor memory (${args.executorMemory}" +
         s"+$executorMemoryOverhead MB) is above the max threshold ($maxMem MB) of this cluster! " +
-        "Please increase the value of 'yarn.scheduler.maximum-allocation-mb'.")
+        "Please check the values of 'yarn.scheduler.maximum-allocation-mb' and/or " +
+        "'yarn.nodemanager.resource.memory-mb'.")
     }
     val amMem = args.amMemory + amMemoryOverhead
     if (amMem > maxMem) {


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
