[ https://issues.apache.org/jira/browse/SPARK-16382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15363592#comment-15363592 ]
Saisai Shao commented on SPARK-16382:
-------------------------------------

I would suggest failing and complaining. The max usually specifies the upper bound on the resources that can be used by Spark, and it should not be exceeded. Also, in YarnSparkHadoopUtil.scala we already have this constraint:

{code}
require(initialNumExecutors >= minNumExecutors && initialNumExecutors <= maxNumExecutors,
  s"initial executor number $initialNumExecutors must be between min executor number " +
  s"$minNumExecutors and max executor number $maxNumExecutors")
{code}

> YARN - Dynamic allocation with spark.executor.instances should increase max executors.
> ---------------------------------------------------------------------------------------
>
>                 Key: SPARK-16382
>                 URL: https://issues.apache.org/jira/browse/SPARK-16382
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>            Reporter: Ryan Blue
>
> SPARK-13723 changed the behavior of dynamic allocation when
> {{--num-executors}} ({{spark.executor.instances}}) is set. Rather than
> turning off dynamic allocation, the value is used as the initial number of
> executors. This did not change the behavior of
> {{spark.dynamicAllocation.maxExecutors}}. We've noticed that some users set
> {{--num-executors}} higher than the max, with the expectation that the max
> will increase.
> I think that either the max should be increased, or Spark should fail and
> complain that the number of executors requested is higher than the max.
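
For readers following along, a minimal configuration that reproduces the conflict described in the issue might look like the following Scala snippet (the concrete values are illustrative, not from the report):

{code}
import org.apache.spark.SparkConf

// Hypothetical configuration reproducing the situation in the issue:
// dynamic allocation is on, but --num-executors (spark.executor.instances)
// is set higher than spark.dynamicAllocation.maxExecutors.
val conf = new SparkConf()
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true") // needed for dynamic allocation on YARN
  .set("spark.dynamicAllocation.minExecutors", "1")
  .set("spark.dynamicAllocation.maxExecutors", "10")
  .set("spark.executor.instances", "50") // initial executors, above the max of 10
{code}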
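And a minimal sketch of the fail-fast behavior suggested in the comment above, modeled on the existing require in YarnSparkHadoopUtil.scala (the method name and message wording here are hypothetical, not Spark's actual code):

{code}
// Sketch only: validate the initial executor count against the
// dynamic-allocation bounds and fail fast with a clear message.
def checkInitialExecutors(initial: Int, min: Int, max: Int): Unit = {
  require(initial >= min && initial <= max,
    s"initial executor number $initial must be between min executor number " +
    s"$min and max executor number $max (raise " +
    "spark.dynamicAllocation.maxExecutors or lower spark.executor.instances)")
}

// For the configuration above, this throws an IllegalArgumentException:
checkInitialExecutors(initial = 50, min = 1, max = 10)
{code}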