[ https://issues.apache.org/jira/browse/SPARK-16382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364363#comment-15364363 ]

Thomas Graves commented on SPARK-16382:
---------------------------------------

I think we should fail and complain, and I actually thought it already did that. 
That way, if users are picking up some cluster defaults, they explicitly have to 
override that setting to go higher. If not, they just have conflicting configs 
and should fix them anyway.
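
For example, a job that wants more executors than a cluster default of 
{{spark.dynamicAllocation.maxExecutors}} allows would have to bump the max 
itself. A minimal sketch (the 2000/1000 values just mirror the error below; 
the keys are the standard dynamic allocation configs):

{code:scala}
import org.apache.spark.SparkConf

// Illustrative values: a cluster default caps the max at 1000, so a job
// asking for 2000 initial executors must raise the max explicitly.
val conf = new SparkConf()
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.executor.instances", "2000")             // initial target
  .set("spark.dynamicAllocation.maxExecutors", "2000") // must be >= initial
{code}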

What is the behavior you are getting now?

It fails for me in client mode, but maybe there are cases where it doesn't?
java.lang.IllegalArgumentException: requirement failed: initial executor number 2000 must between min executor number 0 and max executor number 1000
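
The check producing that message looks roughly like the following {{require}} 
on the YARN side. This is a sketch, not the exact Spark source; the method 
name and config defaults are illustrative (note {{require}} is what prepends 
the "requirement failed:" prefix):

{code:scala}
import org.apache.spark.SparkConf

// Sketch of the kind of validation that throws the IllegalArgumentException
// above; names and defaults are illustrative, not the exact Spark source.
def validateInitialExecutors(conf: SparkConf): Int = {
  val minNumExecutors = conf.getInt("spark.dynamicAllocation.minExecutors", 0)
  val maxNumExecutors = conf.getInt("spark.dynamicAllocation.maxExecutors", 1000)
  val initialNumExecutors = conf.getInt("spark.executor.instances", minNumExecutors)
  require(initialNumExecutors >= minNumExecutors && initialNumExecutors <= maxNumExecutors,
    s"initial executor number $initialNumExecutors must between min executor number " +
      s"$minNumExecutors and max executor number $maxNumExecutors")
  initialNumExecutors
}
{code}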


> YARN - Dynamic allocation with spark.executor.instances should increase max 
> executors.
> --------------------------------------------------------------------------------------
>
>                 Key: SPARK-16382
>                 URL: https://issues.apache.org/jira/browse/SPARK-16382
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>            Reporter: Ryan Blue
>
> SPARK-13723 changed the behavior of dynamic allocation when 
> {{--num-executors}} ({{spark.executor.instances}}) is set. Rather than 
> turning off dynamic allocation, the value is now used as the initial number 
> of executors. This did not change the behavior of 
> {{spark.dynamicAllocation.maxExecutors}}. We've noticed that some users set 
> {{--num-executors}} higher than the max, expecting the max to increase.
> I think that either the max should be increased, or Spark should fail and 
> complain that the number of executors requested is higher than the max.
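
If the first option were taken, the change could be as small as lifting the 
max to the requested instance count instead of failing. A sketch under that 
assumption (illustrative names only, not a proposed patch):

{code:scala}
import org.apache.spark.SparkConf

// Sketch of the "increase the max" option: take the larger of the
// configured max and the requested instances. Illustrative only.
def effectiveMaxExecutors(conf: SparkConf): Int = {
  val requested = conf.getInt("spark.executor.instances", 0)
  val configuredMax = conf.getInt("spark.dynamicAllocation.maxExecutors", Int.MaxValue)
  math.max(configuredMax, requested)
}
{code}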


