[ 
https://issues.apache.org/jira/browse/SPARK-16435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371975#comment-15371975
 ] 

Saisai Shao commented on SPARK-16435:
-------------------------------------

OK, I will file a small patch to add the warning log about this invalid 
configuration.
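
For reference, a minimal sketch of the kind of check I have in mind (the object, method, and logger names here are only illustrative, not the actual patch or Spark internals):

{code}
// Sketch only: warn about the invalid setting instead of silently ignoring it.
// Names are illustrative; this is not the real Spark code path.
import org.slf4j.LoggerFactory

object DynAllocConfigCheck {
  private val log = LoggerFactory.getLogger(getClass)

  def warnIfInvalid(initialExecutors: Int, minExecutors: Int): Unit = {
    if (initialExecutors < minExecutors) {
      log.warn(s"spark.dynamicAllocation.initialExecutors ($initialExecutors) is less than " +
        s"spark.dynamicAllocation.minExecutors ($minExecutors); the initial value will be " +
        s"ignored and $minExecutors will be used instead.")
    }
  }
}
{code}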

> Behavior changes if initialExecutor is less than minExecutor for dynamic 
> allocation
> -----------------------------------------------------------------------------------
>
>                 Key: SPARK-16435
>                 URL: https://issues.apache.org/jira/browse/SPARK-16435
>             Project: Spark
>          Issue Type: Bug
>          Components: Scheduler, Spark Core
>    Affects Versions: 2.0.0
>            Reporter: Saisai Shao
>            Priority: Minor
>
> After SPARK-13723, the behavior changed for the situation where 
> {{spark.dynamicAllocation.initialExecutors}} is less than 
> {{spark.dynamicAllocation.minExecutors}}.
> initialExecutors < minExecutors is an invalid setting.
> h4. Before SPARK-13723
> If initialExecutors < minExecutors, Spark throws an exception:
> {code}
> java.lang.IllegalArgumentException: requirement failed: initial executor 
> number xxx must between min executor number xxx and max executor number xxx
> {code}
> This clearly lets the user know that the current configuration is invalid.
> h4. After SPARK-13723
> Because we also consider {{spark.executor.instances}}, the initial number 
> is the maximum of minExecutors, initialExecutors, and numExecutors.
> This silently ignores the situation where initialExecutors < minExecutors.
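> A minimal, self-contained sketch of how the initial target ends up being computed (the names and values are illustrative, not the exact Spark code):
> {code}
> // Illustrative only: after SPARK-13723 the effective initial target is the
> // maximum of the three settings, so an initialExecutors below minExecutors
> // is silently bumped up rather than rejected.
> val minExecutors = 3        // spark.dynamicAllocation.minExecutors
> val initialExecutors = 1    // spark.dynamicAllocation.initialExecutors (invalid: < min)
> val numExecutors = 0        // spark.executor.instances
> val initialTarget = Seq(minExecutors, initialExecutors, numExecutors).max
> // initialTarget == 3: the invalid initialExecutors = 1 is ignored with no warning
> {code}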
> So at least we should add a warning log to let the user know this is an 
> invalid configuration.
> What do you think [~tgraves], [~rdblue]?


