[ https://issues.apache.org/jira/browse/SPARK-4585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14224257#comment-14224257 ]

Sandy Ryza commented on SPARK-4585:
-----------------------------------

I was discussing this with [~brocknoland].  The issue is that picking a number 
of executors at application start time is difficult.  In the common Hive case, 
we don't know what queries the user will run when the app starts, so even if 
there were a good heuristic, we wouldn't really be able to apply it.  If the 
number of starting executors and the maximum number of executors are linked, 
there's no good way to set a reasonable default for the starting number that 
still allows apps to scale up to big queries.
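
To make the distinction concrete, here is a minimal sketch of what decoupling 
the two could look like on the configuration side.  The min/max keys are the 
ones introduced by SPARK-3174; the separate initial-executors key is a 
hypothetical illustration of what this issue is proposing, not a setting that 
exists today:

    import org.apache.spark.SparkConf

    // Bounds for dynamic allocation (per SPARK-3174).
    val conf = new SparkConf()
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.dynamicAllocation.minExecutors", "2")    // floor
      .set("spark.dynamicAllocation.maxExecutors", "200")  // ceiling for big queries
      // Hypothetical: start near the minimum instead of at the max,
      // then scale up on demand as queries arrive.
      .set("spark.dynamicAllocation.initialExecutors", "2")

With something along those lines, a Hive session could default its starting 
count to the minimum while leaving the maximum high enough for large queries.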

> Spark dynamic executor scaling uses the upper limit value as the default.
> --------------------------------------------------------------------------
>
>                 Key: SPARK-4585
>                 URL: https://issues.apache.org/jira/browse/SPARK-4585
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, YARN
>    Affects Versions: 1.1.0
>            Reporter: Chengxiang Li
>
> With SPARK-3174, one can configure a minimum and maximum number of executors 
> for a Spark application on YARN. However, the application always starts with 
> the maximum. It seems more reasonable, at least for Hive on Spark, to start 
> from the minimum and scale up toward the maximum as needed.


