[ 
https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14557911#comment-14557911
 ] 

Sandy Ryza edited comment on SPARK-7699 at 5/25/15 1:26 AM:
------------------------------------------------------------

[~sowen] I think the possible flaw in your argument is that it relies on 
"initial load" being defined in some reasonable way.

I.e. I think the worry is that the following can happen:
* initial = 3 and min = 1
* cluster is large and uncontended
* first line of user code is a job submission that can make use of at least 3
* because the executor allocation thread starts immediately, the requested 
executor count ramps down to 1 before the user code has a chance to submit the job

Which is to say: what guarantees do we provide about initialExecutors other 
than that it's the number of executor requests we have before some opaque 
internal thing happens to adjust it down?  One possible such guarantee we could 
provide is that we won't adjust down for some fixed number of seconds after the 
SparkContext starts.
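To make the proposed guarantee concrete, here is a minimal sketch of what the lower-bound logic could look like. All names here (AllocationState, lowerBound, graceMs) are hypothetical and are not the actual ExecutorAllocationManager implementation; the point is only that the floor stays at initialExecutors until a fixed grace period after startup has elapsed.

```scala
// Hypothetical sketch, NOT the real ExecutorAllocationManager code:
// the allocation manager would refuse to ramp below initialExecutors
// until a fixed grace period after SparkContext startup has elapsed.
case class AllocationState(startTimeMs: Long, initialExecutors: Int, minExecutors: Int)

// Lower bound on the executor request at time `nowMs`: during the grace
// period the floor is initialExecutors (or minExecutors if that is higher);
// afterwards it drops to minExecutors as today.
def lowerBound(state: AllocationState, nowMs: Long, graceMs: Long): Int =
  if (nowMs - state.startTimeMs < graceMs)
    math.max(state.initialExecutors, state.minExecutors)
  else
    state.minExecutors
```

With initial = 3 and min = 1, the floor would stay at 3 during the grace period and fall to 1 only afterwards, which closes the race with user code that submits a job on its first line.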



> Config "spark.dynamicAllocation.initialExecutors" has no effect 
> ----------------------------------------------------------------
>
>                 Key: SPARK-7699
>                 URL: https://issues.apache.org/jira/browse/SPARK-7699
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>            Reporter: meiyoula
>
> spark.dynamicAllocation.minExecutors 2
> spark.dynamicAllocation.initialExecutors  3
> spark.dynamicAllocation.maxExecutors 4
> Just run the spark-shell with above configurations, the initial executor 
> number is 2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
