[ https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555871#comment-14555871 ]

Sean Owen commented on SPARK-7699:
----------------------------------

Say initial = 3 and min = 1. There is no load at the start. You may start with 
3 executors (ramp-down occurred after the initial executor request was 
fulfilled) or 1 (ramp-down happened before the request, or the request was 
successfully cancelled). This JIRA says that it should not be 1; you're saying 
it might start at 3. I agree it might start at 3 but disagree with the idea 
that it can't be 1.

In the short term, the right number of executors is 1; 3 executors will ramp 
down to 1 soon anyway. So either of these seems like a reasonable result.
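The no-load convergence can be sketched with a toy sizing policy. This is an illustration of the behavior described above, not Spark's actual ExecutorAllocationManager logic; the function name and the "drop straight to the minimum" rule are assumptions of the sketch.

```python
def next_target(pending_tasks, minimum, maximum):
    """Toy dynamic-allocation policy: with no pending work the target
    drops straight to the minimum; otherwise it tracks demand, clamped
    to [minimum, maximum]."""
    if pending_tasks == 0:
        return minimum
    return max(minimum, min(pending_tasks, maximum))

# No load: whether the app briefly started at initial = 3 or at 1,
# the next target is the minimum either way.
print(next_target(pending_tasks=0, minimum=1, maximum=4))  # prints 1
```

Under this model, starting at 3 and starting at 1 are indistinguishable after one sizing round, which is the point being made: either starting value is a reasonable result when there is no load.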

You can say, well, initial = 3 has no effect or is pointless in this situation. 
That's true: there is no point in setting initial = 3 when there is no load, and 
a caller who knows there is no load shouldn't do it. But either resulting 
behavior seems fine.

I don't see a reason to artificially prevent changing from initial for a while, 
and I don't think it should be scrapped since it does serve a good purpose: 
when there *is* initial load, this lets you start at a much more reasonable 
number of executors and increase from there rather than start from the minimum. 
That's the core purpose of initialExecutors, right? 
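The benefit under load is concrete: starting at initial saves ramp-up rounds compared to starting at the minimum. A sketch under a doubling-style ramp-up; the doubling rate and the function name are assumptions of the sketch, not Spark's exact schedule:

```python
def rounds_to_reach(demand, start, maximum):
    """Count scheduling rounds needed to grow from `start` to cover
    `demand` executors, doubling the target each round (a common
    exponential ramp-up style; the exact rate is an assumption)."""
    target, rounds = start, 0
    while target < min(demand, maximum):
        target = min(target * 2, maximum)
        rounds += 1
    return rounds

# With demand for 4 executors, initial = 3 covers it in one round,
# while starting from min = 1 takes two.
print(rounds_to_reach(demand=4, start=3, maximum=4))  # prints 1
print(rounds_to_reach(demand=4, start=1, maximum=4))  # prints 2
```

The gap widens with larger clusters: growing from 1 to 100 takes several doubling rounds that an initial of, say, 50 skips entirely.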

That's why I disagree with the premise of the JIRA that it has no effect, and 
why I say the current behavior seems correct in the no-load case, assuming 
rapid ramp-down is desired -- and I think everyone agrees with that. That's why 
I'd say this isn't a problem. Does this hold together?

> Config "spark.dynamicAllocation.initialExecutors" has no effect 
> ----------------------------------------------------------------
>
>                 Key: SPARK-7699
>                 URL: https://issues.apache.org/jira/browse/SPARK-7699
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>            Reporter: meiyoula
>
> spark.dynamicAllocation.minExecutors 2
> spark.dynamicAllocation.initialExecutors  3
> spark.dynamicAllocation.maxExecutors 4
> Just run spark-shell with the above configurations; the initial executor 
> number is 2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
