[ https://issues.apache.org/jira/browse/SPARK-7699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555837#comment-14555837 ]

Sandy Ryza commented on SPARK-7699:
-----------------------------------

Sorry for the delay here.  The desired behavior is to never have outstanding 
requests for more executors than we'd need to satisfy all current tasks 
(unless minExecutors is set); i.e., we don't want to ramp down gradually.
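
To make that concrete, here is a rough sketch of the policy with hypothetical 
names (the real logic lives in ExecutorAllocationManager, not in these exact 
methods):

    // Executors needed to run every pending and running task at once,
    // i.e. ceil(totalTasks / tasksPerExecutor).
    def maxExecutorsNeeded(pendingTasks: Int, runningTasks: Int,
                           tasksPerExecutor: Int): Int =
      (pendingTasks + runningTasks + tasksPerExecutor - 1) / tasksPerExecutor

    // The target we ask the cluster manager for, clamped to the
    // configured bounds.
    def targetExecutors(needed: Int, minExecutors: Int,
                        maxExecutors: Int): Int =
      math.max(minExecutors, math.min(needed, maxExecutors))

Note that initialExecutors appears nowhere in the target: that's exactly why 
the initial request gets cancelled once this math takes over.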

So I think the concerns here are valid: this policy means that as soon as the 
dynamic allocation thread becomes active, it will cancel any container 
requests that were made as a result of initialExecutors.  If, however, those 
requests had already been fulfilled, dynamic allocation wouldn't throw the 
executors away.
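
Put differently (again with hypothetical names), ramping down only trims 
whatever is still outstanding; executors that were already granted are left 
alone:

    // Given a new target, shrink the outstanding container requests as
    // needed, but never kill executors that were already granted; those
    // only go away through the separate idle-timeout path.  Returns the
    // new number of outstanding requests.
    def rampDownTo(target: Int, outstanding: Int, granted: Int): Int =
      math.max(0, math.min(outstanding, target - granted))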

This means that the relevant questions are: is there a window of time after 
we've requested the initial executors but before the dynamic allocation thread 
starts?  Is there something fundamental about this window that means it will 
probably still be there after future scheduling optimizations?  If not, do we 
want to make sure that ExecutorAllocationManager itself doesn't ramp down 
below initialExecutors until some criterion (probably time) is satisfied?  Or 
should we just scrap the property?
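
If we went with the time-based option, the floor could look something like 
this sketch (illustrative names, not the actual ExecutorAllocationManager 
fields):

    // Keep the effective lower bound at initialExecutors for a grace
    // period after startup, then fall back to minExecutors.
    class RampDownFloor(initialExecutors: Int, minExecutors: Int,
                        gracePeriodMs: Long) {
      private val startTime = System.currentTimeMillis()

      def currentFloor(): Int =
        if (System.currentTimeMillis() - startTime < gracePeriodMs)
          math.max(initialExecutors, minExecutors)
        else
          minExecutors
    }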


> Config "spark.dynamicAllocation.initialExecutors" has no effect 
> ----------------------------------------------------------------
>
>                 Key: SPARK-7699
>                 URL: https://issues.apache.org/jira/browse/SPARK-7699
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>            Reporter: meiyoula
>
> spark.dynamicAllocation.minExecutors 2
> spark.dynamicAllocation.initialExecutors  3
> spark.dynamicAllocation.maxExecutors 4
> Just run spark-shell with the above configurations; the initial executor 
> count is 2, not the configured 3.
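
For reference, a sketch of the reporter's settings as they would be set 
programmatically in a standalone app (with spark-shell the same keys go 
through --conf or spark-defaults.conf); dynamic allocation also requires 
spark.dynamicAllocation.enabled and the external shuffle service:

    import org.apache.spark.SparkConf

    // The reporter's three properties, plus the two switches dynamic
    // allocation needs in the first place.
    val conf = new SparkConf()
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.shuffle.service.enabled", "true")
      .set("spark.dynamicAllocation.minExecutors", "2")
      .set("spark.dynamicAllocation.initialExecutors", "3")
      .set("spark.dynamicAllocation.maxExecutors", "4")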


