[ 
https://issues.apache.org/jira/browse/SPARK-19380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Owen updated SPARK-19380:
------------------------------
    Target Version/s:   (was: 1.6.4)

(Don't set target version)
This doesn't sound like a problem. You're saying the number of executors can 
grow to the max number. Of course it can.

> YARN - Dynamic allocation should use configured number of executors as max 
> number of executors
> ----------------------------------------------------------------------------------------------
>
>                 Key: SPARK-19380
>                 URL: https://issues.apache.org/jira/browse/SPARK-19380
>             Project: Spark
>          Issue Type: Improvement
>          Components: YARN
>    Affects Versions: 1.6.3
>            Reporter: Zhe Zhang
>
>  SPARK-13723 only uses the user's requested number of executors as the 
> initial number of executors when dynamic allocation is turned on.
> If the configured max number of executors is larger than the number 
> requested by the user, the user's application can keep requesting more 
> executors, up to the configured max, whenever tasks are backed up. This 
> behavior is not very friendly to the cluster if every Spark application is 
> allowed to reach the max number of executors.
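For context, the behavior described above is controlled by Spark's dynamic-allocation properties. A minimal sketch of a submission that caps an application at its own requested size (the application name is hypothetical; the property names are standard Spark configuration):

```shell
# Sketch: cap dynamic allocation at the application's requested size.
# Per SPARK-13723, --num-executors sets only the *initial* count; without
# an explicit cap, the app can grow to spark.dynamicAllocation.maxExecutors.
# Setting the cap equal to the request prevents growth beyond it.
spark-submit \
  --master yarn \
  --num-executors 10 \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.maxExecutors=10 \
  my_app.py   # hypothetical application
```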



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
