[ https://issues.apache.org/jira/browse/SPARK-2945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14091817#comment-14091817 ]

Sandy Ryza commented on SPARK-2945:
-----------------------------------

spark.executor.instances apparently isn't used for anything other than 
calculating how long to wait for executors to register before running jobs. 
So I believe there would actually be some work required to make 
spark.executor.instances function this way.
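
To make the distinction concrete, here is a rough sketch of the two roles in 
question. The check below is only the kind of logic the comment describes; the 
property name is real, but the default, the ratio, and the helper are 
illustrative, not Spark's actual internals:

    import org.apache.spark.SparkConf

    // What spark.executor.instances is used for today (per the comment above):
    // deciding when "enough" executors have registered to start running jobs.
    val conf = new SparkConf()
    val expected = conf.getInt("spark.executor.instances", 2) // default is illustrative
    def enoughRegistered(registered: Int): Boolean =
      registered >= (expected * 0.8).toInt // 0.8 ratio is illustrative

    // What it is NOT yet used for: actually requesting that many containers
    // from YARN, which is the work that would be required here.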

> Allow specifying num of executors in the context configuration
> --------------------------------------------------------------
>
>                 Key: SPARK-2945
>                 URL: https://issues.apache.org/jira/browse/SPARK-2945
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, YARN
>    Affects Versions: 1.0.0
>         Environment: Ubuntu precise, on YARN (CDH 5.1.0)
>            Reporter: Shay Rojansky
>
> Running on YARN, the only way to specify the number of executors seems to be 
> on the spark-submit command line, via the --num-executors switch.
> In many cases this is too early. Our Spark app receives command-line 
> arguments that determine how much work needs to be done, and that in turn 
> affects the number of executors it ideally requires. The Spark context 
> configuration should support specifying this like any other config param.
> Our current workaround is a wrapper script that determines how much work is 
> needed and then launches spark-submit itself, passing the result to 
> --num-executors; it's a shame to have to do this (see the sketch below).
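
A minimal sketch of the usage the reporter is asking for, assuming the 
spark.executor.instances property discussed above is the one that would be 
honored. Whether setting it programmatically takes effect on YARN is exactly 
what this issue is about, so this shows the desired behavior, not current 
behavior; the sizing helper is hypothetical:

    import org.apache.spark.{SparkConf, SparkContext}

    object WorkloadSizedApp {
      // Hypothetical helper: derive the executor count from the app's own
      // arguments, the step the wrapper script currently does outside Spark.
      def executorsFor(args: Array[String]): Int = math.max(2, args.length)

      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("WorkloadSizedApp")
          // Desired: honored at allocation time, like any other config param.
          // Today on YARN, only --num-executors on spark-submit has this effect.
          .set("spark.executor.instances", executorsFor(args).toString)
        val sc = new SparkContext(conf)
        try {
          sc.parallelize(1 to 100).count() // stand-in for the real job
        } finally {
          sc.stop()
        }
      }
    }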


