[ https://issues.apache.org/jira/browse/SPARK-2945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14092058#comment-14092058 ]
Shay Rojansky commented on SPARK-2945:
--------------------------------------

I just did a quick test on Spark 1.0.2, and spark.executor.instances does indeed appear to control the number of executors allocated (at least on YARN). Should I keep this open for you guys to take a look and update the docs?

> Allow specifying num of executors in the context configuration
> --------------------------------------------------------------
>
>                 Key: SPARK-2945
>                 URL: https://issues.apache.org/jira/browse/SPARK-2945
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, YARN
>    Affects Versions: 1.0.0
>         Environment: Ubuntu precise, on YARN (CDH 5.1.0)
>            Reporter: Shay Rojansky
>
> Running on YARN, the only way to specify the number of executors seems to be on the command line of spark-submit, via the --num-executors switch.
> In many cases this is too early. Our Spark app receives some command-line arguments that determine how much work needs to be done, and that in turn affects the number of executors it ideally requires. Ideally, the Spark context configuration would support specifying this like any other config param.
> Our current workaround is a wrapper script that determines how much work is needed and then itself launches spark-submit, passing the computed number to --num-executors. It's a shame to have to do this.
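For reference, here is a minimal sketch of the programmatic route described in the comment above, assuming Spark 1.0.x on YARN. computeRequiredExecutors is a hypothetical helper standing in for whatever app-specific sizing logic the wrapper script currently runs; spark.executor.instances and the SparkConf API are as documented in Spark's configuration docs.

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

object MyApp {
  def main(args: Array[String]): Unit = {
    // Hypothetical helper: derive the executor count from the amount of
    // work requested, the way the wrapper script does before spark-submit.
    val numExecutors = computeRequiredExecutors(args)

    val conf = new SparkConf()
      .setAppName("MyApp")
      // Equivalent of spark-submit's --num-executors, set in the context
      // configuration instead (observed to work on Spark 1.0.2 / YARN).
      .set("spark.executor.instances", numExecutors.toString)

    val sc = new SparkContext(conf)
    // ... actual job ...
    sc.stop()
  }

  // Placeholder sizing logic; replace with the real calculation.
  def computeRequiredExecutors(args: Array[String]): Int =
    args.headOption.map(_.toInt).getOrElse(2)
}
{code}

Note that the setting has to be in place before the SparkContext is created: on these Spark versions the executors are requested once, when the YARN application starts, so setting it on an already-running context has no effect.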