[ https://issues.apache.org/jira/browse/SPARK-11555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14993815#comment-14993815 ]

Thomas Graves commented on SPARK-11555:
---------------------------------------

To fix this we can simply pass in an optional numExecutors as a second 
parameter and have it default to DEFAULT_NUMBER_EXECUTORS.  I'll have a patch 
up shortly after testing.
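A minimal sketch of the fix described above, assuming a hypothetical `getNumExecutors` helper (the method name and config-lookup details here are illustrative, not the actual patch):

```scala
object ClientArguments {
  // Spark's built-in default when no executor count is specified.
  val DEFAULT_NUMBER_EXECUTORS = 2

  // Sketch: accept an optional numExecutors as a second parameter so the
  // old --num-workers parsing path can pass the user's value through,
  // while other callers fall back to DEFAULT_NUMBER_EXECUTORS.
  def getNumExecutors(conf: Map[String, String],
                      numExecutors: Int = DEFAULT_NUMBER_EXECUTORS): Int = {
    conf.get("spark.executor.instances").map(_.toInt).getOrElse(numExecutors)
  }
}
```

With this shape, existing call sites keep their behavior (the default applies), while the `--num-workers` code path can call `getNumExecutors(conf, parsedNumWorkers)` to honor the flag.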

> spark on yarn spark-class --num-workers doesn't work
> ----------------------------------------------------
>
>                 Key: SPARK-11555
>                 URL: https://issues.apache.org/jira/browse/SPARK-11555
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.5.2
>            Reporter: Thomas Graves
>            Assignee: Thomas Graves
>            Priority: Critical
>
> When using the old spark-class and --num-workers interface, the --num-workers 
> parameter is ignored and the default number of executors (2) is always used.
> bin/spark-class org.apache.spark.deploy.yarn.Client --jar 
> lib/spark-examples-1.5.2.0-hadoop2.6.0.16.1506060127.jar --class 
> org.apache.spark.examples.SparkPi --num-workers 4 --worker-memory 2g 
> --master-memory 1g --worker-cores 1 --queue default



