[ https://issues.apache.org/jira/browse/SPARK-11555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen updated SPARK-11555:
------------------------------
    Priority: Minor  (was: Major)

OK, that still passes through {{ClientArguments.scala}}, which parses 
{{--num-executors}} and warns you if you're using the old arg, but that's not 
important.

The flow is the same, so it might not even be specific to spark-class. I see 
that it correctly parses your flag but then unconditionally overwrites it with 
the value of {{spark.executor.instances}}, which isn't set at that point, so it 
falls back to the default of 2. Before, it would fall back to the current 
{{numExecutors}}. It was basically this change:

https://github.com/apache/spark/pull/8910/files#diff-746d34aa06bfa57adb9289011e725472R84

It collapsed two similar blocks of initialization, but in doing so changed the 
behavior, roughly as sketched below. [~jerryshao] what do you think?
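
To make the regression concrete, here's a minimal, self-contained sketch of 
the before/after initialization pattern. It is not the actual Spark source; 
the plain {{conf}} map, the method names, and the flag value are made up for 
illustration:

{code:scala}
object NumExecutorsSketch {
  val DefaultNumberExecutors = 2

  // Before the refactoring: the config value is only consulted as a
  // fallback, so a value parsed from --num-workers / --num-executors
  // survives.
  def numExecutorsBefore(conf: Map[String, String], parsedFromFlag: Int): Int =
    conf.get("spark.executor.instances").map(_.toInt).getOrElse(parsedFromFlag)

  // After the refactoring: the parsed flag value is discarded, and since
  // spark.executor.instances isn't set at this point, the lookup always
  // falls back to the hard default of 2.
  def numExecutorsAfter(conf: Map[String, String], parsedFromFlag: Int): Int =
    conf.get("spark.executor.instances").map(_.toInt)
      .getOrElse(DefaultNumberExecutors)

  def main(args: Array[String]): Unit = {
    val conf = Map.empty[String, String] // spark.executor.instances unset
    val parsedFromFlag = 4               // e.g. --num-workers 4
    println(numExecutorsBefore(conf, parsedFromFlag)) // prints 4
    println(numExecutorsAfter(conf, parsedFromFlag))  // prints 2
  }
}
{code}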

> spark on yarn spark-class --num-workers doesn't work
> ----------------------------------------------------
>
>                 Key: SPARK-11555
>                 URL: https://issues.apache.org/jira/browse/SPARK-11555
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.5.2
>            Reporter: Thomas Graves
>            Priority: Minor
>
> Using the old spark-class and --num-workers interface, the --num-workers 
> parameter is ignored and the default number of executors (2) is always used.
> bin/spark-class org.apache.spark.deploy.yarn.Client --jar 
> lib/spark-examples-1.5.2.0-hadoop2.6.0.16.1506060127.jar --class 
> org.apache.spark.examples.SparkPi --num-workers 4 --worker-memory 2g 
> --master-memory 1g --worker-cores 1 --queue default


