Repository: spark
Updated Branches:
  refs/heads/master 0d9ab0167 -> 416003b26


[DOCS] Small fixes to Spark on Yarn doc

* a follow-up to 16b6d18613e150c7038c613992d80a7828413e66 as the `--num-executors` flag is not supported.
* links + formatting

Author: Jacek Laskowski <jacek.laskow...@deepsense.io>

Closes #8762 from jaceklaskowski/docs-spark-on-yarn.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/416003b2
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/416003b2
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/416003b2

Branch: refs/heads/master
Commit: 416003b26401894ec712e1a5291a92adfbc5af01
Parents: 0d9ab01
Author: Jacek Laskowski <jacek.laskow...@deepsense.io>
Authored: Tue Sep 15 20:42:33 2015 +0100
Committer: Sean Owen <so...@cloudera.com>
Committed: Tue Sep 15 20:42:33 2015 +0100

----------------------------------------------------------------------
 docs/running-on-yarn.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/416003b2/docs/running-on-yarn.md
----------------------------------------------------------------------
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index 5159ef9..d124432 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -18,16 +18,16 @@ Spark application's configuration (driver, executors, and the AM when running in
 
 There are two deploy modes that can be used to launch Spark applications on YARN. In `yarn-cluster` mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application. In `yarn-client` mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.
 
-Unlike in Spark standalone and Mesos mode, in which the master's address is specified in the `--master` parameter, in YARN mode the ResourceManager's address is picked up from the Hadoop configuration. Thus, the `--master` parameter is `yarn-client` or `yarn-cluster`.
+Unlike [Spark standalone](spark-standalone.html) and [Mesos](running-on-mesos.html) modes, in which the master's address is specified in the `--master` parameter, in YARN mode the ResourceManager's address is picked up from the Hadoop configuration. Thus, the `--master` parameter is `yarn-client` or `yarn-cluster`.
+
 To launch a Spark application in `yarn-cluster` mode:
 
-   `$ ./bin/spark-submit --class path.to.your.Class --master yarn-cluster [options] <app jar> [app options]`
+    $ ./bin/spark-submit --class path.to.your.Class --master yarn-cluster [options] <app jar> [app options]
     
 For example:
 
     $ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
         --master yarn-cluster \
-        --num-executors 3 \
         --driver-memory 4g \
         --executor-memory 2g \
         --executor-cores 1 \
@@ -37,7 +37,7 @@ For example:
 
 The above starts a YARN client program which starts the default Application Master. Then SparkPi will be run as a child thread of Application Master. The client will periodically poll the Application Master for status updates and display them in the console. The client will exit once your application has finished running. Refer to the "Debugging your Application" section below for how to see driver and executor logs.
 
-To launch a Spark application in `yarn-client` mode, do the same, but replace `yarn-cluster` with `yarn-client`.  To run spark-shell:
+To launch a Spark application in `yarn-client` mode, do the same, but replace `yarn-cluster` with `yarn-client`. The following shows how you can run `spark-shell` in `yarn-client` mode:
 
     $ ./bin/spark-shell --master yarn-client
 
@@ -54,8 +54,8 @@ In `yarn-cluster` mode, the driver runs on a different machine than the client,
 
 # Preparations
 
-Running Spark-on-YARN requires a binary distribution of Spark which is built with YARN support.
-Binary distributions can be downloaded from the Spark project website. 
+Running Spark on YARN requires a binary distribution of Spark which is built with YARN support.
+Binary distributions can be downloaded from the [downloads page](http://spark.apache.org/downloads.html) of the project website.
 To build Spark yourself, refer to [Building Spark](building-spark.html).
 
 # Configuration


