Repository: spark
Updated Branches:
  refs/heads/master 99b06b6fd -> 8af237061


[Docs] Fix outdated docs for standalone cluster

This is now supported!

Author: andrewor14 <andrewo...@gmail.com>
Author: Andrew Or <andrewo...@gmail.com>

Closes #2461 from andrewor14/document-standalone-cluster and squashes the following commits:

85c8b9e [andrewor14] Wording change per Patrick
35e30ee [Andrew Or] Fix outdated docs for standalone cluster


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/8af23706
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/8af23706
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/8af23706

Branch: refs/heads/master
Commit: 8af2370619a8a6bb1af7df43b8329ab319348ad8
Parents: 99b06b6
Author: andrewor14 <andrewo...@gmail.com>
Authored: Fri Sep 19 16:02:38 2014 -0700
Committer: Andrew Or <andrewo...@gmail.com>
Committed: Fri Sep 19 16:02:38 2014 -0700

----------------------------------------------------------------------
 docs/spark-standalone.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/8af23706/docs/spark-standalone.md
----------------------------------------------------------------------
diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index 99a8e43..29b5491 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -248,8 +248,10 @@ You can also pass an option `--cores <numCores>` to control the number of cores
 
 The [`spark-submit` script](submitting-applications.html) provides the most straightforward way to
 submit a compiled Spark application to the cluster. For standalone clusters, Spark currently
-only supports deploying the driver inside the client process that is submitting the application
-(`client` deploy mode).
+supports two deploy modes. In `client` mode, the driver is launched in the same process as the
+client that submits the application. In `cluster` mode, however, the driver is launched from one
+of the Worker processes inside the cluster, and the client process exits as soon as it fulfills
+its responsibility of submitting the application without waiting for the application to finish.
 
 If your application is launched through Spark submit, then the application jar is automatically
 distributed to all worker nodes. For any additional jars that your application depends on, you

