Repository: spark
Updated Branches:
  refs/heads/master 5c99d8bf9 -> 741a29f98


[SPARK-9575] [MESOS] Add documentation around Mesos shuffle service.

andrewor14

Author: Timothy Chen <tnac...@gmail.com>

Closes #7907 from tnachen/mesos_shuffle.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/741a29f9
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/741a29f9
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/741a29f9

Branch: refs/heads/master
Commit: 741a29f98945538a475579ccc974cd42c1613be4
Parents: 5c99d8b
Author: Timothy Chen <tnac...@gmail.com>
Authored: Tue Aug 11 23:33:22 2015 -0700
Committer: Andrew Or <and...@databricks.com>
Committed: Tue Aug 11 23:33:22 2015 -0700

----------------------------------------------------------------------
 docs/running-on-mesos.md | 14 ++++++++++++++
 1 file changed, 14 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/741a29f9/docs/running-on-mesos.md
----------------------------------------------------------------------
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 55e6d4e..cfd219a 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -216,6 +216,20 @@ node. Please refer to [Hadoop on Mesos](https://github.com/mesos/hadoop).
 
 In either case, HDFS runs separately from Hadoop MapReduce, without being scheduled through Mesos.
 
+# Dynamic Resource Allocation with Mesos
+
+Mesos supports dynamic allocation only in coarse-grained mode, which can resize the number of executors based on statistics
+of the application. While dynamic allocation supports both scaling up and scaling down the number of executors, the coarse-grained scheduler only supports scaling down,
+since it is already designed to run one executor per slave with the configured amount of resources. However, after scaling down the number of executors, the coarse-grained scheduler
+can scale back up to the same number of executors when Spark signals that more executors are needed.
+
+Users who want to use this feature should launch the Mesos Shuffle Service, which
+provides shuffle data cleanup functionality on top of the Shuffle Service, since Mesos doesn't yet support notifying other frameworks of a framework's
+termination. To start or stop the Mesos Shuffle Service, use the provided sbin/start-mesos-shuffle-service.sh and sbin/stop-mesos-shuffle-service.sh
+scripts respectively.
+
+The Shuffle Service is expected to be running on each slave node that will run Spark executors. One way to easily achieve this with Mesos
+is to launch the Shuffle Service with Marathon with a unique host constraint.
 
 # Configuration
 

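As a sketch of the setup the added documentation describes (the property names are standard Spark configuration keys of this era; paths assume an unpacked Spark distribution, so treat the exact locations as assumptions):

```shell
# On each slave node that will run Spark executors, start the
# Mesos shuffle service (the matching stop script shuts it down):
./sbin/start-mesos-shuffle-service.sh
# ... later: ./sbin/stop-mesos-shuffle-service.sh

# In conf/spark-defaults.conf, enable coarse-grained mode, the
# external shuffle service, and dynamic allocation:
#
#   spark.mesos.coarse               true
#   spark.shuffle.service.enabled    true
#   spark.dynamicAllocation.enabled  true
```

With these settings, the coarse-grained scheduler can release idle executors and later reacquire them, while the shuffle service preserves (and eventually cleans up) shuffle data for executors that have been scaled down.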

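To illustrate the Marathon approach mentioned in the docs, here is a hypothetical Marathon app definition using the `["hostname", "UNIQUE"]` constraint so that at most one instance runs per slave; the `id`, `cmd`, resource sizes, and instance count are illustrative assumptions, not values from the Spark docs:

```json
{
  "id": "spark-mesos-shuffle-service",
  "cmd": "/opt/spark/sbin/start-mesos-shuffle-service.sh && sleep infinity",
  "instances": 10,
  "cpus": 0.5,
  "mem": 1024,
  "constraints": [["hostname", "UNIQUE"]]
}
```

Note that the sbin script daemonizes, which Marathon would see as the task exiting; in practice one would run the shuffle service class in the foreground (the `sleep infinity` above is only a crude placeholder for that).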