Repository: spark
Updated Branches:
  refs/heads/master 31921e0f0 -> a416e41e2


[SPARK-11809] Switch the default Mesos mode to coarse-grained mode

Based on my conversations with people, I believe the consensus is that the 
coarse-grained mode is more stable and easier to reason about. It is best to 
use that as the default rather than the flakier fine-grained mode.

Author: Reynold Xin <r...@databricks.com>

Closes #9795 from rxin/SPARK-11809.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/a416e41e
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/a416e41e
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/a416e41e

Branch: refs/heads/master
Commit: a416e41e285700f861559d710dbf857405bfddf6
Parents: 31921e0
Author: Reynold Xin <r...@databricks.com>
Authored: Wed Nov 18 12:50:29 2015 -0800
Committer: Reynold Xin <r...@databricks.com>
Committed: Wed Nov 18 12:50:29 2015 -0800

----------------------------------------------------------------------
 .../scala/org/apache/spark/SparkContext.scala   |  2 +-
 docs/job-scheduling.md                          |  2 +-
 docs/running-on-mesos.md                        | 27 ++++++++++++--------
 3 files changed, 19 insertions(+), 12 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/a416e41e/core/src/main/scala/org/apache/spark/SparkContext.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/org/apache/spark/SparkContext.scala b/core/src/main/scala/org/apache/spark/SparkContext.scala
index b5645b0..ab374cb 100644
--- a/core/src/main/scala/org/apache/spark/SparkContext.scala
+++ b/core/src/main/scala/org/apache/spark/SparkContext.scala
@@ -2710,7 +2710,7 @@ object SparkContext extends Logging {
       case mesosUrl @ MESOS_REGEX(_) =>
         MesosNativeLibrary.load()
         val scheduler = new TaskSchedulerImpl(sc)
-        val coarseGrained = sc.conf.getBoolean("spark.mesos.coarse", false)
+        val coarseGrained = sc.conf.getBoolean("spark.mesos.coarse", defaultValue = true)
         val url = mesosUrl.stripPrefix("mesos://") // strip scheme from raw Mesos URLs
         val backend = if (coarseGrained) {
           new CoarseMesosSchedulerBackend(scheduler, sc, url, sc.env.securityManager)
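
With this change, a bare `mesos://` master URL selects the coarse-grained backend
by default. A minimal sketch of opting back into fine-grained scheduling from
application code (the app name and master URL below are hypothetical placeholders):

{% highlight scala %}
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("mesos-mode-demo")        // hypothetical app name
  .setMaster("mesos://host:5050")       // hypothetical Mesos master URL
  .set("spark.mesos.coarse", "false")   // opt back into fine-grained mode

val sc = new SparkContext(conf)         // selects the fine-grained MesosSchedulerBackend
{% endhighlight %}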

http://git-wip-us.apache.org/repos/asf/spark/blob/a416e41e/docs/job-scheduling.md
----------------------------------------------------------------------
diff --git a/docs/job-scheduling.md b/docs/job-scheduling.md
index a3c34cb..36327c6 100644
--- a/docs/job-scheduling.md
+++ b/docs/job-scheduling.md
@@ -47,7 +47,7 @@ application is not running tasks on a machine, other applications may run tasks
 is useful when you expect large numbers of not overly active applications, such as shell sessions from
 separate users. However, it comes with a risk of less predictable latency, because it may take a while for
 an application to gain back cores on one node when it has work to do. To use this mode, simply use a
-`mesos://` URL without setting `spark.mesos.coarse` to true.
+`mesos://` URL and set `spark.mesos.coarse` to false.
 
 Note that none of the modes currently provide memory sharing across applications. If you would like to share
 data this way, we recommend running a single server application that can serve multiple requests by querying
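
Since the fallback now lives in `SparkContext`, the effective mode can be read back
from a running application with the same default the scheduler uses; a small sketch
(assumes `sc` is an existing `SparkContext`):

{% highlight scala %}
// Mirrors the lookup in SparkContext: absent an explicit setting,
// spark.mesos.coarse now resolves to true.
val coarse = sc.getConf.getBoolean("spark.mesos.coarse", defaultValue = true)
println(s"Running Mesos in ${if (coarse) "coarse" else "fine"}-grained mode")
{% endhighlight %}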

http://git-wip-us.apache.org/repos/asf/spark/blob/a416e41e/docs/running-on-mesos.md
----------------------------------------------------------------------
diff --git a/docs/running-on-mesos.md b/docs/running-on-mesos.md
index 5be208c..a197d0e 100644
--- a/docs/running-on-mesos.md
+++ b/docs/running-on-mesos.md
@@ -161,21 +161,15 @@ Note that jars or python files that are passed to spark-submit should be URIs re
 
 # Mesos Run Modes
 
-Spark can run over Mesos in two modes: "fine-grained" (default) and "coarse-grained".
+Spark can run over Mesos in two modes: "coarse-grained" (default) and "fine-grained".
 
-In "fine-grained" mode (default), each Spark task runs as a separate Mesos 
task. This allows
-multiple instances of Spark (and other frameworks) to share machines at a very 
fine granularity,
-where each application gets more or fewer machines as it ramps up and down, 
but it comes with an
-additional overhead in launching each task. This mode may be inappropriate for 
low-latency
-requirements like interactive queries or serving web requests.
-
-The "coarse-grained" mode will instead launch only *one* long-running Spark 
task on each Mesos
+The "coarse-grained" mode will launch only *one* long-running Spark task on 
each Mesos
 machine, and dynamically schedule its own "mini-tasks" within it. The benefit 
is much lower startup
 overhead, but at the cost of reserving the Mesos resources for the complete 
duration of the
 application.
 
-To run in coarse-grained mode, set the `spark.mesos.coarse` property in your
-[SparkConf](configuration.html#spark-properties):
+Coarse-grained is the default mode. You can also set the `spark.mesos.coarse` property to true
+to turn it on explicitly in [SparkConf](configuration.html#spark-properties):
 
 {% highlight scala %}
 conf.set("spark.mesos.coarse", "true")
@@ -186,6 +180,19 @@ acquire. By default, it will acquire *all* cores in the cluster (that get offere
 only makes sense if you run just one application at a time. You can cap the maximum number of cores
 using `conf.set("spark.cores.max", "10")` (for example).
 
+In "fine-grained" mode, each Spark task runs as a separate Mesos task. This 
allows
+multiple instances of Spark (and other frameworks) to share machines at a very 
fine granularity,
+where each application gets more or fewer machines as it ramps up and down, 
but it comes with an
+additional overhead in launching each task. This mode may be inappropriate for 
low-latency
+requirements like interactive queries or serving web requests.
+
+To run in fine-grained mode, set the `spark.mesos.coarse` property to false in your
+[SparkConf](configuration.html#spark-properties):
+
+{% highlight scala %}
+conf.set("spark.mesos.coarse", "false")
+{% endhighlight %}
+
 You may also make use of `spark.mesos.constraints` to set attribute based constraints on mesos resource offers. By default, all resource offers will be accepted.
 
 {% highlight scala %}

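Because coarse-grained mode, now the default, acquires every core offered to it,
capping and constraining offers becomes more important than before. A sketch
following the docs above; the cap value and constraint string are illustrative only:

{% highlight scala %}
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.mesos.coarse", "true")              // explicit, though now the default
  .set("spark.cores.max", "10")                   // cap total cores acquired cluster-wide
  .set("spark.mesos.constraints", "os:centos7")   // illustrative attribute-based constraint
{% endhighlight %}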
