spark git commit: [SPARK-12345][MESOS] Filter SPARK_HOME when submitting Spark jobs with Mesos cluster mode.

2015-12-16 Thread andrewor14
Repository: spark
Updated Branches:
  refs/heads/master 26d70bd2b -> ad8c1f0b8


[SPARK-12345][MESOS] Filter SPARK_HOME when submitting Spark jobs with Mesos 
cluster mode.

SPARK_HOME is now causing problems with Mesos cluster mode, since the 
spark-submit script was recently changed to take precedence when running 
spark-class scripts by looking in SPARK_HOME if it's defined.

We should skip passing SPARK_HOME from the Spark client in cluster mode with 
Mesos, since Mesos shouldn't use this configuration but should use 
spark.executor.home instead.
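The approach described above amounts to dropping SPARK_HOME from the environment map before building the driver description. A minimal standalone sketch of the idea, assuming the environment variables arrive as a `Map[String, String]` (the object and method names here are hypothetical, not the actual MesosSubmitRequestServlet code):

```scala
// Hypothetical sketch: strip SPARK_HOME from a client-supplied environment
// so the driver resolves Spark's location from spark.executor.home /
// spark.home instead of the client machine's layout.
object FilterSparkHome {
  def filterEnv(env: Map[String, String]): Map[String, String] =
    // Match on the key; SPARK_HOME entries are dropped, everything else kept.
    env.filter { case (key, _) => key != "SPARK_HOME" }

  def main(args: Array[String]): Unit = {
    val env = Map("SPARK_HOME" -> "/opt/spark", "SPARK_ENV_LOADED" -> "1")
    println(filterEnv(env)) // SPARK_HOME is gone, SPARK_ENV_LOADED survives
  }
}
```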

Author: Timothy Chen 

Closes #10332 from tnachen/scheduler_ui.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/ad8c1f0b
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/ad8c1f0b
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/ad8c1f0b

Branch: refs/heads/master
Commit: ad8c1f0b840284d05da737fb2cc5ebf8848f4490
Parents: 26d70bd
Author: Timothy Chen 
Authored: Wed Dec 16 10:54:15 2015 -0800
Committer: Andrew Or 
Committed: Wed Dec 16 10:54:15 2015 -0800

--
 .../org/apache/spark/deploy/rest/mesos/MesosRestServer.scala  | 7 ++-
 .../spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala   | 2 +-
 2 files changed, 7 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/ad8c1f0b/core/src/main/scala/org/apache/spark/deploy/rest/mesos/MesosRestServer.scala
--
diff --git a/core/src/main/scala/org/apache/spark/deploy/rest/mesos/MesosRestServer.scala b/core/src/main/scala/org/apache/spark/deploy/rest/mesos/MesosRestServer.scala
index 868cc35..24510db 100644
--- a/core/src/main/scala/org/apache/spark/deploy/rest/mesos/MesosRestServer.scala
+++ b/core/src/main/scala/org/apache/spark/deploy/rest/mesos/MesosRestServer.scala
@@ -94,7 +94,12 @@ private[mesos] class MesosSubmitRequestServlet(
 val driverMemory = sparkProperties.get("spark.driver.memory")
 val driverCores = sparkProperties.get("spark.driver.cores")
 val appArgs = request.appArgs
-val environmentVariables = request.environmentVariables
+// We don't want to pass down SPARK_HOME when launching Spark apps
+// with Mesos cluster mode since it's populated by default on the client and it will
+// cause the spark-submit script to look for files in SPARK_HOME instead.
+// We only need the ability to specify where to find the spark-submit script,
+// which users can set via the spark.executor.home or spark.home configurations.
+val environmentVariables = request.environmentVariables.filter(!_.equals("SPARK_HOME"))
 val name = request.sparkProperties.get("spark.app.name").getOrElse(mainClass)
 
 // Construct driver description
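One subtlety worth noting about the hunk above: on a Scala `Map[String, String]`, `filter`'s predicate receives a key-value pair, so comparing the pair itself against the string `"SPARK_HOME"` behaves differently from comparing the key. A standalone sketch of the distinction (hypothetical values, not Spark code):

```scala
// Demonstrates filter-on-tuple vs filter-on-key for a Map of env vars.
object MapFilterPitfall {
  val env = Map(
    "SPARK_HOME" -> "/opt/spark",
    "MESOS_NATIVE_LIBRARY" -> "/usr/lib/libmesos.so")

  // filter's predicate on a Map sees (key, value) tuples; a tuple never
  // equals the bare string "SPARK_HOME", so this removes nothing.
  def tupleFiltered: Map[String, String] = env.filter(!_.equals("SPARK_HOME"))

  // Matching on the key actually drops the SPARK_HOME entry.
  def keyFiltered: Map[String, String] = env.filter { case (k, _) => k != "SPARK_HOME" }

  def main(args: Array[String]): Unit = {
    println(tupleFiltered.size) // 2: nothing removed
    println(keyFiltered.size)   // 1: SPARK_HOME dropped
  }
}
```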

http://git-wip-us.apache.org/repos/asf/spark/blob/ad8c1f0b/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala
--
diff --git a/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala b/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala
index 721861f..573355b 100644
--- a/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala
+++ b/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala
@@ -34,7 +34,7 @@ import org.apache.spark.util.Utils
 
 /**
  * Shared trait for implementing a Mesos Scheduler. This holds common state and helper
- * methods and Mesos scheduler will use.
+ * methods the Mesos scheduler will use.
  */
 private[mesos] trait MesosSchedulerUtils extends Logging {
   // Lock used to wait for scheduler to be registered


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-12345][MESOS] Filter SPARK_HOME when submitting Spark jobs with Mesos cluster mode.

2015-12-16 Thread andrewor14
Repository: spark
Updated Branches:
  refs/heads/branch-1.6 f81512729 -> e5b85713d


[SPARK-12345][MESOS] Filter SPARK_HOME when submitting Spark jobs with Mesos 
cluster mode.

SPARK_HOME is now causing problems with Mesos cluster mode, since the 
spark-submit script was recently changed to take precedence when running 
spark-class scripts by looking in SPARK_HOME if it's defined.

We should skip passing SPARK_HOME from the Spark client in cluster mode with 
Mesos, since Mesos shouldn't use this configuration but should use 
spark.executor.home instead.

Author: Timothy Chen 

Closes #10332 from tnachen/scheduler_ui.

(cherry picked from commit ad8c1f0b840284d05da737fb2cc5ebf8848f4490)
Signed-off-by: Andrew Or 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/e5b85713
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/e5b85713
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/e5b85713

Branch: refs/heads/branch-1.6
Commit: e5b85713d8a0dbbb1a0a07481f5afa6c5098147f
Parents: f815127
Author: Timothy Chen 
Authored: Wed Dec 16 10:54:15 2015 -0800
Committer: Andrew Or 
Committed: Wed Dec 16 10:55:25 2015 -0800

--
 .../org/apache/spark/deploy/rest/mesos/MesosRestServer.scala  | 7 ++-
 .../spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala   | 2 +-
 2 files changed, 7 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/e5b85713/core/src/main/scala/org/apache/spark/deploy/rest/mesos/MesosRestServer.scala
--
diff --git a/core/src/main/scala/org/apache/spark/deploy/rest/mesos/MesosRestServer.scala b/core/src/main/scala/org/apache/spark/deploy/rest/mesos/MesosRestServer.scala
index 868cc35..7c01ae4 100644
--- a/core/src/main/scala/org/apache/spark/deploy/rest/mesos/MesosRestServer.scala
+++ b/core/src/main/scala/org/apache/spark/deploy/rest/mesos/MesosRestServer.scala
@@ -94,7 +94,12 @@ private[mesos] class MesosSubmitRequestServlet(
 val driverMemory = sparkProperties.get("spark.driver.memory")
 val driverCores = sparkProperties.get("spark.driver.cores")
 val appArgs = request.appArgs
-val environmentVariables = request.environmentVariables
+// We don't want to pass down SPARK_HOME when launching Spark apps with Mesos cluster mode
+// since it's populated by default on the client and it will cause the spark-submit script to
+// look for files in SPARK_HOME instead. We only need the ability to specify where to find
+// the spark-submit script, which users can set via the spark.executor.home or spark.home
+// configurations (SPARK-12345).
+val environmentVariables = request.environmentVariables.filter(!_.equals("SPARK_HOME"))
 val name = request.sparkProperties.get("spark.app.name").getOrElse(mainClass)
 
 // Construct driver description

http://git-wip-us.apache.org/repos/asf/spark/blob/e5b85713/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala
--
diff --git a/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala b/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala
index 721861f..573355b 100644
--- a/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala
+++ b/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala
@@ -34,7 +34,7 @@ import org.apache.spark.util.Utils
 
 /**
  * Shared trait for implementing a Mesos Scheduler. This holds common state and helper
- * methods and Mesos scheduler will use.
+ * methods the Mesos scheduler will use.
  */
 private[mesos] trait MesosSchedulerUtils extends Logging {
   // Lock used to wait for scheduler to be registered

