[jira] [Updated] (SPARK-11327) spark-dispatcher doesn't pass along some spark properties

2016-04-04 Thread Andrew Or (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-11327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Or updated SPARK-11327:
--
Fix Version/s: 1.6.2

> spark-dispatcher doesn't pass along some spark properties
> -
>
> Key: SPARK-11327
> URL: https://issues.apache.org/jira/browse/SPARK-11327
> Project: Spark
>  Issue Type: Bug
>  Components: Mesos
>Reporter: Alan Braithwaite
>Assignee: Jo Voordeckers
> Fix For: 1.6.2, 2.0.0
>
>
> I haven't figured out exactly what's going on yet, but there's something in 
> the spark-dispatcher which is failing to pass along properties to the 
> spark-driver when using spark-submit in a clustered mesos docker environment.
> Most importantly, it's not passing along spark.mesos.executor.docker.image.
> cli:
> {code}
> docker run -t -i --rm --net=host 
> --entrypoint=/usr/local/spark/bin/spark-submit 
> docker.example.com/spark:2015.10.2 --conf spark.driver.memory=8G --conf 
> spark.mesos.executor.docker.image=docker.example.com/spark:2015.10.2 --master 
> mesos://spark-dispatcher.example.com:31262 --deploy-mode cluster 
> --properties-file /usr/local/spark/conf/spark-defaults.conf --class 
> com.example.spark.streaming.MyApp 
> http://jarserver.example.com:8000/sparkapp.jar zk1.example.com:2181 
> spark-testing my-stream 40
> {code}
> submit output:
> {code}
> 15/10/26 22:03:53 INFO RestSubmissionClient: Submitting a request to launch 
> an application in mesos://compute1.example.com:31262.
> 15/10/26 22:03:53 DEBUG RestSubmissionClient: Sending POST request to server 
> at http://compute1.example.com:31262/v1/submissions/create:
> {
>   "action" : "CreateSubmissionRequest",
>   "appArgs" : [ "zk1.example.com:2181", "spark-testing", "requests", "40" ],
>   "appResource" : "http://jarserver.example.com:8000/sparkapp.jar",
>   "clientSparkVersion" : "1.5.0",
>   "environmentVariables" : {
> "SPARK_SCALA_VERSION" : "2.10",
> "SPARK_CONF_DIR" : "/usr/local/spark/conf",
> "SPARK_HOME" : "/usr/local/spark",
> "SPARK_ENV_LOADED" : "1"
>   },
>   "mainClass" : "com.example.spark.streaming.MyApp",
>   "sparkProperties" : {
> "spark.serializer" : "org.apache.spark.serializer.KryoSerializer",
> "spark.executorEnv.MESOS_NATIVE_JAVA_LIBRARY" : 
> "/usr/local/lib/libmesos.so",
> "spark.history.fs.logDirectory" : "hdfs://hdfsha.example.com/spark/logs",
> "spark.eventLog.enabled" : "true",
> "spark.driver.maxResultSize" : "0",
> "spark.mesos.deploy.recoveryMode" : "ZOOKEEPER",
> "spark.mesos.deploy.zookeeper.url" : 
> "zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181,zk4.example.com:2181,zk5.example.com:2181",
> "spark.jars" : "http://jarserver.example.com:8000/sparkapp.jar",
> "spark.driver.supervise" : "false",
> "spark.app.name" : "com.example.spark.streaming.MyApp",
> "spark.driver.memory" : "8G",
> "spark.logConf" : "true",
> "spark.deploy.zookeeper.dir" : "/spark_mesos_dispatcher",
> "spark.mesos.executor.docker.image" : 
> "docker.example.com/spark-prod:2015.10.2",
> "spark.submit.deployMode" : "cluster",
> "spark.master" : "mesos://compute1.example.com:31262",
> "spark.executor.memory" : "8G",
> "spark.eventLog.dir" : "hdfs://hdfsha.example.com/spark/logs",
> "spark.mesos.docker.executor.network" : "HOST",
> "spark.mesos.executor.home" : "/usr/local/spark"
>   }
> }
> 15/10/26 22:03:53 DEBUG RestSubmissionClient: Response from the server:
> {
>   "action" : "CreateSubmissionResponse",
>   "serverSparkVersion" : "1.5.0",
>   "submissionId" : "driver-20151026220353-0011",
>   "success" : true
> }
> 15/10/26 22:03:53 INFO RestSubmissionClient: Submission successfully created 
> as driver-20151026220353-0011. Polling submission state...
> 15/10/26 22:03:53 INFO RestSubmissionClient: Submitting a request for the 
> status of submission driver-20151026220353-0011 in 
> mesos://compute1.example.com:31262.
> 15/10/26 22:03:53 DEBUG RestSubmissionClient: Sending GET request to server 
> at 
> http://compute1.example.com:31262/v1/submissions/status/driver-20151026220353-0011.
> 15/10/26 22:03:53 DEBUG RestSubmissionClient: Response from the server:
> {
>   "action" : "SubmissionStatusResponse",
>   "driverState" : "QUEUED",
>   "serverSparkVersion" : "1.5.0",
>   "submissionId" : "driver-20151026220353-0011",
>   "success" : true
> }
> 15/10/26 22:03:53 INFO RestSubmissionClient: State of driver 
> driver-20151026220353-0011 is now QUEUED.
> 15/10/26 22:03:53 INFO RestSubmissionClient: Server responded with 
> CreateSubmissionResponse:
> {
>   "action" : "CreateSubmissionResponse",
>   "serverSparkVersion" : "1.5.0",
>   "submissionId" : "driver-20151026220353-0011",
>   "success" : true
> }
> {code}
> driver log:
> {code}
> 15/10/26 22:08:08 INFO SparkContext: Running Spark version 1.5.0
> 15/10/26 22:08:08 DEBUG MutableMetricsFactory: field 
> org.apache.hadoop.metrics2.lib.MutableRate 
> org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with 
> annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, 
> sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of 
> successful kerberos logins and latency (milliseconds)])
> {code}
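
For reference, the CreateSubmissionRequest in the submit output above can be reconstructed as plain JSON. The sketch below (Python, stdlib only; host names and the jar URL are the example.com placeholders from the report, and the function is illustrative, not the dispatcher's actual client code) builds the same payload and highlights the property that the client sends but the dispatcher fails to forward to the driver:

```python
import json

# Illustrative reconstruction of the payload spark-submit POSTs to the
# dispatcher's /v1/submissions/create endpoint (see the DEBUG log above).
def create_submission_request(docker_image):
    """Build a CreateSubmissionRequest like the one in the submit output."""
    return {
        "action": "CreateSubmissionRequest",
        "appArgs": ["zk1.example.com:2181", "spark-testing", "requests", "40"],
        "appResource": "http://jarserver.example.com:8000/sparkapp.jar",
        "clientSparkVersion": "1.5.0",
        "mainClass": "com.example.spark.streaming.MyApp",
        "sparkProperties": {
            "spark.master": "mesos://compute1.example.com:31262",
            "spark.submit.deployMode": "cluster",
            # The property the report says never reaches the driver:
            "spark.mesos.executor.docker.image": docker_image,
        },
    }

req = create_submission_request("docker.example.com/spark-prod:2015.10.2")
# The client clearly includes the property in sparkProperties; the bug is
# that the dispatcher does not propagate it when launching the driver.
assert "spark.mesos.executor.docker.image" in req["sparkProperties"]
print(json.dumps(req, indent=2))
```

This only reproduces the request shape for inspection; it does not talk to a dispatcher.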

[jira] [Updated] (SPARK-11327) spark-dispatcher doesn't pass along some spark properties

2016-04-04 Thread Andrew Or (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-11327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Or updated SPARK-11327:
--
Target Version/s: 1.6.2, 2.0.0  (was: 2.0.0)

> (full issue description and logs identical to those quoted above)

[jira] [Updated] (SPARK-11327) spark-dispatcher doesn't pass along some spark properties

2016-04-02 Thread Sean Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-11327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Owen updated SPARK-11327:
--
Assignee: Jo Voordeckers

> (full issue description and logs identical to those quoted above)

[jira] [Updated] (SPARK-11327) spark-dispatcher doesn't pass along some spark properties

2016-01-20 Thread Alan Braithwaite (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-11327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Braithwaite updated SPARK-11327:
-
Description: (full description and logs identical to those quoted above)

[jira] [Updated] (SPARK-11327) spark-dispatcher doesn't pass along some spark properties

2015-10-27 Thread Sean Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-11327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Owen updated SPARK-11327:
--
Component/s: Mesos

[~abraithwaite] setting the component would help here 
(https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark)

> (full issue description and logs identical to those quoted above)

[jira] [Updated] (SPARK-11327) spark-dispatcher doesn't pass along some spark properties

2015-10-26 Thread Alan Braithwaite (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-11327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Braithwaite updated SPARK-11327:
-
Description: (full description and logs identical to those quoted above)

[jira] [Updated] (SPARK-11327) spark-dispatcher doesn't pass along some spark properties

2015-10-26 Thread Alan Braithwaite (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-11327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Braithwaite updated SPARK-11327:
-
Description: 
I haven't figured out exactly what's going on yet, but there's something in the 
spark-dispatcher which is failing to pass along properties to the spark-driver 
when using spark-submit in a clustered mesos docker environment.

Most importantly, it's not passing along spark.mesos.executor.docker.image...

cli:
{code}
docker run -t -i --rm --net=host --entrypoint=/usr/local/spark/bin/spark-submit 
docker.example.com/spark:2015.10.2 --conf spark.driver.memory=8G --conf 
spark.mesos.executor.docker.image=docker.example.com/spark:2015.10.2 --master 
mesos://spark-dispatcher.example.com:31262 --deploy-mode cluster 
--properties-file /usr/local/spark/conf/spark-defaults.conf --class 
com.example.spark.streaming.MyApp 
http://jarserver.example.com:8000/sparkapp.jar zk1.example.com:2181 
spark-testing my-stream 40
{code}

submit output:
{code}
15/10/26 22:03:53 INFO RestSubmissionClient: Submitting a request to launch an 
application in mesos://compute1.example.com:31262.
15/10/26 22:03:53 DEBUG RestSubmissionClient: Sending POST request to server at 
http://compute1.example.com:31262/v1/submissions/create:
{
  "action" : "CreateSubmissionRequest",
  "appArgs" : [ "zk1.example.com:2181", "spark-testing", "requests", "40" ],
  "appResource" : "http://jarserver.example.com:8000/sparkapp.jar;,
  "clientSparkVersion" : "1.5.0",
  "environmentVariables" : {
"SPARK_SCALA_VERSION" : "2.10",
"SPARK_CONF_DIR" : "/usr/local/spark/conf",
"SPARK_HOME" : "/usr/local/spark",
"SPARK_ENV_LOADED" : "1"
  },
  "mainClass" : "com.example.spark.streaming.MyApp",
  "sparkProperties" : {
"spark.serializer" : "org.apache.spark.serializer.KryoSerializer",
"spark.executorEnv.MESOS_NATIVE_JAVA_LIBRARY" : 
"/usr/local/lib/libmesos.so",
"spark.history.fs.logDirectory" : "hdfs://hdfsha.example.com/spark/logs",
"spark.eventLog.enabled" : "true",
"spark.driver.maxResultSize" : "0",
"spark.mesos.deploy.recoveryMode" : "ZOOKEEPER",
"spark.mesos.deploy.zookeeper.url" : 
"zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181,zk4.example.com:2181,zk5.example.com:2181",
"spark.jars" : "http://jarserver.example.com:8000/sparkapp.jar;,
"spark.driver.supervise" : "false",
"spark.app.name" : "com.example.spark.streaming.MyApp",
"spark.driver.memory" : "8G",
"spark.logConf" : "true",
"spark.deploy.zookeeper.dir" : "/spark_mesos_dispatcher",
"spark.mesos.executor.docker.image" : 
"docker.example.com/spark-prod:2015.10.2",
"spark.submit.deployMode" : "cluster",
"spark.master" : "mesos://compute1.example.com:31262",
"spark.executor.memory" : "8G",
"spark.eventLog.dir" : "hdfs://hdfsha.example.com/spark/logs",
"spark.mesos.docker.executor.network" : "HOST",
"spark.mesos.executor.home" : "/usr/local/spark"
  }
}
15/10/26 22:03:53 DEBUG RestSubmissionClient: Response from the server:
{
  "action" : "CreateSubmissionResponse",
  "serverSparkVersion" : "1.5.0",
  "submissionId" : "driver-20151026220353-0011",
  "success" : true
}
15/10/26 22:03:53 INFO RestSubmissionClient: Submission successfully created as 
driver-20151026220353-0011. Polling submission state...
15/10/26 22:03:53 INFO RestSubmissionClient: Submitting a request for the 
status of submission driver-20151026220353-0011 in 
mesos://compute1.example.com:31262.
15/10/26 22:03:53 DEBUG RestSubmissionClient: Sending GET request to server at 
http://compute1.example.com:31262/v1/submissions/status/driver-20151026220353-0011.
15/10/26 22:03:53 DEBUG RestSubmissionClient: Response from the server:
{
  "action" : "SubmissionStatusResponse",
  "driverState" : "QUEUED",
  "serverSparkVersion" : "1.5.0",
  "submissionId" : "driver-20151026220353-0011",
  "success" : true
}
15/10/26 22:03:53 INFO RestSubmissionClient: State of driver 
driver-20151026220353-0011 is now QUEUED.
15/10/26 22:03:53 INFO RestSubmissionClient: Server responded with 
CreateSubmissionResponse:
{
  "action" : "CreateSubmissionResponse",
  "serverSparkVersion" : "1.5.0",
  "submissionId" : "driver-20151026220353-0011",
  "success" : true
}
{code}
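The submission path above can be exercised without spark-submit by POSTing the same JSON to the dispatcher's REST endpoint, which helps isolate where the property gets dropped. A minimal sketch (the endpoint and field names are copied from the DEBUG payload above; the hosts are this report's placeholders, and {{build_create_submission}}/{{submit}} are hypothetical helpers, not a Spark client API):

```python
import json
from urllib.request import Request, urlopen

DISPATCHER = "http://compute1.example.com:31262"  # placeholder host from the logs above


def build_create_submission(app_resource, main_class, app_args, spark_props):
    """Assemble the same CreateSubmissionRequest payload that spark-submit
    logs before POSTing it to /v1/submissions/create."""
    return {
        "action": "CreateSubmissionRequest",
        "appResource": app_resource,
        "mainClass": main_class,
        "appArgs": list(app_args),
        "clientSparkVersion": "1.5.0",
        "environmentVariables": {"SPARK_ENV_LOADED": "1"},
        "sparkProperties": dict(spark_props),
    }


def submit(payload):
    """POST the payload to the dispatcher and return the parsed response."""
    req = Request(
        DISPATCHER + "/v1/submissions/create",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


payload = build_create_submission(
    app_resource="http://jarserver.example.com:8000/sparkapp.jar",
    main_class="com.example.spark.streaming.MyApp",
    app_args=["zk1.example.com:2181", "spark-testing", "my-stream", "40"],
    spark_props={
        "spark.master": "mesos://compute1.example.com:31262",
        "spark.submit.deployMode": "cluster",
        "spark.mesos.executor.docker.image": "docker.example.com/spark:2015.10.2",
    },
)
print(json.dumps(payload, indent=2))
# submit(payload) would send it; not executed here since the dispatcher in
# this ticket is a private host.
```

If the docker image is correct in this hand-built request but still wrong on the launched driver, the loss is on the dispatcher side rather than in spark-submit's client-side property merging.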

driver log:
{code}
15/10/26 22:08:08 INFO SparkContext: Running Spark version 1.5.0
15/10/26 22:08:08 DEBUG MutableMetricsFactory: field 
org.apache.hadoop.metrics2.lib.MutableRate 
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with 
annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, 
sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of 
successful kerberos logins and latency (milliseconds)])
15/10/26 22:08:08 DEBUG MutableMetricsFactory: field 
org.apache.hadoop.metrics2.lib.MutableRate 
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with 
annotation 

[jira] [Updated] (SPARK-11327) spark-dispatcher doesn't pass along some spark properties

2015-10-26 Thread Alan Braithwaite (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-11327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Braithwaite updated SPARK-11327:
-
Description: 
I haven't figured out exactly what's going on yet, but something in the 
spark-dispatcher is failing to pass properties along to the spark driver 
when using spark-submit in a clustered Mesos Docker environment.

Most importantly, it's not passing along spark.mesos.executor.docker.image: the 
submitted request below carries docker.example.com/spark-prod:2015.10.2 rather 
than the docker.example.com/spark:2015.10.2 given on the command line via --conf.

{code}
docker run -t -i --rm --net=host --entrypoint=/usr/local/spark/bin/spark-submit 
docker.example.com/spark:2015.10.2 --conf spark.driver.memory=8G --conf 
spark.mesos.executor.docker.image=docker.example.com/spark:2015.10.2 --master 
mesos://spark-dispatcher.example.com:31262 --deploy-mode cluster 
--properties-file /usr/local/spark/conf/spark-defaults.conf --class 
com.example.spark.streaming.MyApp 
http://jarserver.example.com:8000/sparkapp.jar zk1.example.com:2181 
spark-testing my-stream 40
{code}

{code}
15/10/26 22:03:53 INFO RestSubmissionClient: Submitting a request to launch an 
application in mesos://compute1.example.com:31262.
15/10/26 22:03:53 DEBUG RestSubmissionClient: Sending POST request to server at 
http://compute1.example.com:31262/v1/submissions/create:
{
  "action" : "CreateSubmissionRequest",
  "appArgs" : [ "zk1.example.com:2181", "spark-testing", "requests", "40" ],
  "appResource" : "http://jarserver.example.com:8000/sparkapp.jar",
  "clientSparkVersion" : "1.5.0",
  "environmentVariables" : {
"SPARK_SCALA_VERSION" : "2.10",
"SPARK_CONF_DIR" : "/usr/local/spark/conf",
"SPARK_HOME" : "/usr/local/spark",
"SPARK_ENV_LOADED" : "1"
  },
  "mainClass" : "com.example.spark.streaming.MyApp",
  "sparkProperties" : {
"spark.serializer" : "org.apache.spark.serializer.KryoSerializer",
"spark.executorEnv.MESOS_NATIVE_JAVA_LIBRARY" : 
"/usr/local/lib/libmesos.so",
"spark.history.fs.logDirectory" : "hdfs://hdfsha.example.com/spark/logs",
"spark.eventLog.enabled" : "true",
"spark.driver.maxResultSize" : "0",
"spark.mesos.deploy.recoveryMode" : "ZOOKEEPER",
"spark.mesos.deploy.zookeeper.url" : 
"zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181,zk4.example.com:2181,zk5.example.com:2181",
"spark.jars" : "http://jarserver.example.com:8000/sparkapp.jar",
"spark.driver.supervise" : "false",
"spark.app.name" : "com.example.spark.streaming.MyApp",
"spark.driver.memory" : "8G",
"spark.logConf" : "true",
"spark.deploy.zookeeper.dir" : "/spark_mesos_dispatcher",
"spark.mesos.executor.docker.image" : 
"docker.example.com/spark-prod:2015.10.2",
"spark.submit.deployMode" : "cluster",
"spark.master" : "mesos://compute1.example.com:31262",
"spark.executor.memory" : "8G",
"spark.eventLog.dir" : "hdfs://hdfsha.example.com/spark/logs",
"spark.mesos.docker.executor.network" : "HOST",
"spark.mesos.executor.home" : "/usr/local/spark"
  }
}
15/10/26 22:03:53 DEBUG RestSubmissionClient: Response from the server:
{
  "action" : "CreateSubmissionResponse",
  "serverSparkVersion" : "1.5.0",
  "submissionId" : "driver-20151026220353-0011",
  "success" : true
}
15/10/26 22:03:53 INFO RestSubmissionClient: Submission successfully created as 
driver-20151026220353-0011. Polling submission state...
15/10/26 22:03:53 INFO RestSubmissionClient: Submitting a request for the 
status of submission driver-20151026220353-0011 in 
mesos://compute1.example.com:31262.
15/10/26 22:03:53 DEBUG RestSubmissionClient: Sending GET request to server at 
http://compute1.example.com:31262/v1/submissions/status/driver-20151026220353-0011.
15/10/26 22:03:53 DEBUG RestSubmissionClient: Response from the server:
{
  "action" : "SubmissionStatusResponse",
  "driverState" : "QUEUED",
  "serverSparkVersion" : "1.5.0",
  "submissionId" : "driver-20151026220353-0011",
  "success" : true
}
15/10/26 22:03:53 INFO RestSubmissionClient: State of driver 
driver-20151026220353-0011 is now QUEUED.
15/10/26 22:03:53 INFO RestSubmissionClient: Server responded with 
CreateSubmissionResponse:
{
  "action" : "CreateSubmissionResponse",
  "serverSparkVersion" : "1.5.0",
  "submissionId" : "driver-20151026220353-0011",
  "success" : true
}
{code}

{code}
15/10/26 22:08:08 INFO SparkContext: Running Spark version 1.5.0
15/10/26 22:08:08 DEBUG MutableMetricsFactory: field 
org.apache.hadoop.metrics2.lib.MutableRate 
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with 
annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, 
sampleName=Ops, always=false, type=DEFAULT, valueName=Time, value=[Rate of 
successful kerberos logins and latency (milliseconds)])
15/10/26 22:08:08 DEBUG MutableMetricsFactory: field 
org.apache.hadoop.metrics2.lib.MutableRate 
org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with 
annotation @org.apache.hadoop.metrics2.annotation.Metric(about=, 
