[ 
https://issues.apache.org/jira/browse/SPARK-11638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15011210#comment-15011210
 ] 

Stavros Kontopoulos commented on SPARK-11638:
---------------------------------------------

OK, I tried to verify that, but with no result:

On my host machine:

echo 'docker,mesos' > /etc/mesos-slave/containerizers
echo '5mins' > /etc/mesos-slave/executor_registration_timeout
sudo service mesos-slave restart

curl -X POST -H "Content-Type: application/json" http://localhost:8080/v2/apps -d@docker.json

#copy patched jars
export CONT="$(docker ps | grep mesos |  awk '{print $15; }')"
docker cp spark-core_2.10-1.5.1.jar $CONT:/mnt/mesos/sandbox
docker cp spark-repl_2.10-1.5.1.jar $CONT:/mnt/mesos/sandbox
docker cp akka-remote_2.10-2.3.4.jar $CONT:/mnt/mesos/sandbox  

docker.json:

{
  "container": {
    "type": "DOCKER",
    "name" : "test",
    "docker": {
      "image": "andypetrella/spark-notebook:0.6.2-SNAPSHOT-scala-2.10.4-spark-1.5.1-hadoop-2.6.0",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 9000,
          "hostPort": 0,
          "protocol": "tcp"
        }
      ]
    },

    "volumes": [
      { "containerPath": "/var/spark/spark-1.5.1-bin-custom-spark.tgz",
       "hostPath": "/xxxx/spark-1.5.1-bin-custom-spark.tgz",
       "mode": "RO" }
    ]

  },
  "id": "ubuntu2",
  "instances": 1,
  "cpus": 2,
  "mem": 1024,
  "cmd" : "while sleep 1000; do date -u +%T; done"
}
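
As a side note on the bridge-networking scenario this ticket describes: for the driver-side ports to be reachable from outside the container, the Marathon app definition would presumably also need host-side mappings for them, not only for the notebook UI port. A hypothetical extension of the portMappings above (using the 6666/6677/6688/23456 port numbers from my setup; this is a sketch, not something I have verified end to end):

```json
"portMappings": [
  { "containerPort": 9000,  "hostPort": 0, "protocol": "tcp" },
  { "containerPort": 6666,  "hostPort": 0, "protocol": "tcp" },
  { "containerPort": 6677,  "hostPort": 0, "protocol": "tcp" },
  { "containerPort": 6688,  "hostPort": 0, "protocol": "tcp" },
  { "containerPort": 23456, "hostPort": 0, "protocol": "tcp" }
]
```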

On my host Ubuntu machine I run mesos-master, mesos-slave, and ZooKeeper. 
ZooKeeper listens on *:2181.

I used the image 
"andypetrella/spark-notebook:0.6.2-SNAPSHOT-scala-2.10.4-spark-1.5.1-hadoop-2.6.0".

The docker0 interface has IP 172.17.0.1, and the container's private IP is 
172.17.0.2; they can ping each other.

Next, I run inside the container:

sudo apt-get --only-upgrade install mesos #upgrade to 0.25.0

export AGENT_ADVERTISE_IP=172.17.0.1
export LIBPROCESS_ADVERTISE_IP=$AGENT_ADVERTISE_IP
export LIBPROCESS_ADVERTISE_PORT=9050 # choose your own port
export SPARK_LOCAL_IP=$(ip -o -4 addr list eth0 | perl -n -e 'if (m{inet\s([\d\.]+)\/\d+\s}xms) { print $1 }') # 172.17.0.2
export SPARK_PUBLIC_DNS=$HOST
export SPARK_LOCAL_HOSTNAME=$HOST # thinkpad -> 172.17.0.2

# this is required so Spark can bind to $SPARK_PUBLIC_DNS
my_own_ip=$(grep "$HOSTNAME" /etc/hosts | awk '{print $1}') # 172.17.0.2

echo "$my_own_ip    $SPARK_PUBLIC_DNS" >> /etc/hosts
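
The /etc/hosts trick above can be sketched in isolation, against a temporary file so it is safe to run anywhere. The hostname/IP values are illustrative, matching the container values that appear in my logs further down (0c4fba1664ac / 172.17.0.2):

```shell
# Sketch of the /etc/hosts plumbing above, against a temp file.
hosts_file=$(mktemp)
printf '127.0.0.1 localhost\n172.17.0.2 0c4fba1664ac\n' > "$hosts_file"
HOSTNAME=0c4fba1664ac
SPARK_PUBLIC_DNS=thinkpad
# look up the container's own IP by its hostname
my_own_ip=$(grep "$HOSTNAME" "$hosts_file" | awk '{print $1}')
# map the public DNS name to that IP so Spark can bind to it
echo "$my_own_ip    $SPARK_PUBLIC_DNS" >> "$hosts_file"
entry=$(grep "$SPARK_PUBLIC_DNS" "$hosts_file")
echo "$entry"
rm -f "$hosts_file"
```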

export SPARK_EXECUTOR_URI=/var/spark/spark-1.5.1-bin-custom-spark.tgz

# Make sure the spark-core, spark-repl and akka-remote jars are in the sandbox (add them with uris).
# This is important: these 3 jars have to be placed on the classpath first,
# before any other jar goes in. The Data Fellas Spark Notebook provides the
# CLASSPATH_OVERRIDES env variable especially for that:

export CLASSPATH_OVERRIDES=$MESOS_SANDBOX/akka-remote_2.10-2.3.4.jar:$MESOS_SANDBOX/spark-core_2.10-1.5.1.jar:$MESOS_SANDBOX/spark-repl_2.10-1.5.1.jar
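
To check that the override really puts the three patched jars ahead of everything else, one can split the resulting classpath. A sketch with an illustrative stand-in for the rest of the entries (the real CLASSPATH, visible in the logs further down, is far longer):

```shell
# Sketch: confirm the patched jars lead the classpath.
# /opt/docker/lib/other.jar stands in for the hundreds of real entries.
MESOS_SANDBOX=/mnt/mesos/sandbox
CLASSPATH_OVERRIDES=$MESOS_SANDBOX/akka-remote_2.10-2.3.4.jar:$MESOS_SANDBOX/spark-core_2.10-1.5.1.jar:$MESOS_SANDBOX/spark-repl_2.10-1.5.1.jar
CLASSPATH=$CLASSPATH_OVERRIDES:/opt/docker/lib/other.jar
# one entry per line; the first three must be the patched jars
first_three=$(echo "$CLASSPATH" | tr ':' '\n' | head -3)
echo "$first_three"
```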

export NOTEBOOK_APPLICATION_CONTEXT=/

# Spark notebook settings:
cd /opt/docker && ./bin/spark-notebook \
    -Dapplication.context="$NOTEBOOK_APPLICATION_CONTEXT" \
    -Dmanager.notebooks.custom.args.0="-Dspark.driver.host=$SPARK_PUBLIC_DNS" \
    -Dmanager.notebooks.custom.args.1="-Dspark.driver.port=6666" \
    -Dmanager.notebooks.custom.args.2="-Dspark.driver.advertisedPort=6666" \
    -Dmanager.notebooks.custom.args.3="-Dspark.replClassServer.port=23456" \
    -Dmanager.notebooks.custom.args.4="-Dspark.replClassServer.advertisedPort=23456" \
    -Dmanager.notebooks.custom.sparkConf.spark.executor.uri=${SPARK_EXECUTOR_URI} \
    -Dmanager.notebooks.custom.sparkConf.spark.master=mesos://zk://172.17.0.1:2181/mesos \
    -Dmanager.notebooks.custom.sparkConf.spark.fileserver.port=6677 \
    -Dmanager.notebooks.custom.sparkConf.spark.fileserver.advertisedPort=6677 \
    -Dmanager.notebooks.custom.sparkConf.spark.broadcast.factory=org.apache.spark.broadcast.HttpBroadcastFactory \
    -Dmanager.notebooks.custom.sparkConf.spark.broadcast.port=6688 \
    -Dmanager.notebooks.custom.sparkConf.spark.broadcast.advertisedPort=6688

These settings were not passed through when I opened the notebook, so I had to 
edit the metadata of the notebook directly. For example, I tried 
http://localhost:31590/notebooks/core/Simple%20Spark.snb:

{
  "name": "Simple Spark",
  "user_save_timestamp": "2014-10-11T17:33:45.703Z",
  "auto_save_timestamp": "2015-01-10T00:02:12.659Z",
  "language_info": {
    "name": "scala",
    "file_extension": "scala",
    "codemirror_mode": "text/x-scala"
  },
  "trusted": true,
  "customLocalRepo": null,
  "customRepos": null,
  "customDeps": null,
  "customImports": null,
  "customArgs": [
    "-Dspark.driver.host=172.17.0.1",
    "-Dspark.driver.port=6666",
    "-Dspark.driver.advertisedPort=6666",
    "-Dspark.replClassServer.port=23456",
    "-Dspark.replClassServer.advertisedPort=23456"
  ],
  "customSparkConf": {
    "spark.master": "mesos://zk://172.17.0.1:2181/mesos",
    "spark.executor.uri": "/var/spark/spark-1.5.1-bin-custom-spark.tgz",
    "spark.fileserver.port": "6677",
    "spark.fileserver.advertisedPort": "6677",
    "spark.broadcast.factory": "org.apache.spark.broadcast.HttpBroadcastFactory",
    "spark.broadcast.port": "6688",
    "spark.broadcast.advertisedPort": "6688"
  },
  "kernelspec": {
    "name": "spark",
    "display_name": "Scala [2.10.4] Spark [1.5.1] Hadoop [2.6.0]  "
  }
}

This is the log output on the Spark Notebook side:


[info] application - customSparkConf >> 
{"spark.master":"mesos://zk://172.17.0.1:2181/mesos","spark.executor.uri":"/var/spark/spark-1.5.1-bin-custom-spark.tgz","spark.fileserver.port":"6677","spark.fileserver.advertisedPort":"6677","spark.broadcast.factory":"org.apache.spark.broadcast.HttpBroadcastFactory","spark.broadcast.port":"6688","spark.broadcast.advertisedPort":"6688"}
[info] application - Spawning [/usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java, 
-Xmx1834483712, -XX:MaxPermSize=1073741824, -server, 
-Dspark.driver.host=thinkpad, -Dspark.driver.port=6666, 
-Dspark.driver.advertisedPort=6666, -Dspark.replClassServer.port=23456, 
-Dspark.replClassServer.advertisedPort=23456, 
notebook.kernel.pfork.ChildProcessMain, 
notebook.kernel.remote.RemoteActorProcess, 44225, info, 
1953f7c7-0119-4326-9ad3-d72dbeec846a, "core/Simple Spark.snb", kernel, ]
[info] application - With Env Map(SHLVL -> 1, JAVA_HOME -> 
/usr/lib/jvm/java-7-openjdk-amd64, GREP_OPTIONS -> --color=auto, PORT_9000 -> 
31590, SPARK_LOCAL_HOSTNAME -> thinkpad, PWD -> /opt/docker, HOST -> thinkpad, 
HOSTNAME -> 0c4fba1664ac, MESOS_JAVA_NATIVE_LIBRARY -> 
/usr/local/lib/libmesos-0.22.0.so, PORT -> 31590, CLASSPATH_OVERRIDES -> 
/mnt/mesos/sandbox/akka-remote_2.10-2.3.4.jar:/mnt/mesos/sandbox/spark-core_2.10-1.5.1.jar:/mnt/mesos/sandbox/spark-repl_2.10-1.5.1.jar,
 GIT_PS1_SHOWDIRTYSTATE -> 1, MESOS_TASK_ID -> 
ubuntu2.f13323a6-8e01-11e5-a3ef-361f883cecc0, MESOS_SANDBOX -> 
/mnt/mesos/sandbox, GREP_COLOR -> 1;31, MARATHON_APP_VERSION -> 
2015-11-18T14:37:49.682Z, SPARK_PUBLIC_DNS -> thinkpad, PORT0 -> 31590, 
MARATHON_APP_ID -> /ubuntu2, ADD_JARS -> 
,/opt/docker/lib/common.common-0.6.2-SNAPSHOT-scala-2.10.4-spark-1.5.1-hadoop-2.6.0.jar,
 MESOS_CONTAINER_NAME -> 
mesos-031543d6-8344-4b3d-b294-c6c226287d92-S0.95338bd0-e9f2-4c6c-ba62-f8e5650f1ed7,
 PORTS -> 31590, NOTEBOOK_APPLICATION_CONTEXT -> /, MESOS_LOG_DIR -> 
/var/log/mesos, SPARK_EXECUTOR_URI -> 
/var/spark/spark-1.5.1-bin-custom-spark.tgz, MARATHON_APP_DOCKER_IMAGE -> 
andypetrella/spark-notebook:0.6.2-SNAPSHOT-scala-2.10.4-spark-1.5.1-hadoop-2.6.0,
 CLASSPATH -> 
/mnt/mesos/sandbox/akka-remote_2.10-2.3.4.jar:/mnt/mesos/sandbox/spark-core_2.10-1.5.1.jar:/mnt/mesos/sandbox/spark-repl_2.10-1.5.1.jar:/opt/docker:/opt/docker/lib/nooostab.spark-notebook-0.6.2-SNAPSHOT-scala-2.10.4-spark-1.5.1-hadoop-2.6.0.jar:/opt/docker/lib/tachyon.tachyon-0.6.2-SNAPSHOT-scala-2.10.4-spark-1.5.1-hadoop-2.6.0.jar:/opt/docker/lib/subprocess.subprocess-0.6.2-SNAPSHOT-scala-2.10.4-spark-1.5.1-hadoop-2.6.0.jar:/opt/docker/lib/observable.observable-0.6.2-SNAPSHOT-scala-2.10.4-spark-1.5.1-hadoop-2.6.0.jar:/opt/docker/lib/common.common-0.6.2-SNAPSHOT-scala-2.10.4-spark-1.5.1-hadoop-2.6.0.jar:/opt/docker/lib/spark.spark-0.6.2-SNAPSHOT-scala-2.10.4-spark-1.5.1-hadoop-2.6.0.jar:/opt/docker/lib/kernel.kernel-0.6.2-SNAPSHOT-scala-2.10.4-spark-1.5.1-hadoop-2.6.0.jar:/opt/docker/lib/wisp_2.10-0.0.5.jar:/opt/docker/lib/org.scala-lang.scala-compiler-2.10.4.jar:/opt/docker/lib/org.scala-lang.scala-library-2.10.4.jar:/opt/docker/lib/org.scala-lang.scala-reflect-2.10.4.jar:/opt/docker/lib/com.google.guava.guava-14.0.1.jar:/opt/docker/lib/org.tachyonproject.tachyon-common-0.7.1.jar:/opt/docker/lib/commons-io.commons-io-2.4.jar:/opt/docker/lib/log4j.log4j-1.2.16.jar:/opt/docker/lib/org.apache.thrift.libthrift-0.9.1.jar:/opt/docker/lib/org.apache.httpcomponents.httpclient-4.2.5.jar:/opt/docker/lib/org.apache.httpcomponents.httpcore-4.2.4.jar:/opt/docker/lib/org.tachyonproject.tachyon-client-0.7.1.jar:/opt/docker/lib/org.tachyonproject.tachyon-underfs-hdfs-0.7.1.jar:/opt/docker/lib/org.tachyonproject.tachyon-underfs-local-0.7.1.jar:/opt/docker/lib/org.tachyonproject.tachyon-servers-0.7.1.jar:/opt/docker/lib/org.tachyonproject.tachyon-client-unshaded-0.7.1.jar:/opt/docker/lib/org.eclipse.jetty.jetty-jsp-7.6.15.v20140411.jar:/opt/docker/lib/org.eclipse.jetty.orbit.javax.servlet.jsp-2.1.0.v201105211820.jar:/opt/docker/lib/org.eclipse.jetty.orbit.org.apache.jasper.glassfish-2.1.0.v201110031002.jar:/opt/docker/lib/org.eclipse.jetty.orbit.javax.servlet.jsp.jstl-1.2.0.v2011052
11821.jar:/opt/docker/lib/org.eclipse.jetty.orbit.org.apache.taglibs.standard.glassfish-1.2.0.v201112081803.jar:/opt/docker/lib/org.eclipse.jetty.orbit.javax.el-2.1.0.v201105211819.jar:/opt/docker/lib/org.eclipse.jetty.orbit.com.sun.el-1.0.0.v201105211818.jar:/opt/docker/lib/org.eclipse.jetty.orbit.org.eclipse.jdt.core-3.7.1.jar:/opt/docker/lib/org.eclipse.jetty.jetty-webapp-7.6.15.v20140411.jar:/opt/docker/lib/org.eclipse.jetty.jetty-xml-7.6.15.v20140411.jar:/opt/docker/lib/org.eclipse.jetty.jetty-util-7.6.15.v20140411.jar:/opt/docker/lib/org.eclipse.jetty.jetty-servlet-7.6.15.v20140411.jar:/opt/docker/lib/org.eclipse.jetty.jetty-security-7.6.15.v20140411.jar:/opt/docker/lib/org.eclipse.jetty.jetty-server-7.6.15.v20140411.jar:/opt/docker/lib/org.eclipse.jetty.jetty-continuation-7.6.15.v20140411.jar:/opt/docker/lib/org.eclipse.jetty.jetty-http-7.6.15.v20140411.jar:/opt/docker/lib/org.eclipse.jetty.jetty-io-7.6.15.v20140411.jar:/opt/docker/lib/org.tachyonproject.tachyon-minicluster-0.7.1.jar:/opt/docker/lib/org.apache.curator.curator-test-2.1.0-incubating.jar:/opt/docker/lib/org.apache.commons.commons-math-2.2.jar:/opt/docker/lib/com.typesafe.play.play_2.10-2.3.7.jar:/opt/docker/lib/com.typesafe.play.build-link-2.3.7.jar:/opt/docker/lib/com.typesafe.play.play-exceptions-2.3.7.jar:/opt/docker/lib/org.javassist.javassist-3.18.2-GA.jar:/opt/docker/lib/org.scala-stm.scala-stm_2.10-0.7.jar:/opt/docker/lib/com.typesafe.config-1.2.1.jar:/opt/docker/lib/org.joda.joda-convert-1.6.jar:/opt/docker/lib/com.typesafe.play.twirl-api_2.10-1.0.2.jar:/opt/docker/lib/io.netty.netty-3.9.3.Final.jar:/opt/docker/lib/com.typesafe.netty.netty-http-pipelining-1.1.2.jar:/opt/docker/lib/ch.qos.logback.logback-core-1.1.1.jar:/opt/docker/lib/commons-codec.commons-codec-1.10.jar:/opt/docker/lib/xerces.xercesImpl-2.11.0.jar:/opt/docker/lib/xml-apis.xml-apis-1.4.01.jar:/opt/docker/lib/javax.transaction.jta-1.1.jar:/opt/docker/lib/com.typesafe.akka.akka-actor_2.10-2.3.11.jar:/opt/docker/lib/com.type
safe.akka.akka-remote_2.10-2.3.11.jar:/opt/docker/lib/com.google.protobuf.protobuf-java-2.5.0.jar:/opt/docker/lib/org.uncommons.maths.uncommons-maths-1.2.2a.jar:/opt/docker/lib/com.typesafe.akka.akka-slf4j_2.10-2.3.11.jar:/opt/docker/lib/org.apache.commons.commons-exec-1.3.jar:/opt/docker/lib/com.github.fommil.netlib.core-1.1.2.jar:/opt/docker/lib/net.sourceforge.f2j.arpack_combined_all-0.1.jar:/opt/docker/lib/net.sourceforge.f2j.arpack_combined_all-0.1-javadoc.jar:/opt/docker/lib/net.sf.opencsv.opencsv-2.3.jar:/opt/docker/lib/com.github.rwl.jtransforms-2.4.0.jar:/opt/docker/lib/org.spire-math.spire_2.10-0.7.4.jar:/opt/docker/lib/org.spire-math.spire-macros_2.10-0.7.4.jar:/opt/docker/lib/org.apache.spark.spark-core_2.10-1.5.1.jar:/opt/docker/lib/org.apache.avro.avro-mapred-1.7.7-hadoop2.jar:/opt/docker/lib/org.apache.avro.avro-ipc-1.7.7-tests.jar:/opt/docker/lib/org.apache.avro.avro-ipc-1.7.7.jar:/opt/docker/lib/org.apache.avro.avro-1.7.7.jar:/opt/docker/lib/org.codehaus.jackson.jackson-core-asl-1.9.13.jar:/opt/docker/lib/org.codehaus.jackson.jackson-mapper-asl-1.9.13.jar:/opt/docker/lib/org.apache.commons.commons-compress-1.4.1.jar:/opt/docker/lib/org.tukaani.xz-1.0.jar:/opt/docker/lib/com.twitter.chill_2.10-0.5.0.jar:/opt/docker/lib/com.twitter.chill-java-0.5.0.jar:/opt/docker/lib/com.esotericsoftware.kryo.kryo-2.21.jar:/opt/docker/lib/com.esotericsoftware.reflectasm.reflectasm-1.07-shaded.jar:/opt/docker/lib/com.esotericsoftware.minlog.minlog-1.2.jar:/opt/docker/lib/org.objenesis.objenesis-1.2.jar:/opt/docker/lib/org.apache.spark.spark-launcher_2.10-1.5.1.jar:/opt/docker/lib/org.spark-project.spark.unused-1.0.0.jar:/opt/docker/lib/org.apache.spark.spark-network-common_2.10-1.5.1.jar:/opt/docker/lib/io.netty.netty-all-4.0.29.Final.jar:/opt/docker/lib/org.apache.spark.spark-network-shuffle_2.10-1.5.1.jar:/opt/docker/lib/org.apache.spark.spark-unsafe_2.10-1.5.1.jar:/opt/docker/lib/com.google.code.findbugs.jsr305-1.3.9.jar:/opt/docker/lib/net.java.dev.jets3t.jets3t-0
.9.0.jar:/opt/docker/lib/org.eclipse.jetty.orbit.javax.servlet-3.0.0.v201112011016.jar:/opt/docker/lib/org.apache.commons.commons-lang3-3.3.2.jar:/opt/docker/lib/org.apache.commons.commons-math3-3.4.1.jar:/opt/docker/lib/org.slf4j.slf4j-api-1.7.10.jar:/opt/docker/lib/org.slf4j.jul-to-slf4j-1.7.10.jar:/opt/docker/lib/org.slf4j.jcl-over-slf4j-1.7.10.jar:/opt/docker/lib/org.slf4j.slf4j-log4j12-1.7.10.jar:/opt/docker/lib/com.ning.compress-lzf-1.0.3.jar:/opt/docker/lib/org.xerial.snappy.snappy-java-1.1.1.7.jar:/opt/docker/lib/net.jpountz.lz4.lz4-1.3.0.jar:/opt/docker/lib/org.roaringbitmap.RoaringBitmap-0.4.5.jar:/opt/docker/lib/org.json4s.json4s-jackson_2.10-3.2.10.jar:/opt/docker/lib/org.json4s.json4s-core_2.10-3.2.10.jar:/opt/docker/lib/org.json4s.json4s-ast_2.10-3.2.10.jar:/opt/docker/lib/com.thoughtworks.paranamer.paranamer-2.6.jar:/opt/docker/lib/org.scala-lang.scalap-2.10.0.jar:/opt/docker/lib/com.fasterxml.jackson.core.jackson-databind-2.4.4.jar:/opt/docker/lib/com.fasterxml.jackson.core.jackson-core-2.4.4.jar:/opt/docker/lib/com.sun.jersey.jersey-server-1.9.jar:/opt/docker/lib/com.sun.jersey.jersey-core-1.9.jar:/opt/docker/lib/org.apache.mesos.mesos-0.21.1-shaded-protobuf.jar:/opt/docker/lib/com.clearspring.analytics.stream-2.7.0.jar:/opt/docker/lib/io.dropwizard.metrics.metrics-core-3.1.2.jar:/opt/docker/lib/io.dropwizard.metrics.metrics-jvm-3.1.2.jar:/opt/docker/lib/io.dropwizard.metrics.metrics-json-3.1.2.jar:/opt/docker/lib/io.dropwizard.metrics.metrics-graphite-3.1.2.jar:/opt/docker/lib/com.fasterxml.jackson.module.jackson-module-scala_2.10-2.4.4.jar:/opt/docker/lib/com.fasterxml.jackson.core.jackson-annotations-2.4.4.jar:/opt/docker/lib/oro.oro-2.0.8.jar:/opt/docker/lib/net.razorvine.pyrolite-4.4.jar:/opt/docker/lib/net.sf.py4j.py4j-0.8.2.1.jar:/opt/docker/lib/com.jamesmurty.utils.java-xmlbuilder-0.4.jar:/opt/docker/lib/org.apache.spark.spark-yarn_2.10-1.5.1.jar:/opt/docker/lib/org.apache.spark.spark-sql_2.10-1.5.1.jar:/opt/docker/lib/org.apache.spark.spark
-catalyst_2.10-1.5.1.jar:/opt/docker/lib/org.codehaus.janino.janino-2.7.8.jar:/opt/docker/lib/org.codehaus.janino.commons-compiler-2.7.8.jar:/opt/docker/lib/org.apache.parquet.parquet-column-1.7.0.jar:/opt/docker/lib/org.apache.parquet.parquet-common-1.7.0.jar:/opt/docker/lib/org.apache.parquet.parquet-encoding-1.7.0.jar:/opt/docker/lib/org.apache.parquet.parquet-generator-1.7.0.jar:/opt/docker/lib/org.apache.parquet.parquet-hadoop-1.7.0.jar:/opt/docker/lib/org.apache.parquet.parquet-format-2.3.0-incubating.jar:/opt/docker/lib/org.apache.parquet.parquet-jackson-1.7.0.jar:/opt/docker/lib/org.apache.hadoop.hadoop-client-2.6.0.jar:/opt/docker/lib/org.apache.hadoop.hadoop-common-2.6.0.jar:/opt/docker/lib/org.apache.hadoop.hadoop-annotations-2.6.0.jar:/opt/docker/lib/commons-cli.commons-cli-1.2.jar:/opt/docker/lib/xmlenc.xmlenc-0.52.jar:/opt/docker/lib/commons-httpclient.commons-httpclient-3.1.jar:/opt/docker/lib/commons-logging.commons-logging-1.1.3.jar:/opt/docker/lib/commons-net.commons-net-3.1.jar:/opt/docker/lib/commons-collections.commons-collections-3.2.1.jar:/opt/docker/lib/commons-lang.commons-lang-2.6.jar:/opt/docker/lib/commons-configuration.commons-configuration-1.6.jar:/opt/docker/lib/commons-digester.commons-digester-1.8.jar:/opt/docker/lib/commons-beanutils.commons-beanutils-1.7.0.jar:/opt/docker/lib/commons-beanutils.commons-beanutils-core-1.8.0.jar:/opt/docker/lib/com.google.code.gson.gson-2.2.4.jar:/opt/docker/lib/org.apache.hadoop.hadoop-auth-2.6.0.jar:/opt/docker/lib/org.apache.directory.server.apacheds-kerberos-codec-2.0.0-M15.jar:/opt/docker/lib/org.apache.directory.server.apacheds-i18n-2.0.0-M15.jar:/opt/docker/lib/org.apache.directory.api.api-asn1-api-1.0.0-M20.jar:/opt/docker/lib/org.apache.directory.api.api-util-1.0.0-M20.jar:/opt/docker/lib/org.apache.curator.curator-framework-2.6.0.jar:/opt/docker/lib/org.apache.curator.curator-client-2.6.0.jar:/opt/docker/lib/org.apache.zookeeper.zookeeper-3.4.6.jar:/opt/docker/lib/org.apache.curator.curator-
recipes-2.6.0.jar:/opt/docker/lib/org.htrace.htrace-core-3.0.4.jar:/opt/docker/lib/org.apache.hadoop.hadoop-hdfs-2.6.0.jar:/opt/docker/lib/org.mortbay.jetty.jetty-util-6.1.26.jar:/opt/docker/lib/org.apache.hadoop.hadoop-mapreduce-client-app-2.6.0.jar:/opt/docker/lib/org.apache.hadoop.hadoop-mapreduce-client-common-2.6.0.jar:/opt/docker/lib/org.apache.hadoop.hadoop-yarn-common-2.6.0.jar:/opt/docker/lib/org.apache.hadoop.hadoop-yarn-api-2.6.0.jar:/opt/docker/lib/com.sun.jersey.jersey-client-1.9.jar:/opt/docker/lib/org.codehaus.jackson.jackson-jaxrs-1.9.13.jar:/opt/docker/lib/org.codehaus.jackson.jackson-xc-1.9.13.jar:/opt/docker/lib/com.google.inject.guice-3.0.jar:/opt/docker/lib/javax.inject.javax.inject-1.jar:/opt/docker/lib/aopalliance.aopalliance-1.0.jar:/opt/docker/lib/org.sonatype.sisu.inject.cglib-2.2.1-v20090111.jar:/opt/docker/lib/asm.asm-3.2.jar:/opt/docker/lib/com.sun.jersey.jersey-json-1.9.jar:/opt/docker/lib/org.codehaus.jettison.jettison-1.1.jar:/opt/docker/lib/com.sun.jersey.contribs.jersey-guice-1.9.jar:/opt/docker/lib/org.apache.hadoop.hadoop-yarn-client-2.6.0.jar:/opt/docker/lib/org.apache.hadoop.hadoop-mapreduce-client-core-2.6.0.jar:/opt/docker/lib/org.apache.hadoop.hadoop-yarn-server-common-2.6.0.jar:/opt/docker/lib/org.fusesource.leveldbjni.leveldbjni-all-1.8.jar:/opt/docker/lib/org.apache.hadoop.hadoop-mapreduce-client-shuffle-2.6.0.jar:/opt/docker/lib/org.apache.hadoop.hadoop-mapreduce-client-jobclient-2.6.0.jar:/opt/docker/lib/org.apache.spark.spark-repl_2.10-1.5.1.jar:/opt/docker/lib/org.scala-lang.jline-2.10.4.jar:/opt/docker/lib/org.fusesource.jansi.jansi-1.4.jar:/opt/docker/lib/org.apache.spark.spark-bagel_2.10-1.5.1.jar:/opt/docker/lib/org.apache.spark.spark-mllib_2.10-1.5.1.jar:/opt/docker/lib/org.apache.spark.spark-streaming_2.10-1.5.1.jar:/opt/docker/lib/org.apache.spark.spark-graphx_2.10-1.5.1.jar:/opt/docker/lib/org.scalanlp.breeze_2.10-0.11.2.jar:/opt/docker/lib/org.scalanlp.breeze-macros_2.10-0.11.2.jar:/opt/docker/lib/org.jpmml.pm
ml-model-1.1.15.jar:/opt/docker/lib/org.jpmml.pmml-agent-1.1.15.jar:/opt/docker/lib/org.jpmml.pmml-schema-1.1.15.jar:/opt/docker/lib/com.sun.xml.bind.jaxb-impl-2.2.7.jar:/opt/docker/lib/com.sun.xml.bind.jaxb-core-2.2.7.jar:/opt/docker/lib/javax.xml.bind.jaxb-api-2.2.7.jar:/opt/docker/lib/com.sun.istack.istack-commons-runtime-2.16.jar:/opt/docker/lib/com.sun.xml.fastinfoset.FastInfoset-1.2.12.jar:/opt/docker/lib/javax.xml.bind.jsr173_api-1.0.jar:/opt/docker/lib/org.apache.hadoop.hadoop-yarn-server-web-proxy-2.6.0.jar:/opt/docker/lib/org.mortbay.jetty.jetty-6.1.26.jar:/opt/docker/lib/io.reactivex.rxscala_2.10-0.22.0.jar:/opt/docker/lib/io.reactivex.rxjava-1.0.0-rc.5.jar:/opt/docker/lib/org.scalaz.scalaz-core_2.10-7.0.6.jar:/opt/docker/lib/org.scala-sbt.sbt-0.13.8.jar:/opt/docker/lib/org.scala-sbt.main-0.13.8.jar:/opt/docker/lib/org.scala-sbt.actions-0.13.8.jar:/opt/docker/lib/org.scala-sbt.classpath-0.13.8.jar:/opt/docker/lib/org.scala-sbt.launcher-interface-0.13.8.jar:/opt/docker/lib/org.scala-sbt.interface-0.13.8.jar:/opt/docker/lib/org.scala-sbt.io-0.13.8.jar:/opt/docker/lib/org.scala-sbt.control-0.13.8.jar:/opt/docker/lib/org.scala-sbt.completion-0.13.8.jar:/opt/docker/lib/org.scala-sbt.collections-0.13.8.jar:/opt/docker/lib/jline.jline-2.11.jar:/opt/docker/lib/org.scala-sbt.api-0.13.8.jar:/opt/docker/lib/org.scala-sbt.compiler-integration-0.13.8.jar:/opt/docker/lib/org.scala-sbt.incremental-compiler-0.13.8.jar:/opt/docker/lib/org.scala-sbt.logging-0.13.8.jar:/opt/docker/lib/org.scala-sbt.process-0.13.8.jar:/opt/docker/lib/org.scala-sbt.relation-0.13.8.jar:/opt/docker/lib/org.scala-sbt.compile-0.13.8.jar:/opt/docker/lib/org.scala-sbt.classfile-0.13.8.jar:/opt/docker/lib/org.scala-sbt.persist-0.13.8.jar:/opt/docker/lib/org.scala-tools.sbinary.sbinary_2.10-0.4.2.jar:/opt/docker/lib/org.scala-sbt.compiler-ivy-integration-0.13.8.jar:/opt/docker/lib/org.scala-sbt.ivy-0.13.8.jar:/opt/docker/lib/org.scala-sbt.cross-0.13.8.jar:/opt/docker/lib/org.scala-sbt.ivy.ivy-2.3.0-s
bt-fccfbd44c9f64523b61398a0155784dcbaeae28f.jar:/opt/docker/lib/com.jcraft.jsch-0.1.46.jar:/opt/docker/lib/org.scala-sbt.serialization_2.10-0.1.1.jar:/opt/docker/lib/org.scala-lang.modules.scala-pickling_2.10-0.10.0.jar:/opt/docker/lib/org.scalamacros.quasiquotes_2.10-2.0.1.jar:/opt/docker/lib/org.spire-math.jawn-parser_2.10-0.6.0.jar:/opt/docker/lib/org.spire-math.json4s-support_2.10-0.6.0.jar:/opt/docker/lib/org.scala-sbt.run-0.13.8.jar:/opt/docker/lib/org.scala-sbt.task-system-0.13.8.jar:/opt/docker/lib/org.scala-sbt.tasks-0.13.8.jar:/opt/docker/lib/org.scala-sbt.tracking-0.13.8.jar:/opt/docker/lib/org.scala-sbt.cache-0.13.8.jar:/opt/docker/lib/org.scala-sbt.testing-0.13.8.jar:/opt/docker/lib/org.scala-sbt.test-agent-0.13.8.jar:/opt/docker/lib/org.scala-sbt.test-interface-1.0.jar:/opt/docker/lib/org.scala-sbt.main-settings-0.13.8.jar:/opt/docker/lib/org.scala-sbt.apply-macro-0.13.8.jar:/opt/docker/lib/org.scala-sbt.command-0.13.8.jar:/opt/docker/lib/org.scala-sbt.logic-0.13.8.jar:/opt/docker/lib/org.scala-sbt.compiler-interface--bin-0.13.8.jar:/opt/docker/lib/org.scala-sbt.compiler-interface--src-0.13.8.jar:/opt/docker/lib/org.scala-sbt.precompiled-2_8_2-compiler-interface-bin-0.13.8.jar:/opt/docker/lib/org.scala-sbt.precompiled-2_9_2-compiler-interface-bin-0.13.8.jar:/opt/docker/lib/org.scala-sbt.precompiled-2_9_3-compiler-interface-bin-0.13.8.jar:/opt/docker/lib/com.frugalmechanic.fm-sbt-s3-resolver-0.5.0.jar:/opt/docker/lib/com.amazonaws.aws-java-sdk-s3-1.9.0.jar:/opt/docker/lib/com.amazonaws.aws-java-sdk-core-1.9.0.jar:/opt/docker/lib/joda-time.joda-time-2.9.1.jar:/opt/docker/lib/io.continuum.bokeh.bokeh_2.10-0.2.jar:/opt/docker/lib/io.continuum.bokeh.core_2.10-0.2.jar:/opt/docker/lib/com.typesafe.play.play-json_2.10-2.4.0-M1.jar:/opt/docker/lib/com.typesafe.play.play-iteratees_2.10-2.4.0-M1.jar:/opt/docker/lib/com.typesafe.play.play-functional_2.10-2.4.0-M1.jar:/opt/docker/lib/com.typesafe.play.play-datacommons_2.10-2.4.0-M1.jar:/opt/docker/lib/io.continuum.
bokeh.bokehjs_2.10-0.2.jar:/opt/docker/lib/com.github.scala-incubator.io.scala-io-core_2.10-0.4.3.jar:/opt/docker/lib/com.jsuereth.scala-arm_2.10-1.3.jar:/opt/docker/lib/com.github.scala-incubator.io.scala-io-file_2.10-0.4.3.jar:/opt/docker/lib/com.quantifind.sumac_2.10-0.3.0.jar:/opt/docker/lib/com.typesafe.play.play-cache_2.10-2.3.7.jar:/opt/docker/lib/net.sf.ehcache.ehcache-core-2.6.8.jar:/opt/docker/lib/nooostab.spark-notebook-0.6.2-SNAPSHOT-scala-2.10.4-spark-1.5.1-hadoop-2.6.0-assets.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/zipfs.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/dnsns.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/java-atk-wrapper.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/libatk-wrapper.so:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/sunpkcs11.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/icedtea-sound.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/localedata.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/ext/sunjce_provider.jar,
 PATH -> /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin, CLICOLOR 
-> 1, HOME -> /root, SPARK_LOCAL_IP -> 172.17.0.2, AGENT_ADVERTISE_IP -> 
172.17.0.1, LIBPROCESS_ADVERTISE_IP -> 172.17.0.1, LS_COLORS -> 
di=34:ln=35:so=32:pi=33:ex=1;40:bd=34;40:cd=34;40:su=0;40:sg=0;40:tw=0;40:ow=0;40:,
 LIBPROCESS_ADVERTISE_PORT -> 9050)
[info] application - In working directory .
[info] application - Registering first web-socket 
(notebook.server.WebSockWrapperImpl@57ae2ceb) in service 
notebook.server.CalcWebSocketService$CalcActor@6c072545
[info] application - Spawning calculator in service 
notebook.server.CalcWebSocketService$CalcActor@6c072545
[DEBUG] [11/18/2015 14:57:19.073] [main] [EventStream] StandardOutLogger started
[DEBUG] [11/18/2015 14:57:19.263] [main] [EventStream(akka://Remote)] logger 
log1-Slf4jLogger started
[DEBUG] [11/18/2015 14:57:19.265] [main] [EventStream(akka://Remote)] Default 
Loggers started
I1118 14:57:28.938076  3879 logging.cpp:172] INFO level logging started!
2015-11-18 14:57:28,940:3857(0x7f3d18eeb700):ZOO_INFO@log_env@712: Client 
environment:zookeeper.version=zookeeper C client 3.4.5
2015-11-18 14:57:28,940:3857(0x7f3d18eeb700):ZOO_INFO@log_env@716: Client 
environment:host.name=0c4fba1664ac
2015-11-18 14:57:28,940:3857(0x7f3d18eeb700):ZOO_INFO@log_env@723: Client 
environment:os.name=Linux
2015-11-18 14:57:28,940:3857(0x7f3d18eeb700):ZOO_INFO@log_env@724: Client 
environment:os.arch=3.19.0-30-generic
2015-11-18 14:57:28,940:3857(0x7f3d18eeb700):ZOO_INFO@log_env@725: Client 
environment:os.version=#34~14.04.1-Ubuntu SMP Fri Oct 2 22:09:39 UTC 2015
2015-11-18 14:57:28,940:3857(0x7f3d18eeb700):ZOO_INFO@log_env@733: Client 
environment:user.name=(null)
2015-11-18 14:57:28,940:3857(0x7f3d18eeb700):ZOO_INFO@log_env@741: Client 
environment:user.home=/root
2015-11-18 14:57:28,940:3857(0x7f3d18eeb700):ZOO_INFO@log_env@753: Client 
environment:user.dir=/opt/docker
2015-11-18 14:57:28,940:3857(0x7f3d18eeb700):ZOO_INFO@zookeeper_init@786: 
Initiating client connection, host=172.17.0.1:2181 sessionTimeout=10000 
watcher=0x7f3d1f4ad600 sessionId=0 sessionPasswd=<null> context=0x7f3d80001e10 
flags=0
2015-11-18 14:57:28,940:3857(0x7f3d14de2700):ZOO_INFO@check_events@1703: 
initiated connection to server [172.17.0.1:2181]
I1118 14:57:28.940862  3956 sched.cpp:164] Version: 0.25.0
2015-11-18 14:57:28,944:3857(0x7f3d14de2700):ZOO_INFO@check_events@1750: 
session establishment complete on server [172.17.0.1:2181], 
sessionId=0x1511a4b0ab0000b, negotiated timeout=10000
I1118 14:57:28.944319  3950 group.cpp:331] Group process 
(group(1)@172.17.0.1:9050) connected to ZooKeeper
I1118 14:57:28.944707  3950 group.cpp:805] Syncing group operations: queue size 
(joins, cancels, datas) = (0, 0, 0)
I1118 14:57:28.944722  3950 group.cpp:403] Trying to create path '/mesos' in 
ZooKeeper
I1118 14:57:28.948410  3950 detector.cpp:156] Detected a new leader: (id='1')
I1118 14:57:28.948495  3952 group.cpp:674] Trying to get 
'/mesos/json.info_0000000001' in ZooKeeper
I1118 14:57:28.948930  3951 detector.cpp:481] A new leading master 
(UPID=master@172.17.0.1:5050) is detected
I1118 14:57:28.948982  3951 sched.cpp:262] New master detected at 
master@172.17.0.1:5050
I1118 14:57:28.949141  3951 sched.cpp:272] No credentials provided. Attempting 
to register without authentication

At the Mesos master I got:
I1118 16:32:04.114322 11968 master.cpp:2179] Received SUBSCRIBE call for 
framework 'Simple Spark' at 
scheduler-c860460e-8878-4e92-a28e-871890ef2227@172.17.0.1:9050
I1118 16:32:04.114537 11968 master.cpp:2250] Subscribing framework Simple Spark 
with checkpointing disabled and capabilities [  ]
E1118 16:32:04.114892 11975 socket.hpp:174] Shutdown failed on fd=28: Transport 
endpoint is not connected [107]
I1118 16:32:04.115171 11968 master.cpp:1119] Framework 
8236605c-0abd-4562-8061-0520b9448f41-0235 (Simple Spark) at scheduler-

So it seems that something is wrong with the connection from the scheduler to 
the Mesos master... what's the best setup to test this?


> Apache Spark in Docker with Bridge networking / run Spark on Mesos, in Docker 
> with Bridge networking
> ----------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-11638
>                 URL: https://issues.apache.org/jira/browse/SPARK-11638
>             Project: Spark
>          Issue Type: Improvement
>          Components: Mesos, Spark Core
>    Affects Versions: 1.4.0, 1.4.1, 1.5.0, 1.5.1, 1.5.2, 1.6.0
>            Reporter: Radoslaw Gruchalski
>         Attachments: 1.4.0.patch, 1.4.1.patch, 1.5.0.patch, 1.5.1.patch, 
> 1.5.2.patch, 1.6.0-master.patch, 2.3.11.patch, 2.3.4.patch
>
>
> h4. Summary
> Provides {{spark.driver.advertisedPort}}, 
> {{spark.fileserver.advertisedPort}}, {{spark.broadcast.advertisedPort}} and 
> {{spark.replClassServer.advertisedPort}} settings to enable running Spark in 
> Mesos on Docker with Bridge networking. Provides patches for Akka Remote to 
> enable Spark driver advertisement using alternative host and port.
> With these settings, it is possible to run Spark Master in a Docker container 
> and have the executors running on Mesos talk back correctly to such Master.
> The problem is discussed on the Mesos mailing list here: 
> https://mail-archives.apache.org/mod_mbox/mesos-user/201510.mbox/%3CCACTd3c9vjAMXk=bfotj5ljzfrh5u7ix-ghppfqknvg9mkkc...@mail.gmail.com%3E
> h4. Running Spark on Mesos - LIBPROCESS_ADVERTISE_IP opens the door
> In order for the framework to receive orders in the bridged container, Mesos 
> in the container has to register for offers using the IP address of the 
> Agent. Offers are sent by Mesos Master to the Docker container running on a 
> different host, an Agent. Normally, prior to Mesos 0.24.0, {{libprocess}} 
> would advertise itself using the IP address of the container, something like 
> {{172.x.x.x}}. Obviously, Mesos Master can't reach that address, it's a 
> different host, it's a different machine. Mesos 0.24.0 introduced two new 
> properties for {{libprocess}} - {{LIBPROCESS_ADVERTISE_IP}} and 
> {{LIBPROCESS_ADVERTISE_PORT}}. This allows the container to use the Agent's 
> address to register for offers. This was provided mainly for running Mesos in 
> Docker on Mesos.
> h4. Spark - how does the above relate and what is being addressed here?
> Similar to Mesos, out of the box, Spark does not allow advertising its 
> services on ports different from the ones it binds to. Consider the following 
> scenario:
> Spark is running inside a Docker container on Mesos, in bridge networking 
> mode. Assume port {{6666}} for {{spark.driver.port}}, {{6677}} for 
> {{spark.fileserver.port}}, {{6688}} for {{spark.broadcast.port}} and 
> {{23456}} for {{spark.replClassServer.port}}. If such a task is posted to 
> Marathon, Mesos will give 4 ports in the range {{31000-32000}} mapping to the 
> container ports. Starting the executors from such a container results in the 
> executors not being able to communicate back to the Spark Master.
> This happens for two reasons:
> The Spark driver is effectively an {{akka-remote}} system with {{akka.tcp}} 
> transport, and {{akka-remote}} prior to version {{2.4}} can't advertise a 
> port different from the one it is bound to. The relevant settings are 
> documented here: 
> https://github.com/akka/akka/blob/f8c1671903923837f22d0726a955e0893add5e9f/akka-remote/src/main/resources/reference.conf#L345-L376.
>  They do not exist in Akka {{2.3.x}}. The Spark driver will therefore always 
> advertise port {{6666}}, as this is the one {{akka-remote}} is bound to.
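> For reference, the Akka 2.4 settings in question look like the following in 
> HOCON (the values are illustrative, matching the bridged example above):

```
akka.remote.netty.tcp {
  hostname      = "10.0.0.5"   # advertised to peers (hypothetical Agent host IP)
  port          = 31000        # advertised port (hypothetical host-mapped port)
  bind-hostname = "172.17.0.2" # address the driver actually binds to
  bind-port     = 6666         # container-side spark.driver.port
}
```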
> Any URIs the executors use to contact the Spark Master are prepared by the 
> Spark Master and handed over to the executors. These always contain the port 
> number on which the Master itself finds the service. The services are:
> - {{spark.broadcast.port}}
> - {{spark.fileserver.port}}
> - {{spark.replClassServer.port}}
> All of the above ports default to {{0}} (random assignment) but can be 
> specified via the Spark configuration ( {{-Dspark...port}} ). However, they 
> are limited in the same way as {{spark.driver.port}}; in the above example, 
> an executor should not contact the file server on port {{6677}} but rather on 
> the respective {{31xxx}} port assigned by Mesos.
> Spark currently does not allow any of that.
> h4. Taking on the problem, step 1: Spark Driver
> As mentioned above, the Spark Driver is based on {{akka-remote}}. To take on 
> the problem, the {{akka.remote.netty.tcp.bind-hostname}} and 
> {{akka.remote.netty.tcp.bind-port}} settings are a must. Spark does not yet 
> compile with Akka 2.4.x.
> What we want is a backport of the mentioned {{akka-remote}} settings to the 
> {{2.3.x}} versions. These patches are attached to this ticket - the 
> {{2.3.4.patch}} and {{2.3.11.patch}} files provide patches for the respective 
> Akka versions. They add the mentioned settings and ensure they work as 
> documented for Akka 2.4. In other words, they are forward compatible.
> A part of that patch also exists in the patch for Spark, in the 
> {{org.apache.spark.util.AkkaUtils}} class. This is where Spark is creating 
> the driver and assembling the Akka configuration. That part of the patch 
> tells Akka to use {{bind-hostname}} instead of {{hostname}} if 
> {{spark.driver.advertisedHost}} is given, and {{bind-port}} instead of 
> {{port}} if {{spark.driver.advertisedPort}} is given. In such cases, 
> {{hostname}} and {{port}} are set to the advertised values, respectively.
> *Worth mentioning:* if {{spark.driver.advertisedHost}} or 
> {{spark.driver.advertisedPort}} isn't given, patched Spark reverts to using 
> the settings exactly as a non-patched {{akka-remote}} would expect them, 
> precisely so that it still works when no patched {{akka-remote}} is in use. 
> Even if one is in use, {{akka-remote}} will correctly handle undefined 
> {{bind-hostname}} and {{bind-port}}, as specified by Akka 2.4.x.
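> To make the selection rule concrete, here is a minimal sketch of that logic 
> (hypothetical Python pseudocode of the behaviour, not the actual Scala 
> {{AkkaUtils}} code; the function name is made up):

```python
def akka_tcp_settings(bind_host, bind_port, advertised_host=None, advertised_port=None):
    """Sketch of the patched selection rule: advertised values, when given,
    become `hostname`/`port`, while the real bind values move to
    `bind-hostname`/`bind-port`; with no advertised values only the plain
    settings are emitted, so an unpatched akka-remote keeps working."""
    settings = {
        "akka.remote.netty.tcp.hostname": advertised_host or bind_host,
        "akka.remote.netty.tcp.port": advertised_port if advertised_port is not None else bind_port,
    }
    if advertised_host is not None or advertised_port is not None:
        settings["akka.remote.netty.tcp.bind-hostname"] = bind_host
        settings["akka.remote.netty.tcp.bind-port"] = bind_port
    return settings
```

> For the bridged example, {{akka_tcp_settings("172.17.0.2", 6666, "10.0.0.5", 
> 31000)}} advertises {{10.0.0.5:31000}} while binding to {{172.17.0.2:6666}}.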
> h5. Akka versions in Spark (attached patches only)
> - Akka 2.3.4
>  - Spark 1.4.0
>  - Spark 1.4.1
> - Akka 2.3.11
>  - Spark 1.5.0
>  - Spark 1.5.1
>  - Spark 1.6.0-SNAPSHOT
> h4. Taking on the problem, step 2: Spark services
> The fortunate thing is that every other Spark service runs over HTTP, using 
> the {{org.apache.spark.HttpServer}} class. This is where the second part of 
> the Spark patch comes into play. All other changes in the patch files provide 
> alternative {{advertised...}} port settings for each of the following 
> services:
> - {{spark.broadcast.port}} -> {{spark.broadcast.advertisedPort}}
> - {{spark.fileserver.port}} -> {{spark.fileserver.advertisedPort}}
> - {{spark.replClassServer.port}} -> {{spark.replClassServer.advertisedPort}}
> What we are telling Spark here is the following: if an alternative 
> {{advertisedPort}} setting is given to this server instance, use that setting 
> when advertising the port.
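> Putting both parts together, a driver in the bridged container from the 
> earlier example would be configured with the bind ports plus the matching 
> advertised, host-mapped values, e.g. (the host IP {{10.0.0.5}} and the 
> {{31xxx}} ports are hypothetical):

```
-Dspark.driver.port=6666 -Dspark.driver.advertisedHost=10.0.0.5 -Dspark.driver.advertisedPort=31000
-Dspark.fileserver.port=6677 -Dspark.fileserver.advertisedPort=31001
-Dspark.broadcast.port=6688 -Dspark.broadcast.advertisedPort=31002
-Dspark.replClassServer.port=23456 -Dspark.replClassServer.advertisedPort=31003
```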
> h4. Patches
> These patches are cleared by the Technicolor IP&L Team to be contributed to 
> Spark under the Apache 2.0 License.
> All patches for versions from {{1.4.0}} to {{1.5.2}} can be applied directly 
> to the respective tag from Spark git repository. The {{1.6.0-master.patch}} 
> applies to git sha {{18350a57004eb87cafa9504ff73affab4b818e06}}.
> h4. Building Akka
> To build the required Akka version:
> {noformat}
> AKKA_VERSION=2.3.4
> git clone https://github.com/akka/akka.git .
> git fetch origin
> git checkout v${AKKA_VERSION}
> git apply ...2.3.4.patch
> sbt package -Dakka.scaladoc.diagrams=false
> {noformat}
> h4. What is not supplied
> At the moment of contribution, we do not supply any unit tests. We would like 
> to contribute those, but we may require some assistance.
> =====
> Happy to answer any questions, and looking forward to any guidance that would 
> lead to having these changes included in Spark master.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
