Hi everybody,

I am testing the use of Docker for executing Spark algorithms on Mesos. I
managed to run Spark in client mode with the executors inside Docker, but I
wanted to go further and also have my driver running in a Docker container.
There I ran into a behavior that I'm not sure is normal; let me try to
explain.

I submit my Spark application through MesosClusterDispatcher with a command
like:

$ ./bin/spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master mesos://spark-master-1:7077 \
    --deploy-mode cluster \
    --conf spark.mesos.executor.docker.image=myuser/myimage:0.0.2 \
    https://storage.googleapis.com/some-bucket/spark-examples-1.5.2-hadoop2.6.0.jar 10
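For context, the dispatcher itself is started with the standard script that
ships with Spark, roughly like this (the Mesos master address is a
placeholder for my actual setup):

$ ./sbin/start-mesos-dispatcher.sh --master mesos://<mesos-master-host>:5050

and spark-submit then targets the dispatcher on its default port 7077, as
shown above.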

My driver runs fine inside its Docker container, but the executors fail
with:

"sh: /some/spark/home/bin/spark-class: No such file or directory"

Looking at the Mesos slave logs, I believe the executors are not being run
inside Docker at all: "docker.cpp:775] No container info found, skipping
launch". Since my Mesos slaves do not have Spark installed locally, the
launch fails.
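One way to check this (the container id is hypothetical; find it with
docker ps on the agent that launched the driver) is to look at the
arguments the containerized driver was actually started with:

$ docker ps                                  # locate the driver container
$ docker exec <driver-container-id> sh -c 'ps aux | grep spark-submit'

If spark.mesos.executor.docker.image does not appear among the --conf / -D
arguments of the driver process, the property was indeed dropped on the way
to the driver.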

*It seems that the Spark configuration I pass to the initial spark-submit
is not propagated to the spark-submit that launches the driver* inside the
Docker container. The only workaround I have found is to modify my Docker
image so that its own Spark configuration defines the
spark.mesos.executor.docker.image property. With that in place, the
executors pick up the setting and are launched inside Docker on Mesos (see
the sketch below). This seems needlessly complicated to me; I would expect
the configuration passed to the initial spark-submit to be forwarded to the
driver's submit...
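Concretely, the workaround looks something like this in the image build
(the base image is a placeholder; the Spark home path matches the one in
the error above, and the image name is the one I submit with):

# Dockerfile (sketch)
FROM <my-spark-base-image>
# Bake the executor image property into the image's own Spark defaults,
# since it is not forwarded from the initial spark-submit:
RUN echo "spark.mesos.executor.docker.image myuser/myimage:0.0.2" \
    >> /some/spark/home/conf/spark-defaults.conf

Everything else (master URL, class, jar) can still be supplied on the
spark-submit command line as before.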


