[ 
https://issues.apache.org/jira/browse/SPARK-2691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14194329#comment-14194329
 ] 

Eduardo Jimenez commented on SPARK-2691:
----------------------------------------

I was looking at this, and I came up with a patch that simply passes 
ContainerInfo in the appropriate places with both Coarse and Fine grained mode.

The only part that strikes me as a bit of a kludge is how to pass the Spark 
configuration. 

So far, I've added:

spark.mesos.container.type
spark.mesos.container.docker.image

I would prefer to simply have Spark pass these through without much validation; 
otherwise Spark and Mesos have to be kept in sync with respect to what is 
supported. I could also add the network setting (HOST or BRIDGE).
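
For illustration, a spark-defaults.conf entry using the two proposed properties 
might look like the following (the image name is just a placeholder, not from 
any actual patch):

```
# Proposed pass-through settings (values are illustrative only)
spark.mesos.container.type          DOCKER
spark.mesos.container.docker.image  example/spark-executor:latest
```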

But other settings could be a kludge to provide. I would prefer not to pass 
Docker parameters for the CLI, as Mesos might not use those in the future 
anyway. Port mappings and volumes could be useful, but how should they be 
provided?

spark.mesos.container.volumes.A.container_path
spark.mesos.container.volumes.A.host_path
spark.mesos.container.volumes.A.mode
spark.mesos.container.volumes.B.container_path
spark.mesos.container.volumes.B.host_path
spark.mesos.container.volumes.B.mode

or 
spark.mesos.container.volumes = A:container_path:host_path:mode, B:...
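
If the compact comma-separated form wins out, parsing it is straightforward. A 
minimal sketch of what that could look like (the class and field names here are 
hypothetical, not from any existing patch):

```java
import java.util.ArrayList;
import java.util.List;

/** One volume mapping parsed from the proposed compact syntax. */
final class MesosVolumeSpec {
    final String name;          // the "A" / "B" label
    final String containerPath;
    final String hostPath;
    final String mode;          // e.g. "RW" or "RO"

    MesosVolumeSpec(String name, String containerPath, String hostPath, String mode) {
        this.name = name;
        this.containerPath = containerPath;
        this.hostPath = hostPath;
        this.mode = mode;
    }

    /** Parses "A:container_path:host_path:mode, B:..." into a list of specs. */
    static List<MesosVolumeSpec> parseAll(String conf) {
        List<MesosVolumeSpec> out = new ArrayList<>();
        for (String entry : conf.split(",")) {
            String[] parts = entry.trim().split(":");
            if (parts.length != 4) {
                throw new IllegalArgumentException("malformed volume spec: " + entry);
            }
            out.add(new MesosVolumeSpec(parts[0], parts[1], parts[2], parts[3]));
        }
        return out;
    }
}
```

One downside of the colon-separated form is that it couldn't express paths 
containing ":", though that is rare in practice.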

Any preference? I'll try to work on this tomorrow, put some tests together, and 
take it for a spin.


> Allow Spark on Mesos to be launched with Docker
> -----------------------------------------------
>
>                 Key: SPARK-2691
>                 URL: https://issues.apache.org/jira/browse/SPARK-2691
>             Project: Spark
>          Issue Type: Improvement
>          Components: Mesos
>            Reporter: Timothy Chen
>            Assignee: Timothy Chen
>              Labels: mesos
>
> Currently, to launch Spark with Mesos one must upload a tarball and specify 
> the executor URI to be passed in, which is downloaded on each slave, or even 
> on each execution, depending on whether coarse-grained mode is used.
> We want to make Spark able to support launching executors via a Docker image 
> that utilizes the recent Docker and Mesos integration work. 
> With that integration, Spark can simply specify a Docker image and the 
> options that are needed, and it should continue to work as-is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
