Hi Tim,

Can you please share your Dockerfiles and configuration? It would help a
lot; I am planning to publish a blog post on the same.

Ashish

On Thu, Mar 10, 2016 at 10:34 AM, Timothy Chen <t...@mesosphere.io> wrote:

> No, you don't need to install Spark on each slave; we have been running
> this setup at Mesosphere without any problem so far. Most likely it is a
> configuration problem, though there is a chance something is missing in the
> code to handle some cases.
>
> What version of Spark are you running? And did you also set the working
> directory in your image to be the Spark home?
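>
> For reference, here is a minimal sketch of what I mean (the base image and
> the paths are assumptions, adjust to your build):
>
>     FROM java:7-jre
>     # ADD auto-extracts a local tarball; Spark ends up under /usr/.
>     ADD spark-1.6.0-bin-hadoop2.6.tgz /usr/
>     ENV SPARK_HOME /usr/spark-1.6.0-bin-hadoop2.6
>     # Make the Spark home the working directory so the Mesos executor
>     # can find bin/spark-class.
>     WORKDIR /usr/spark-1.6.0-bin-hadoop2.6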
>
> Tim
>
>
> On Mar 10, 2016, at 3:11 AM, Ashish Soni <asoni.le...@gmail.com> wrote:
>
> You need to install Spark on each Mesos slave, and then, when starting the
> container, set the workdir to your Spark home so that it can find the
> spark-class script.
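>
> A quick sanity check on the image (using the image name from this thread):
>
>     docker run --rm echinthaka/mesos-spark:0.23.1-1.6.0-2.6 pwd
>
> If the workdir is set correctly, this should print the Spark home.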
>
> Ashish
>
> On Mar 10, 2016, at 5:22 AM, Guillaume Eynard Bontemps <
> g.eynard.bonte...@gmail.com> wrote:
>
> For an answer to my question see this:
> http://stackoverflow.com/a/35660466?noredirect=1.
>
> But for your problem, did you define the spark.mesos.executor.home property
> or something like that?
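>
> I.e. something along these lines on your spark-submit, if Spark lives under
> a different path inside the image (the path here is only an example):
>
>     --conf spark.mesos.executor.home=/usr/spark-1.6.0-bin-hadoop2.6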
>
> On Thu, Mar 10, 2016 at 04:26, Eran Chinthaka Withana <
> eran.chinth...@gmail.com> wrote:
>
>> Hi
>>
>> I'm also having this issue and cannot get the tasks to work inside Mesos.
>>
>> In my case, the spark-submit command is the following.
>>
>> $SPARK_HOME/bin/spark-submit \
>>   --class com.mycompany.SparkStarter \
>>   --master mesos://mesos-dispatcher:7077 \
>>   --name SparkStarterJob \
>>   --driver-memory 1G \
>>   --executor-memory 4G \
>>   --deploy-mode cluster \
>>   --total-executor-cores 1 \
>>   --conf spark.mesos.executor.docker.image=echinthaka/mesos-spark:0.23.1-1.6.0-2.6 \
>>   http://abc.com/spark-starter.jar
>>
>>
>> And the error I'm getting is the following.
>>
>> I0310 03:13:11.417009 131594 exec.cpp:132] Version: 0.23.1
>> I0310 03:13:11.419452 131601 exec.cpp:206] Executor registered on slave 20160223-000314-3439362570-5050-631-S0
>> sh: 1: /usr/spark-1.6.0-bin-hadoop2.6/bin/spark-class: not found
>>
>>
>> (I looked in the Spark JIRA and found that
>> https://issues.apache.org/jira/browse/SPARK-11759 is marked as closed
>> since https://issues.apache.org/jira/browse/SPARK-12345 is marked as
>> resolved.)
>>
>> I'd really appreciate any help here.
>>
>> Thanks,
>> Eran Chinthaka Withana
>>
>> On Wed, Feb 17, 2016 at 2:00 PM, g.eynard.bonte...@gmail.com <
>> g.eynard.bonte...@gmail.com> wrote:
>>
>>> Hi everybody,
>>>
>>> I am testing the use of Docker for executing Spark algorithms on Mesos. I
>>> managed to execute Spark in client mode with the executors inside Docker,
>>> but I wanted to go further and have my driver also running in a Docker
>>> container. Here I ran into behavior that I'm not sure is normal; let me
>>> try to explain.
>>>
>>> I submit my Spark application through the MesosClusterDispatcher using a
>>> command like:
>>>
>>> $ ./bin/spark-submit \
>>>   --class org.apache.spark.examples.SparkPi \
>>>   --master mesos://spark-master-1:7077 \
>>>   --deploy-mode cluster \
>>>   --conf spark.mesos.executor.docker.image=myuser/myimage:0.0.2 \
>>>   https://storage.googleapis.com/some-bucket/spark-examples-1.5.2-hadoop2.6.0.jar \
>>>   10
>>>
>>> My driver runs fine inside its Docker container, but the executors fail
>>> with:
>>> "sh: /some/spark/home/bin/spark-class: No such file or directory"
>>>
>>> Looking at the Mesos slave logs, I think the executors do not run inside
>>> Docker: "docker.cpp:775] No container info found, skipping launch". Since
>>> my Mesos slaves do not have Spark installed, the launch fails.
>>>
>>> *It seems that the Spark conf I gave to the first spark-submit is not
>>> transmitted to the conf of the driver's own submit* when it is launched in
>>> the Docker container. The only workaround I found is to modify my Docker
>>> image so that its Spark conf defines the spark.mesos.executor.docker.image
>>> property itself. This way my executors pick up the conf correctly and are
>>> launched inside Docker on Mesos. This seems a little complicated to me,
>>> and I feel the configuration passed to the initial spark-submit should be
>>> transmitted to the driver's submit...
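>>>
>>> For the record, that workaround is essentially a spark-defaults.conf baked
>>> into the image, something like:
>>>
>>>     # conf/spark-defaults.conf inside the Docker image
>>>     spark.mesos.executor.docker.image  myuser/myimage:0.0.2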
>>>
>>>
>>
