Hi Charles,

I thought that was fixed with your patch in the latest master now, right?

Ashish, yes, please give me your Docker image name (if it's in a public
registry) and what you've tried, and I can see what's wrong. I think it's
most likely just the configuration of where the Spark home folder is inside
the image.
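
For example, something along these lines (just a sketch; the paths and host
names below are placeholders rather than anything from your setup, and
spark.mesos.executor.home has to match wherever Spark is actually unpacked
inside the executor image):

  # Placeholder paths/hosts; adjust to match your image and cluster.
  ./bin/spark-submit \
    --master mesos://<mesos-master>:5050 \
    --conf spark.mesos.executor.home=/opt/spark \
    --conf spark.mesos.executor.docker.image=spark_driver:latest \
    --class org.apache.spark.examples.SparkPi \
    http://<http-server>/spark-examples-1.6.0-hadoop2.6.0.jar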

Tim

On Wed, Mar 2, 2016 at 2:28 PM, Charles Allen <charles.al...@metamarkets.com
> wrote:

> Re: Spark on Mesos.... Warning regarding disk space:
> https://issues.apache.org/jira/browse/SPARK-12330
>
> That's a Spark flaw I encountered on a very regular basis on Mesos. That
> and a few other annoyances are fixed in
> https://github.com/metamx/spark/tree/v1.5.2-mmx
>
> Here's another mild annoyance I've encountered:
> https://issues.apache.org/jira/browse/SPARK-11714
>
> On Wed, Mar 2, 2016 at 1:31 PM Ashish Soni <asoni.le...@gmail.com> wrote:
>
>> I have had no luck, and I would like to ask the Spark committers: will
>> this ever be designed to run on Mesos?
>>
>> A Spark app as a Docker container is not working at all on Mesos. If
>> anyone would like the code, I can send it over to have a look.
>>
>> Ashish
>>
>> On Wed, Mar 2, 2016 at 12:23 PM, Sathish Kumaran Vairavelu <
>> vsathishkuma...@gmail.com> wrote:
>>
>>> Try passing the jar using the --jars option.
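>>>
>>> For example (a sketch; the jar paths and master address below are
>>> placeholders):
>>>
>>>   # --jars ships extra jars to the driver and executor classpaths; the
>>>   # application jar is still passed as the last argument.
>>>   ./bin/spark-submit --master mesos://<mesos-master>:5050 \
>>>     --class org.apache.spark.examples.SparkPi \
>>>     --jars /path/to/extra-deps.jar \
>>>     /path/to/spark-examples-1.6.0-hadoop2.6.0.jar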
>>>
>>> On Wed, Mar 2, 2016 at 10:17 AM Ashish Soni <asoni.le...@gmail.com>
>>> wrote:
>>>
>>>> I made some progress, but now I am stuck at this point. Please help, as
>>>> it looks like I am close to getting it working.
>>>>
>>>> I have everything running in Docker containers, including the Mesos slave
>>>> and master.
>>>>
>>>> When I try to submit the Pi example, I get the error below:
>>>> *Error: Cannot load main class from JAR file:/opt/spark/Example*
>>>>
>>>> Below is the command I use to submit it as a Docker container:
>>>>
>>>> docker run -it --rm -e SPARK_MASTER="mesos://10.0.2.15:7077" \
>>>>   -e SPARK_IMAGE="spark_driver:latest" spark_driver:latest \
>>>>   ./bin/spark-submit --deploy-mode cluster --name "PI Example" \
>>>>   --class org.apache.spark.examples.SparkPi \
>>>>   --driver-memory 512m --executor-memory 512m --executor-cores 1 \
>>>>   http://10.0.2.15/spark-examples-1.6.0-hadoop2.6.0.jar
>>>>
>>>>
>>>> On Tue, Mar 1, 2016 at 2:59 PM, Timothy Chen <t...@mesosphere.io> wrote:
>>>>
>>>>> Can you go through the Mesos UI and look at the driver/executor log
>>>>> from the stderr file and see what the problem is?
>>>>>
>>>>> Tim
>>>>>
>>>>> On Mar 1, 2016, at 8:05 AM, Ashish Soni <asoni.le...@gmail.com> wrote:
>>>>>
>>>>> Not sure what the issue is, but I am getting the error below when I try
>>>>> to run the Spark Pi example:
>>>>>
>>>>> Blacklisting Mesos slave value: "5345asdasdasdkas234234asdasdasdasd"
>>>>>    due to too many failures; is Spark installed on it?
>>>>>     WARN TaskSchedulerImpl: Initial job has not accepted any resources; 
>>>>> check your cluster UI to ensure that workers are registered and have 
>>>>> sufficient resources
>>>>>
>>>>>
>>>>> On Mon, Feb 29, 2016 at 1:39 PM, Sathish Kumaran Vairavelu <
>>>>> vsathishkuma...@gmail.com> wrote:
>>>>>
>>>>>> Maybe the Mesos executor couldn't find the Spark image, or the
>>>>>> constraints are not satisfied. Check your Mesos UI to see whether the
>>>>>> Spark application appears in the Frameworks tab.
>>>>>>
>>>>>> On Mon, Feb 29, 2016 at 12:23 PM Ashish Soni <asoni.le...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> What is the best practice? I have everything running as Docker
>>>>>>> containers on a single host (Mesos and Marathon also as Docker
>>>>>>> containers), and everything comes up fine, but when I try to launch the
>>>>>>> Spark shell I get the error below:
>>>>>>>
>>>>>>>
>>>>>>> SQL context available as sqlContext.
>>>>>>>
>>>>>>> scala> val data = sc.parallelize(1 to 100)
>>>>>>> data: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at
>>>>>>> parallelize at <console>:27
>>>>>>>
>>>>>>> scala> data.count
>>>>>>> [Stage 0:>                                                    (0 + 0) / 2]
>>>>>>> 16/02/29 18:21:12 WARN TaskSchedulerImpl: Initial job has not accepted
>>>>>>> any resources; check your cluster UI to ensure that workers are
>>>>>>> registered and have sufficient resources
>>>>>>> 16/02/29 18:21:27 WARN TaskSchedulerImpl: Initial job has not
>>>>>>> accepted any resources; check your cluster UI to ensure that workers are
>>>>>>> registered and have sufficient resources
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Feb 29, 2016 at 12:04 PM, Tim Chen <t...@mesosphere.io>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> No, you don't have to run Mesos in Docker containers to run Spark in
>>>>>>>> Docker containers.
>>>>>>>>
>>>>>>>> Once you have a Mesos cluster running, you can then specify the Spark
>>>>>>>> configuration in your Spark job (e.g.
>>>>>>>> spark.mesos.executor.docker.image=mesosphere/spark:1.6), and Mesos will
>>>>>>>> automatically launch Docker containers for you.
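>>>>>>>>
>>>>>>>> For example, a spark-shell invocation along these lines (a sketch; the
>>>>>>>> master address is a placeholder for your own Mesos master):
>>>>>>>>
>>>>>>>>   # mesosphere/spark:1.6 is just the example executor image named above.
>>>>>>>>   ./bin/spark-shell \
>>>>>>>>     --master mesos://<mesos-master>:5050 \
>>>>>>>>     --conf spark.mesos.executor.docker.image=mesosphere/spark:1.6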
>>>>>>>>
>>>>>>>> Tim
>>>>>>>>
>>>>>>>> On Mon, Feb 29, 2016 at 7:36 AM, Ashish Soni <asoni.le...@gmail.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>> Yes, I read that, but there are not many details there.
>>>>>>>>>
>>>>>>>>> Is it true that we need to have Spark installed in each Mesos
>>>>>>>>> Docker container (master and slave)?
>>>>>>>>>
>>>>>>>>> Ashish
>>>>>>>>>
>>>>>>>>> On Fri, Feb 26, 2016 at 2:14 PM, Tim Chen <t...@mesosphere.io>
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> https://spark.apache.org/docs/latest/running-on-mesos.html should
>>>>>>>>>> be the best source. What problems were you running into?
>>>>>>>>>>
>>>>>>>>>> Tim
>>>>>>>>>>
>>>>>>>>>> On Fri, Feb 26, 2016 at 11:06 AM, Yin Yang <yy201...@gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> Have you read this?
>>>>>>>>>>> https://spark.apache.org/docs/latest/running-on-mesos.html
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Feb 26, 2016 at 11:03 AM, Ashish Soni <
>>>>>>>>>>> asoni.le...@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi all,
>>>>>>>>>>>>
>>>>>>>>>>>> Is there any proper documentation on how to run Spark on Mesos? I
>>>>>>>>>>>> have been trying for the last few days and am not able to make it work.
>>>>>>>>>>>>
>>>>>>>>>>>> Please help.
>>>>>>>>>>>>
>>>>>>>>>>>> Ashish
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>
>>>>
>>
