Re: Spark 1.5 on Mesos

2016-03-04 Thread Ashish Soni
It did not help; same error. Is this the issue I am running into?
https://issues.apache.org/jira/browse/SPARK-11638

Warning: Local jar /mnt/mesos/sandbox/spark-examples-1.6.0-hadoop2.6.0.jar
does not exist, skipping.
java.lang.ClassNotFoundException: org.apache.spark.examples.SparkPi
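
One workaround sketch, assuming the examples jar is also baked into the
spark_driver image at /opt/spark/lib (as the --jars path earlier in this
thread suggests): point the driver at the in-image copy with a local: URI,
so nothing has to be fetched into the sandbox at all. Spark treats a
local:/ path as a file expected to exist inside every container:

docker run -it --rm -m 2g -e SPARK_MASTER="mesos://10.0.2.15:7077" \
  -e SPARK_IMAGE="spark_driver:latest" spark_driver:latest \
  ./bin/spark-submit --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  local:/opt/spark/lib/spark-examples-1.6.0-hadoop2.6.0.jar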

Re: Spark 1.5 on Mesos

2016-03-03 Thread Tim Chen
Ah, I see. I think it's because you've launched the Mesos slave in a Docker
container, and when you also launch the executor in a container, it's not
able to mount the sandbox into the other container, since the slave is in a
chroot.

Can you try mounting a volume from the host for your slave's work_dir when
you launch the slave?
docker run -v /tmp/mesos/slave:/tmp/mesos/slave mesos_image mesos-slave
--work_dir=/tmp/mesos/slave 
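
A slightly fuller sketch of that agent launch (flag names per Mesos 0.25;
the docker.sock mount is an extra assumption, needed only if the
containerized agent is itself expected to start Docker executors):

docker run -d --name mesos-slave \
  -v /tmp/mesos/slave:/tmp/mesos/slave \
  -v /var/run/docker.sock:/var/run/docker.sock \
  mesos_image mesos-slave \
  --master=10.0.2.15:5050 \
  --containerizers=docker,mesos \
  --work_dir=/tmp/mesos/slave

With the work_dir on a real host path, the executor container can mount the
sandbox from the host instead of from inside the agent's chroot.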

Tim

On Thu, Mar 3, 2016 at 4:42 AM, Ashish Soni  wrote:

> Hi Tim,
>
> I think I know the problem, but I do not have a solution: the Mesos slave
> is supposed to download the jars from the specified URI and place them in
> the $MESOS_SANDBOX location, but it is not downloading them, and I am not
> sure why. See the logs below.
>
> My command looks like this:
>
> docker run -it --rm -m 2g -e SPARK_MASTER="mesos://10.0.2.15:7077"  -e
> SPARK_IMAGE="spark_driver:latest" spark_driver:latest ./bin/spark-submit
> --deploy-mode cluster --class org.apache.spark.examples.SparkPi
> http://10.0.2.15/spark-examples-1.6.0-hadoop2.6.0.jar
>
> [root@Mindstorm spark-1.6.0]# docker logs d22d8e897b79
> Warning: Local jar
> /mnt/mesos/sandbox/spark-examples-1.6.0-hadoop2.6.0.jar does not exist,
> skipping.
> java.lang.ClassNotFoundException: org.apache.spark.examples.SparkPi
> at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:278)
> at org.apache.spark.util.Utils$.classForName(Utils.scala:174)
> at
> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:689)
> at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>
> When I run docker inspect, I see that the command below gets issued:
>
> "Cmd": [
> "-c",
> "./bin/spark-submit --name org.apache.spark.examples.SparkPi
> --master mesos://10.0.2.15:5050 --driver-cores 1.0 --driver-memory 1024M
> --class org.apache.spark.examples.SparkPi 
> $MESOS_SANDBOX/spark-examples-1.6.0-hadoop2.6.0.jar
> "
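
For what it's worth, in cluster mode the jar URL is handed to the Mesos
fetcher through spark.mesos.uris (it shows up in the SPARK_EXECUTOR_OPTS
later in this thread). A hedged sketch, in case setting it explicitly on
submit changes anything:

./bin/spark-submit --deploy-mode cluster \
  --conf spark.mesos.uris=http://10.0.2.15/spark-examples-1.6.0-hadoop2.6.0.jar \
  --class org.apache.spark.examples.SparkPi \
  http://10.0.2.15/spark-examples-1.6.0-hadoop2.6.0.jar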

Re: Spark 1.5 on Mesos

2016-03-02 Thread Tim Chen
You shouldn't need to specify --jars at all since you only have one jar.

The error is pretty odd, as it suggests it's trying to load
/opt/spark/Example, but that doesn't appear anywhere in your image or
command.

Can you paste the stdout from the driver task launched by the cluster
dispatcher? It shows the spark-submit command that was eventually run.
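
If the UI is hard to reach, the sandbox files should also be readable over
HTTP from the agent's files endpoint. A sketch only: the endpoint name is
from Mesos ~0.25, and the sandbox path is a placeholder you would copy from
the task detail in the master UI or /state.json:

curl "http://10.0.2.15:5051/files/read.json?path=<driver-sandbox-path>/stdout&offset=0"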


Tim




Re: Spark 1.5 on Mesos

2016-03-02 Thread Ashish Soni
See below; I attached the Dockerfile used to build the Spark image (by the
way, I just upgraded to 1.6).

I am running the setup below:

Mesos Master - Docker Container
Mesos Slave 1 - Docker Container
Mesos Slave 2 - Docker Container
Marathon - Docker Container
Spark MESOS Dispatcher - Docker Container

When I submit the SparkPi example job using the command below:

docker run -it --rm -m 2g -e SPARK_MASTER="mesos://10.0.2.15:7077" -e
SPARK_IMAGE="spark_driver:latest" spark_driver:latest ./bin/spark-submit
--deploy-mode cluster --name "PI Example" --class
org.apache.spark.examples.SparkPi
http://10.0.2.15/spark-examples-1.6.0-hadoop2.6.0.jar --jars
/opt/spark/lib/spark-examples-1.6.0-hadoop2.6.0.jar --verbose

Below is the error:
Error: Cannot load main class from JAR file:/opt/spark/Example
Run with --help for usage help or --verbose for debug output


When I run docker inspect on the stopped/dead container, I see the output
below. What is interesting is that someone (or the executor) replaced the
original command with the one highlighted below, and I do not see the
executor downloading the JAR. Is this a bug I am hitting, or is it supposed
to work this way and I am missing some configuration?

"Env": [
"SPARK_IMAGE=spark_driver:latest",
"SPARK_SCALA_VERSION=2.10",
"SPARK_VERSION=1.6.0",
"SPARK_EXECUTOR_URI=
http://d3kbcqa49mib13.cloudfront.net/spark-1.6.0-bin-hadoop2.6.tgz;,
"MESOS_NATIVE_JAVA_LIBRARY=/usr/lib/libmesos-0.25.0.so",
"SPARK_MASTER=mesos://10.0.2.15:7077",

"SPARK_EXECUTOR_OPTS=-Dspark.executorEnv.MESOS_NATIVE_JAVA_LIBRARY=/usr/lib/
libmesos-0.25.0.so -Dspark.jars=
http://10.0.2.15/spark-examples-1.6.0-hadoop2.6.0.jar
-Dspark.mesos.mesosExecutor.cores=0.1 -Dspark.driver.supervise=false -
Dspark.app.name=PI Example -Dspark.mesos.uris=
http://10.0.2.15/spark-examples-1.6.0-hadoop2.6.0.jar
-Dspark.mesos.executor.docker.image=spark_driver:latest
-Dspark.submit.deployMode=cluster -Dspark.master=mesos://10.0.2.15:7077
-Dspark.driver.extraClassPath=/opt/spark/custom/lib/*
-Dspark.executor.extraClassPath=/opt/spark/custom/lib/*
-Dspark.executor.uri=
http://d3kbcqa49mib13.cloudfront.net/spark-1.6.0-bin-hadoop2.6.tgz
-Dspark.mesos.executor.home=/opt/spark",
"MESOS_SANDBOX=/mnt/mesos/sandbox",

"MESOS_CONTAINER_NAME=mesos-e47f8d4c-5ee1-4d01-ad07-0d9a03ced62d-S1.43c08f82-e508-4d57-8c0b-fa05bee77fd6",

"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"HADOOP_VERSION=2.6",
"SPARK_HOME=/opt/spark"
],
"Cmd": [
"-c",
   * "./bin/spark-submit --name PI Example --master
mesos://10.0.2.15:5050  --driver-cores 1.0
--driver-memory 1024M --class org.apache.spark.examples.SparkPi
$MESOS_SANDBOX/spark-examples-1.6.0-hadoop2.6.0.jar --jars
/opt/spark/lib/spark-examples-1.6.0-hadoop2.6.0.jar --verbose"*
],
"Image": "spark_driver:latest",

Re: Spark 1.5 on Mesos

2016-03-02 Thread Tim Chen
Hi Charles,

I thought that was fixed with your patch in the latest master now, right?

Ashish, yes, please give me your Docker image name (if it's in the public
registry) and what you've tried, and I can see what's wrong. I think it's
most likely just the configuration of where the Spark home folder is in the
image.

Tim


Re: Spark 1.5 on Mesos

2016-03-02 Thread Charles Allen
Re: the Spark on Mesos warning regarding disk space:
https://issues.apache.org/jira/browse/SPARK-12330

That's a Spark flaw I encountered on a very regular basis on Mesos. That
and a few other annoyances are fixed in
https://github.com/metamx/spark/tree/v1.5.2-mmx

Here's another mild annoyance I've encountered:
https://issues.apache.org/jira/browse/SPARK-11714

Re: Spark 1.5 on Mesos

2016-03-02 Thread Ashish Soni
I have had no luck, and I would like to ask the Spark committers: will this
ever be designed to run on Mesos?

A Spark app as a Docker container is not working at all on Mesos. If anyone
would like the code, I can send it over to have a look.

Ashish


Re: Spark 1.5 on Mesos

2016-03-02 Thread Sathish Kumaran Vairavelu
Try passing the jar using the --jars option.
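
For example, a sketch reusing the jar URL from the submit command in this
thread:

./bin/spark-submit --deploy-mode cluster --name "PI Example" \
  --class org.apache.spark.examples.SparkPi \
  --jars http://10.0.2.15/spark-examples-1.6.0-hadoop2.6.0.jar \
  http://10.0.2.15/spark-examples-1.6.0-hadoop2.6.0.jar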

Re: Spark 1.5 on Mesos

2016-03-02 Thread Ashish Soni
I made some progress, but now I am stuck at this point. Please help, as it
looks like I am close to getting it working.

I have everything running in Docker containers, including the Mesos slave
and master.

When I try to submit the Pi example, I get the error below:
Error: Cannot load main class from JAR file:/opt/spark/Example

Below is the command I use to submit it as a Docker container:

docker run -it --rm -e SPARK_MASTER="mesos://10.0.2.15:7077"  -e
SPARK_IMAGE="spark_driver:latest" spark_driver:latest ./bin/spark-submit
--deploy-mode cluster --name "PI Example" --class
org.apache.spark.examples.SparkPi --driver-memory 512m --executor-memory
512m --executor-cores 1
http://10.0.2.15/spark-examples-1.6.0-hadoop2.6.0.jar
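
A quick check worth trying, purely a guess from the error text: the failing
path ends in "Example", the second word of the app name, so resubmitting
with a name that contains no spaces may show whether the quoting of
--name "PI Example" is being lost somewhere along the way:

./bin/spark-submit --deploy-mode cluster --name PiExample \
  --class org.apache.spark.examples.SparkPi \
  http://10.0.2.15/spark-examples-1.6.0-hadoop2.6.0.jar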



Re: Spark 1.5 on Mesos

2016-03-01 Thread Timothy Chen
Can you go through the Mesos UI, look at the driver/executor log from the
stderr file, and see what the problem is?

Tim


Re: Spark 1.5 on Mesos

2016-03-01 Thread Ashish Soni
Not sure what the issue is, but I am getting the error below when I try to
run the Spark Pi example:

Blacklisting Mesos slave value: "5345asdasdasdkas234234asdasdasdasd"
   due to too many failures; is Spark installed on it?
WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered
and have sufficient resources
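
The "is Spark installed on it?" hint usually means the executor could not
start on that slave. A sketch of the two usual fixes, with property names
from the Spark on Mesos docs and values guessed from this thread:

# either let every agent fetch a Spark distribution:
--conf spark.executor.uri=http://d3kbcqa49mib13.cloudfront.net/spark-1.6.0-bin-hadoop2.6.tgz
# or run executors in an image that already contains Spark:
--conf spark.mesos.executor.docker.image=spark_driver:latest
--conf spark.mesos.executor.home=/opt/spark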



Re: Spark 1.5 on Mesos

2016-02-29 Thread Ashish Soni
What is the best practice? I have everything running as Docker containers
on a single host (Mesos and Marathon also as Docker containers), and
everything comes up fine, but when I try to launch the Spark shell I get
the error below:


SQL context available as sqlContext.

scala> val data = sc.parallelize(1 to 100)
data: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at
parallelize at <console>:27

scala> data.count
[Stage 0:> (0 + 0) / 2]
16/02/29 18:21:12 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient resources
16/02/29 18:21:27 WARN TaskSchedulerImpl: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient resources
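
A minimal client-mode launch sketch for comparison, assuming the master and
library paths used elsewhere in this thread (note the shell talks to the
Mesos master on 5050 directly, not to the dispatcher on 7077):

export MESOS_NATIVE_JAVA_LIBRARY=/usr/lib/libmesos-0.25.0.so
./bin/spark-shell --master mesos://10.0.2.15:5050 \
  --conf spark.executor.uri=http://d3kbcqa49mib13.cloudfront.net/spark-1.6.0-bin-hadoop2.6.tgz

If it still accepts no resources, the master UI's Frameworks and Slaves
tabs should show whether offers are being made at all.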




Re: Spark 1.5 on Mesos

2016-02-29 Thread Tim Chen
No, you don't have to run Mesos in Docker containers to run Spark in Docker
containers.

Once you have a Mesos cluster running, you can specify the Spark
configuration in your Spark job (e.g.,
spark.mesos.executor.docker.image=mesosphere/spark:1.6),
and Mesos will automatically launch Docker containers for you.
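
For example, a minimal sketch (the image name is the one from Tim's
example; the jar URL is the one used elsewhere in this thread):

./bin/spark-submit --master mesos://10.0.2.15:5050 \
  --conf spark.mesos.executor.docker.image=mesosphere/spark:1.6 \
  --class org.apache.spark.examples.SparkPi \
  http://10.0.2.15/spark-examples-1.6.0-hadoop2.6.0.jar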

Tim


Re: Spark 1.5 on Mesos

2016-02-29 Thread Ashish Soni
Yes, I read that, and there are not many details there.

Is it true that we need to have Spark installed in each Mesos Docker
container (master and slave)?

Ashish


Re: Spark 1.5 on Mesos

2016-02-26 Thread Tim Chen
https://spark.apache.org/docs/latest/running-on-mesos.html should be the
best source. What problems were you running into?

Tim


Spark 1.5 on Mesos

2016-02-26 Thread Ashish Soni
Hi all,

Is there any proper documentation on how to run Spark on Mesos? I have
been trying for the last few days and have not been able to make it work.

Please help

Ashish


Re: Spark 1.5 on Mesos

2016-02-26 Thread Yin Yang
Have you read this?
https://spark.apache.org/docs/latest/running-on-mesos.html
