Hi Eran,
I need to investigate, but perhaps that's true; we're using SPARK_JAVA_OPTS
to pass all the options and not --conf.
I'll take a look at the bug, but in the meantime please try the workaround
and see if that fixes your problem.
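A minimal sketch of what submitting with explicit --conf options (instead
of SPARK_JAVA_OPTS) might look like; the property names are standard Spark
configuration keys, but the master URL and values are placeholders:

```shell
# Sketch: pass Spark options explicitly via --conf rather than
# SPARK_JAVA_OPTS. Master URL, memory value, and jar path are placeholders.
./bin/spark-submit \
  --master mesos://10.0.2.15:5050 \
  --conf spark.executor.memory=2g \
  --conf spark.mesos.coarse=true \
  --class org.apache.spark.examples.SparkPi \
  /path/to/spark-examples.jar 100
```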
Tim
On Thu, Mar 10, 2016 at 10:08 AM, Eran Chinthaka Withana <
Here is an example Dockerfile; it's a bit dated now, but if you build
it today it should still work:
https://github.com/tnachen/spark/tree/dockerfile/mesos_docker
Tim
On Thu, Mar 10, 2016 at 8:06 AM, Ashish Soni wrote:
> Hi Tim ,
>
> Can you please share your
> see below, the command that gets issued:
>
> "Cmd": [
> "-c",
> "./bin/spark-submit --name org.apache.spark.examples.SparkPi
> --master mesos://10.0.2.15:5050 --driver-cores 1.0 --driver-memory 1024M
> --class org.apache.spark.examples.SparkPi
> "Cmd": [
> "-c",
>* "./bin/spark-submit --name PI Example --master
> mesos://10.0.2.15:5050 <http://10.0.2.15:5050> --driver-cores 1.0
> --driver-memory 1024M --class org.apache.spark.examples.SparkPi
> $MESOS_SANDBOX/spark-exa
>>>>>>
>>>>>>> What is the best practice? I have everything running as Docker
>>>>>>> containers on a single host (Mesos and Marathon also as Docker containers)
>>>>>>> and everything comes up fin
https://spark.apache.org/docs/latest/running-on-mesos.html should be the
best source, what problems were you running into?
Tim
On Fri, Feb 26, 2016 at 11:06 AM, Yin Yang wrote:
> Have you read this ?
> https://spark.apache.org/docs/latest/running-on-mesos.html
>
> On Fri,
Mesos does provide some benefits and features, such as the ability to
launch all the Spark pieces in Docker, plus Mesos resource scheduling
features (weights, roles); and if you plan to also use HDFS/Cassandra,
there are existing frameworks that are actively maintained by us.
That said when
Hi Duc,
Are you running Spark on Mesos in cluster mode? What does your cluster-mode
submission look like, and which version of Spark are you running?
Tim
On Sat, Jan 30, 2016 at 8:19 AM, PhuDuc Nguyen
wrote:
> I have a spark job running on Mesos in multi-master and supervise mode.
> ...but no details provided. Please help
>
>
> Thanks
>
> Sathish
>
>
>
>
> On Mon, Sep 21, 2015 at 11:54 AM Tim Chen <t...@mesosphere.io> wrote:
>
>> Hi John,
>>
>> There is no other blog post yet, I'm thinking to do a series of posts but
>> so
Do you have jobs enqueued? If none of the jobs matches an offer, the
dispatcher will just decline it.
What are your job's resource specifications?
Tim
On Fri, Oct 2, 2015 at 11:34 AM, Alan Braithwaite
wrote:
> Hey All,
>
> Using spark with mesos and docker.
>
> I'm wondering if
>> ...Spark dispatcher UI, you should see the port in the dispatcher logs itself.
>
>
> Yes, this job is not listed under that UI. Hence my confusion.
>
> Thanks,
> - Alan
>
> On Fri, Oct 2, 2015 at 11:49 AM, Tim Chen <t...@mesosphere.io> wrote:
>
>> So if t
ginning of value"
>
> (that's coming from the spark-dispatcher docker).
>
> Thanks!
> - Alan
>
> On Fri, Oct 2, 2015 at 11:36 AM, Tim Chen <t...@mesosphere.io> wrote:
>
>> Do you have jobs enqueued? And if none of the jobs matches any offer it
>> will j
>> ...trying to understand what is different in fine-grained vs coarse
>> mode, other than allocation of multiple Mesos tasks vs one Mesos task.
>> Clearly Spark is not managing memory in the same way.
>>
>> Thanks,
>> -Utkarsh
>>
>>
>> On Fri, Sep 25, 2015
>> 15/09/22 20:18:17 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:
>> OutputCommitCoordinator stopped!
What configuration have you used, and what is the slaves' configuration?
Possibly all the other nodes either don't have enough resources, or are
using another role that's preventing the executor from being launched.
Tim
On Mon, Sep 21, 2015 at 1:58 PM, John Omernik wrote:
>
Hi Utkarsh,
Just to be sure: did you originally set coarse to false and then switch it
to true, or is it the other way around?
Also, what's the exception/stack trace from when the driver crashed?
Coarse-grain mode pre-starts all the Spark executor backends, so it has the
least overhead compared to fine-grain. There
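Switching between the two modes is a single configuration flag;
spark.mesos.coarse is the real property name, while the master URL and jar
path below are placeholders:

```shell
# Sketch: toggle coarse-grain vs fine-grain Mesos scheduling.
# spark.mesos.coarse=true pre-starts long-lived executor backends;
# false launches executors per task. Master URL/jar path are placeholders.
./bin/spark-submit \
  --master mesos://10.0.2.15:5050 \
  --conf spark.mesos.coarse=true \
  --class org.apache.spark.examples.SparkPi \
  /path/to/spark-examples.jar 100
```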
Hi John,
Sorry, I haven't had time to respond to your questions over the weekend.
If you're running client mode, to use the Docker/Mesos integration you
minimally just need to set the image configuration
'spark.mesos.executor.docker.image' as stated in the documentation, and
Spark will use this
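Setting the documented property is a one-liner on the submit command;
'spark.mesos.executor.docker.image' is the property named above, while the
image name and master URL here are hypothetical:

```shell
# Sketch: have Spark launch its Mesos executors inside a Docker image.
# The image name and master URL are hypothetical examples.
./bin/spark-submit \
  --master mesos://10.0.2.15:5050 \
  --conf spark.mesos.executor.docker.image=myrepo/spark:latest \
  --class org.apache.spark.examples.SparkPi \
  /path/to/spark-examples.jar 100
```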
Hi John,
There is no other blog post yet; I'm thinking of doing a series of posts,
but so far I haven't had time for that.
Running Spark in Docker containers makes distributing Spark versions easy:
it's simple to upgrade, and the image is automatically cached on the
slaves, so the same image just runs right
>> at least allow the user to inform the dispatcher through spark-submit that
>> those properties will be available once the job starts.
>>
>> Finally, I don't think the dispatcher should crash in this event. It
>> seems not exceptional that a job is misconfigured wh
Hi Philip,
I've included documentation in the Spark/Mesos doc (
http://spark.apache.org/docs/latest/running-on-mesos.html), where you can
start the MesosShuffleService with the sbin/start-mesos-shuffle-service.sh
script.
For Mesos, the shuffle service needs to be started manually on each slave
(one
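Concretely, on each slave you'd run the shipped script and then enable the
service for your jobs; the script name is the one mentioned above, and
spark.shuffle.service.enabled / spark.dynamicAllocation.enabled are
standard Spark properties, but treat the overall flow as a sketch:

```shell
# Sketch: start the Mesos external shuffle service on each slave ...
./sbin/start-mesos-shuffle-service.sh

# ... then enable it (typically together with dynamic allocation) per job.
./bin/spark-submit \
  --master mesos://10.0.2.15:5050 \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.enabled=true \
  --class org.apache.spark.examples.SparkPi \
  /path/to/spark-examples.jar 100
```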
> ...an issue regarding improvement of the docs? For those of us who are
> gaining the experience, having such a pointer is very helpful.
>
> Tom
>
> From: Tim Chen <t...@mesosphere.io>
> Date: Thursday, September 10, 2015 at 10:25 AM
> To: Tom Waterhouse <tomwa...@cisco.com>
Hi Tom,
Sorry the documentation isn't really rich; it probably assumes users
understand how Mesos and frameworks work.
First I need to explain the rationale for creating the dispatcher. If
you're not familiar with Mesos yet: each node in your datacenter has a
Mesos slave installed, where
Hi Adrian,
Spark expects a specific naming of the tgz and also of the folder name
inside, as generated by running make-distribution.sh --tgz in the Spark
source folder.
If you use a Spark 1.4 tgz generated with that script, upload it to HDFS
again with the same name, and fix the URI, then
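The steps above can be sketched as follows; make-distribution.sh --tgz is
the script named in the thread, while the HDFS path, tarball name, and
the spark.executor.uri value are placeholders:

```shell
# Sketch: build a correctly-named Spark distribution tarball and publish
# it to HDFS so Mesos executors can fetch it. Paths/names are placeholders.
./make-distribution.sh --tgz

hadoop fs -mkdir -p /spark
hadoop fs -put spark-1.4.0-bin.tgz /spark/

# Then point executors at it on submit:
#   --conf spark.executor.uri=hdfs:///spark/spark-1.4.0-bin.tgz
```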
better control.
Thanks,
Ajay
On Wed, Aug 12, 2015 at 4:18 AM, Tim Chen t...@mesosphere.io wrote:
Yes, the options are not that configurable yet, but I think it's not hard
to change.
I actually have a patch out specifically to make the number of CPUs per
executor configurable in coarse-grain mode
I'm not sure what you're looking for, since you can't really compare
Standalone with YARN or Mesos: Standalone assumes the Spark workers/master
own the cluster, while YARN/Mesos try to share the cluster among different
applications/frameworks.
And when you refer to resource
? Is there a default number it
assumes?
On Mon, Jan 5, 2015 at 5:07 PM, Tim Chen t...@mesosphere.io wrote:
Forgot to hit reply-all.
-- Forwarded message --
From: Tim Chen t...@mesosphere.io
Date: Sun, Jan 4, 2015 at 10:46 PM
Subject: Re: Controlling number of executors on Mesos vs
Hi Anton,
In client mode we haven't populated the web UI link; we only did so for
cluster mode.
If you like, you can open a JIRA; it should be an easy ticket for anyone
to work on.
Tim
On Wed, Jul 29, 2015 at 4:27 AM, Anton Kirillov antonv.kiril...@gmail.com
wrote:
Hi everyone,
I’m trying to
Hi Haripriya,
Your master has registered its public IP as 127.0.0.1:5050, which the
slave node won't be able to reach.
If Mesos didn't pick up the right IP, you can specify one yourself via the
--ip flag.
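For example (--ip is the real mesos-master flag; the address and work
directory here are placeholders):

```shell
# Sketch: start the Mesos master advertising a reachable IP instead of the
# loopback address it auto-detected. Address and paths are placeholders.
mesos-master --ip=10.0.2.15 --work_dir=/var/lib/mesos
```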
Tim
On Mon, Jul 27, 2015 at 8:32 PM, Haripriya Ayyalasomayajula
It depends on how you run the 1.3/1.4 versions of Spark: if you're giving
it different Docker images / tarballs of Spark, technically it should
work, since at the end of the day it's just launching a driver for you.
However, I haven't really tried it so let me know if you run into problems
with it.
Tim
*Sent: *Friday, June 26, 2015 6:20 PM
*To: *Dave Ariens
*Cc: *Tim Chen; Olivier Girardot; user@spark.apache.org
*Subject: *Re: Accessing Kerberos Secured HDFS Resources from Spark on
Mesos
On Fri, Jun 26, 2015 at 3:09 PM, Dave Ariens dari...@blackberry.com
wrote:
Would there be any way
Mesos does support running containers as a specific user passed to it.
Thanks for chiming in, what else does YARN do with Kerberos besides keytab
file and user?
Tim
On Fri, Jun 26, 2015 at 1:20 PM, Marcelo Vanzin van...@cloudera.com wrote:
On Fri, Jun 26, 2015 at 1:13 PM, Tim Chen t
So correct me if I'm wrong: it sounds like all you need is a principal
user name and a keytab file downloaded, right?
I'm adding support to the Spark framework to download additional files
alongside your executor and driver, and one workaround is to specify a
user principal and keytab file that
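As a sketch of the workaround being described, one could ship the keytab
alongside the job with the generic --files option (a standard spark-submit
flag); the principal, paths, and class name here are hypothetical:

```shell
# Sketch: distribute a keytab with the job so the driver/executors can
# authenticate against it. Principal, paths, and class are hypothetical.
./bin/spark-submit \
  --master mesos://10.0.2.15:5050 \
  --files /etc/security/keytabs/spark.keytab \
  --class com.example.MyApp \
  /path/to/app.jar

# Inside the task sandbox the file is available by its base name,
# e.g. for: kinit -kt spark.keytab user@EXAMPLE.COM
```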
It seems like there is another thread going on:
http://answers.mapr.com/questions/163353/spark-from-apache-downloads-site-for-mapr.html
I'm not particularly sure why; it seems the problem is that getting the
current context class loader returns null in this instance.
Do you have some
[1]
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/Logging.scala#L128
[2]
https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala#L77
On Thu, May 28, 2015 at 7:50 PM, Tim Chen t...@mesosphere.io
-- Forwarded message --
From: Tim Chen t...@mesosphere.io
Date: Thu, May 28, 2015 at 10:49 AM
Subject: Re: [Streaming] Configure executor logging on Mesos
To: Gerard Maas gerard.m...@gmail.com
Hi Gerard,
The log line you referred to is not Spark logging but Mesos' own logging
Can you share your exact spark-submit command line?
Also, cluster mode is not released yet (it's coming in 1.4) and doesn't
support spark-shell, so I think you're just using client mode, unless
you're on the latest master.
Tim
On Tue, May 19, 2015 at 8:57 AM, Panagiotis Garefalakis panga...@gmail.com
Hi Ankur,
This is a great question as I've heard similar concerns about Spark on
Mesos.
When I started contributing to Spark on Mesos approximately half a year
ago, the Mesos scheduler and related code hadn't really gotten much
attention from anyone and were pretty much in maintenance mode.
?
Many thanks,
Sander
On Fri, May 1, 2015 at 8:35 AM Tim Chen t...@mesosphere.io wrote:
Hi Stephen,
It looks like Mesos slave was most likely not able to launch some mesos
helper processes (fetcher probably?).
How did you install Mesos? Did you build from source yourself?
Please install Mesos through a package, or if building from source, run
make install and run from the installed
Hi Stephen,
Sometimes it's just missing something simple, like a user name problem or
a file dependency, etc.
Can you share what's in the stdout/stderr in your task sandbox directory
(available via the Mesos UI, by clicking on the task and then sandbox)?
It would also be super helpful if you can find in the
Linux OOM throws SIGTERM, but if I remember correctly the JVM handles heap
memory limits differently: it throws OutOfMemoryError and eventually sends
SIGINT.
Not sure what happened, but the worker simply received a SIGTERM signal,
so perhaps the daemon was terminated by someone or by a parent process.
(Adding spark user list)
Hi Tom,
If I understand correctly, you're saying that you're running into memory
problems because the scheduler is allocating too many CPUs and not enough
memory to accommodate them, right?
In the case of fine-grain mode I don't think that's a problem, since we
have a fixed
Hi Ankur,
There isn't a way to do that yet, but it's simple to add.
Can you create a JIRA in Spark for this?
Thanks!
Tim
On Fri, Apr 3, 2015 at 1:08 PM, Ankur Chauhan achau...@brightcove.com
wrote:
Hi,
I am trying to figure out if there is
Hi there,
It looks like launching the executor (or one of the processes, like the
fetcher fetching the URIs) was failing because of the dependency problem
you see. Your mesos-slave shouldn't be able to run, though; were you
running a 0.20.0 slave and upgraded to 0.21.0? We introduced the
Hi Gerard,
As others have mentioned, I believe you're hitting MESOS-1688; can you
upgrade to the latest Mesos release (0.21.1) and let us know if it
resolves your problem?
Thanks,
Tim
On Tue, Jan 27, 2015 at 10:39 AM, Sam Bessalah samkiller@gmail.com
wrote:
Hi Geraard,
isn't this the same
Just throwing this out here: there is an existing PR to add Docker support
to the Spark framework, for launching executors with a Docker image:
https://github.com/apache/spark/pull/3074
Hopefully this will be merged sometime.
Tim
On Thu, Jan 15, 2015 at 9:18 AM, Nicholas Chammas
nicholas.cham...@gmail.com
Hi Ethan,
How are you specifying the master to Spark?
Recovering from master failover is already handled by the underlying Mesos
scheduler, but you have to use ZooKeeper instead of directly passing in
the master URIs.
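Pointing Spark at ZooKeeper-based master discovery looks roughly like
this; zk://.../mesos is the standard Mesos convention, while the hosts,
port, and jar path here are placeholders:

```shell
# Sketch: use ZooKeeper-based master discovery so the Spark driver
# survives Mesos master failover. Hosts and paths are placeholders.
./bin/spark-submit \
  --master mesos://zk://zk1:2181,zk2:2181,zk3:2181/mesos \
  --class org.apache.spark.examples.SparkPi \
  /path/to/spark-examples.jar 100
```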
Tim
On Mon, Jan 12, 2015 at 12:44 PM, Ethan Wolf
How did you run this benchmark, and is there an open version I can try it
with?
And what are your configurations, like spark.locality.wait, etc.?
Tim
On Thu, Jan 8, 2015 at 11:44 AM, mvle m...@us.ibm.com wrote:
Hi,
I've noticed running Spark apps on Mesos is significantly slower compared
to
Hi Xuelin,
I can only speak about Mesos mode. There are two modes of management in
Spark's Mesos scheduler: fine-grain mode and coarse-grain mode.
In fine-grain mode, each Spark task launches one or more Spark executors
that only live through the lifetime of the task. So it's
lifetime? Or on the stage level?
One more question about the Mesos fine-grain mode: how big is the overhead
of resource allocation and release? In MapReduce, noticeable time is spent
waiting for resource allocation. What about Mesos fine-grain mode?
On Thu, Jan 8, 2015 at 3:07 PM, Tim Chen
Forgot to hit reply-all.
-- Forwarded message --
From: Tim Chen t...@mesosphere.io
Date: Sun, Jan 4, 2015 at 10:46 PM
Subject: Re: Controlling number of executors on Mesos vs YARN
To: mvle m...@us.ibm.com
Hi Mike,
You're correct, there is no such setting for Mesos coarse
,
Josh
On 24 December 2014 at 06:22, Tim Chen t...@mesosphere.io wrote:
Hi Josh,
If you want to cap the amount of memory per executor in coarse-grain mode,
then yes, you only get the 240GB of memory you mentioned. What's the
reason you don't want to raise the amount of memory you use per executor?
In coarse-grain mode the Spark executor is long-lived, and it