We are using the Mesos integration at Premier (https://www.premierinc.com/).
Obviously, with the move to the Attic we will likely move away from Mesos in
the future. I think deprecating the Mesos integration makes sense. We
would probably continue to utilize the Spark Mesos components for
Unfortunate about Mesos. +1 on deprecation of the Mesos integration.
Regards,
Mridul
On Wed, Apr 7, 2021 at 7:12 AM Sean Owen wrote:
I noted that Apache Mesos is moving to the attic, so won't be actively
developed soon:
https://lists.apache.org/thread.html/rab2a820507f7c846e54a847398ab20f47698ec5bce0c8e182bfe51ba%40%3Cdev.mesos.apache.org%3E
That doesn't mean people will stop using it as a Spark resource manager
soon. But it
That does sound like it could be it - I checked our libmesos version and it
is 1.4.1. I'll try upgrading libmesos.
Thanks.
On Mon, Jul 23, 2018 at 12:13 PM Susan X. Huynh wrote:
Hi Nimi,
This sounds similar to a bug I have come across before. See:
https://jira.apache.org/jira/browse/SPARK-22342?focusedCommentId=16429950&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16429950
It turned out to be a bug in libmesos (the client library used to
I've come across an issue with Mesos 1.4.1 and Spark 2.2.1. We launch Spark
tasks using the MesosClusterDispatcher in cluster mode. On a couple of
occasions, we have noticed that when the Spark Driver crashes (due to
various causes: human error, network error), sometimes, when the Driver is
<mehdi.mezi...@ldmobile.net> wrote:
> We will be interested by the results if you give a try to Dynamic
> allocation with mesos!
>> a need for Fine grain mode after we enabled dynamic allocation
>> support on the coarse grain mode.
>>
>> What's the reason you're running fine grain mode?
----- Original Mail -----
From: "Michael Gummelt" <mgumm...@mesosphere.io>
To: "Sumit Chawla" <sumitkcha...@gmail.com>
Cc: u...@mesos.apache.org, d...@mesos.apache.org, "User"
<user@spark.apache.org>, d...@spark.apache.org
Sent: Monday, 19 December 2016, 22:42:55 GMT+01:00 Amsterdam / Berlin /
Berne / Rome / Stockholm / Vienna
Subject: Re: Mesos Spark Fine Grained Execution - CPU count
> Is this problem of idle executors sticking around solved in Dynamic
> Resource Allocation? Is there some timeout after which idle executors can
> just shut down and clean up their resources?
Yes, that's exactly what dynamic allocation does. But again I have no idea
what the state of dynamic
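For what it's worth, the idle timeout asked about does exist as a dynamic
allocation knob. A minimal configuration sketch (property names are from the
Spark docs; the values are illustrative assumptions, and on Mesos the
external shuffle service must also be running on each agent):

```
spark.dynamicAllocation.enabled              true
spark.shuffle.service.enabled                true
# executors are released after being idle this long (default 60s)
spark.dynamicAllocation.executorIdleTimeout  120s
```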
Great. Makes much better sense now. What would be the reason to set
spark.mesos.mesosExecutor.cores to more than 1, since this number doesn't
include the number of cores for tasks?
So in my case it seems like 30 CPUs are allocated to executors. And there
are 48 tasks, so 48 + 30 = 78 CPUs. And I am
> I should presume that the number of executors should be less than the
number of tasks.
No. Each executor runs 0 or more tasks.
Each executor consumes 1 CPU, and each task running on that executor
consumes another CPU. You can customize this via
spark.mesos.mesosExecutor.cores (
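That accounting can be sketched numerically (a hypothetical helper, assuming
the defaults discussed above: one core held per executor and one CPU per
running task):

```python
def fine_grained_cpus(num_executors, num_tasks,
                      mesos_executor_cores=1, cpus_per_task=1):
    """Total CPUs consumed in Mesos fine-grained mode: each executor holds
    `mesos_executor_cores` (spark.mesos.mesosExecutor.cores, default 1) for
    its whole lifetime, and each running task holds `cpus_per_task` more."""
    return num_executors * mesos_executor_cores + num_tasks * cpus_per_task

# The numbers from this thread: 30 executors' worth of CPUs plus 48 tasks
print(fine_grained_cpus(30, 48))  # → 78
```

Note that with zero running tasks the executors still hold their cores,
which is exactly the "idle executors sticking around" complaint above.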
Ah thanks. Looks like I skipped reading this: *"Neither will executors
terminate when they’re idle."*
So in my job scenario, I should presume that the number of executors should
be less than the number of tasks. Ideally one executor should execute 1 or
more tasks. But I am observing something strange
Hi Chawla,
One possible reason is that Mesos fine-grained mode also takes up a core
to run the executor on each host, so if you have 20 agents running
fine-grained executors, they will take up 20 cores while still running.
Tim
On Fri, Dec 16, 2016 at 8:41 AM, Chawla,Sumit
To: "Michael Gummelt" <mgumm...@mesosphere.io>
Cc: u...@mesos.apache.org, "Dev" <d...@mesos.apache.org>, "User"
<user@spark.apache.org>, "dev" <d...@spark.apache.org>
Sent: Monday, 19 December 2016, 19:35:51 GMT+01:00 Amsterdam / Berlin /
Berne / Rome / Stockholm / Vienna
But coarse-grained mode does the exact same thing that I am trying to avert
here. At the cost of lower startup latency, it keeps the resources reserved
for the entire duration of the job.
Regards
Sumit Chawla
On Mon, Dec 19, 2016 at 10:06 AM, Michael Gummelt
wrote:
Hi
I don't have a lot of experience with the fine-grained scheduler. It's
deprecated and fairly old now. CPUs should be relinquished as tasks
complete, so I'm not sure why you're seeing what you're seeing. There have
been a few discussions on the spark list regarding deprecating the
Hi
I am using Spark 1.6. I have one query about the fine-grained model in
Spark. I have a simple Spark application which transforms A -> B. It's a
single-stage application. To begin with, the program starts with 48
partitions. When the program starts running, the Mesos UI shows 48 tasks
and 48 CPUs
The setting
spark.mesos.executor.docker.portmaps
is interesting to me. Without this setting, the Docker executor uses
net=host and thus port mappings are not needed.
With this setting (and just adding some random mappings), my executors fail
with less than helpful messages.
I guess some
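For reference, a sketch of the format this setting is believed to take: a
comma-separated list of host_port:container_port mappings, optionally with a
protocol. The image name and port numbers below are arbitrary examples, and
the executor container must be running with a bridged network for the
mappings to apply:

```
spark.mesos.executor.docker.image     example/spark-executor:latest
spark.mesos.executor.docker.portmaps  8080:80,5353:53:udp
```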
Thanks, but something is not clear...
I have the Mesos cluster.
- I want to submit my application and schedule it with Chronos.
- For cluster mode I need a dispatcher; is this another container (machine
in the real world)? What will it do? Is it needed when I am using Chronos?
- How can I access my
When running Spark in Mesos cluster mode, the driver program runs in one of
the cluster nodes, like the other Spark processes that are spawned. You
won't need a special node for this purpose. I'm not very familiar with
Chronos, but its UI or the regular Mesos UI should show you where the
driver is
You can certainly start jobs without Chronos, but to automatically restart
finished jobs or to run jobs at specific times or periods, you'll want
something like Chronos.
dean
Dean Wampler, Ph.D.
Author: Programming Scala, 2nd Edition
http://shop.oreilly.com/product/0636920033073.do (O'Reilly)
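In case it helps, the two pieces above can be sketched as follows (the
ZooKeeper URL, host names, class and jar names are placeholders; the
dispatcher script and its default port 7077 are from the Spark docs):

```shell
# 1. Start the MesosClusterDispatcher once, somewhere reachable.
#    It is a small process, not necessarily a dedicated machine:
./sbin/start-mesos-dispatcher.sh --master mesos://zk://zk1:2181/mesos

# 2. Submit against the dispatcher in cluster mode; the driver then
#    runs on one of the cluster nodes:
./bin/spark-submit \
  --master mesos://dispatcher-host:7077 \
  --deploy-mode cluster \
  --class com.example.MyApp \
  my-app.jar
```

Chronos then only needs to invoke that spark-submit command on a schedule.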
Hi guys!
I'm new to Mesos. I have two Spark applications (one streaming and one
batch). I want to run both apps on a Mesos cluster. For testing I want to
run them in Docker containers, so I started a simple redjack/mesos-master,
but a lot of things are unclear to me (both Mesos and Spark-on-Mesos).
This page, http://spark.apache.org/docs/latest/running-on-mesos.html,
covers many of these questions. If you submit a job with the option
--supervise, it will be restarted if it fails.
You can use Chronos for scheduling. You can create a single streaming job
with a 10 minute batch interval, if
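As a sketch, the --supervise flag rides along on an ordinary cluster-mode
submission (host and jar names here are placeholders):

```shell
./bin/spark-submit \
  --master mesos://dispatcher-host:7077 \
  --deploy-mode cluster \
  --supervise \
  streaming-app.jar
```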
Can you share your exact spark-submit command line?
Also, cluster mode is not released yet (it's coming in 1.4) and doesn't
support spark-shell, so I think you're just using client mode unless you're
on the latest master.
Tim
On Tue, May 19, 2015 at 8:57 AM, Panagiotis Garefalakis panga...@gmail.com
Tim, thanks for your reply.
I am following this quite clear mesos-spark tutorial:
https://docs.mesosphere.com/tutorials/run-spark-on-mesos/
So mainly I tried running spark-shell which locally works fine but when the
jobs are submitted through mesos something goes wrong!
My question
Hello all,
I have been facing a weird issue for the last couple of days running Spark
on top of Mesos, and I need your help. I am running Mesos in a private
cluster and managed to successfully deploy HDFS, Cassandra, Marathon and
Play, but Spark is not working for some reason. I have tried so far:
different
Hi,
I am trying to figure out how to run Spark jobs on a Mesos cluster.
The Mesos cluster has some nodes with Tachyon installed, and I would like
the Spark jobs to be started on only those nodes. Each of these nodes has
been
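One possible approach in newer Spark versions, assuming the Tachyon nodes'
Mesos agents are tagged with an agent attribute such as tachyon:true (a
hypothetical attribute name you would set on each agent yourself), is
Spark's attribute-constraint setting, which makes Spark decline resource
offers from non-matching agents:

```
spark.mesos.constraints  tachyon:true
```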
http://apache-spark-user-list.1001560.n3.nabble.com/Recreating-the-Mesos-Spark-paper-s-experiments-tp22252.html
Hi all,
For my master thesis I will be characterising performance of two-level
schedulers like Mesos and after reading the paper:
https://www.cs.berkeley.edu/~alig/papers/mesos.pdf
where Spark is also introduced I am wondering how some experiments and results
came about.
If this is not the
disconnects immediately.
In our case, just setting the LIBPROCESS_IP variable as described below
resolved the issue.
https://github.com/airbnb/chronos/issues/193
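Concretely, that fix amounted to something like the following (the address
is an example; it must be an address for this host that is routable from
the Mesos master):

```shell
# Bind Mesos' libprocess to a known-good, externally routable address
# instead of whatever the container's hostname happens to resolve to:
export LIBPROCESS_IP=192.168.1.10
```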
Hi,
I have set up a cluster with Mesos (backed by Zookeeper) with three
master and three slave instances. I set up Spark (git HEAD) for use
with Mesos according to this manual:
http://people.apache.org/~pwendell/catalyst-docs/running-on-mesos.html
Using the spark-shell, I can connect to this
Hi Tobias,
Regarding my comment on closure serialization:
I was discussing it with my fellow Sparkers here and I totally overlooked
the fact that you need the class files to de-serialize the closures (or
whatever) on the workers, so you always need the jar file delivered to the
workers in order
Hi Tobias,
On Wed, May 21, 2014 at 5:45 PM, Tobias Pfeiffer t...@preferred.jp wrote:
first, thanks for your explanations regarding the jar files!
No prob :-)
On Thu, May 22, 2014 at 12:32 AM, Gerard Maas gerard.m...@gmail.com
wrote:
I was discussing it with my fellow Sparkers here and I
Here's the 1.0.0rc9 version of the docs:
https://people.apache.org/~pwendell/spark-1.0.0-rc9-docs/running-on-mesos.html
I refreshed them with the goal of steering users more towards prebuilt
packages than relying on compiling from source plus improving overall
formatting and clarity, but not
Hi Andrew,
Thanks for the current doc.
I'd almost gotten to the point where I thought that my custom code needed
to be included in the SPARK_EXECUTOR_URI but that can't possibly be
correct. The Spark workers that are launched on Mesos slaves should start
with the Spark core jars and then
Jacob D. Eisinger
IBM Emerging Technologies
jeis...@us.ibm.com - (512) 286-6075
From: Gerard Maas gerard.m...@gmail.com
To: user@spark.apache.org
Date: 05/16/2014 10:26 AM
Subject: Re: Local Dev Env with Mesos + Spark Streaming on Docker: Can't
submit jobs.
Hi Jacob
To: user@spark.apache.org
Sent: Tuesday, May 6, 2014 8:30:23 AM
Subject: Re: Local Dev Env with Mesos + Spark Streaming on Docker: Can't
submit jobs.
Howdy,
You might find the discussion Andrew and I have been having about Docker and
network security [1] applicable.
Also, I posted an answer [2
To: user@spark.apache.org
Date: 05/05/2014 04:18 PM
Subject:Re: Local Dev Env with Mesos + Spark Streaming on Docker: Can't
submit jobs.
Hi all,
I'm currently working on creating a set of Docker images to facilitate
local development with Spark/streaming on Mesos (+ ZK, HDFS, Kafka).
After solving the initial hurdles to get things working together in Docker
containers, now everything seems to start up correctly and the Mesos UI
Hi Benjamin,
Yes, we initially used a modified version of the AmpLabs docker scripts
[1]. The amplab docker images are a good starting point.
One of the biggest hurdles has been HDFS, which requires reverse-DNS and I
didn't want to go the dnsmasq route to keep the containers relatively
simple to