Hello Everyone,
I’m just trying out the spark-shell on Mesos and I don’t get any executors. To
debug it I started the vagrant box from Aurora and tried it out there, and I can
see the same issue as I’m getting on my cluster.
On Mesos the only active framework is the spark-shell; it is running 1.6.1
Hello,
I'm trying to mount a local Ceph volume into my Mesos container.
My CephFS is mounted on all agents at /ceph.
I'm using Spark 2.4 with Hadoop 3.1.1, and I'm not using Docker to deploy Spark.
The only option I could find to mount a volume, though, is the following (which
is also a line I added
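For reference, a hedged sketch of that option (image name and mount path are illustrative; note that `spark.mesos.executor.docker.volumes` is only honored when a Docker executor image is configured, which may be exactly the limitation being described):

```shell
spark-submit \
  --conf spark.mesos.executor.docker.image=myorg/spark:2.4.0 \
  --conf spark.mesos.executor.docker.volumes=/ceph:/ceph:rw \
  app.jar
```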
>>>> Is there any way to also specify the lower bound? It is a bit annoying
>>>>> that it seems we can’t control the resource usage of an application. By
>>>>> the way, we are not using dynamic allocation.
>>>>>
>>>>> - Thodoris
>
> "Using a failoverTimeout of 0 with Mesos native scheduler client can
> result in infinite subscribe loop" (
> https://issues.apache.org/jira/browse/MESOS-8171). It can be fixed by
> upgrading to a version of libmesos that has the fix.
>
> Susan
>
>
> On Fri, Jul 13, 2018 at 3:39 PM, Nimi W wrote:
I've come across an issue with Mesos 1.4.1 and Spark 2.2.1. We launch Spark
tasks using the MesosClusterDispatcher in cluster mode. On a couple of
occasions, we have noticed that when the Spark Driver crashes (to various
causes - human error, network error), sometimes, when the Driver is
restarted
>> we are not using dynamic allocation.
>>
>> - Thodoris
>>
>>
>> On 10 Jul 2018, at 14:35, Pavel Plotnikov
>> wrote:
>> Hello Thodoris!
>> Have you checked this:
>> - does the mesos cluster have available resources?
>> - does spark have tasks waiting in the queue for longer than the
>> spark.dynamicAllocation.schedulerBacklogTimeout configuration value?
>> - And then, have you checked that mesos sends offers to the spark app's
>> mesos framework with at least 10 cores and 2GB RAM?
>>
>> If mesos has no available offers with 10 cores but, for example, does have
>> offers with 8 or 9, you can use smaller executors to better fit the available
Thanks for your suggestion.
I have been checking out Spark-jobserver. Just an off-topic question about this
project: does the Apache Spark project have any support for, or connection to,
the Spark-jobserver project? I noticed that they do not have a release for the
newest version of Spark (e.g., 2.3.1).
Hello Thodoris!
Have you checked this:
- does the mesos cluster have available resources?
- does spark have tasks waiting in the queue for longer than the
spark.dynamicAllocation.schedulerBacklogTimeout configuration value?
- And then, have you checked that mesos sends offers to the spark app's
mesos framework with at least 10 cores and 2GB RAM?
Hello list,
We are running Apache Spark on a Mesos cluster and we see weird executor
behavior. When we submit an app with e.g. 10 cores and 2GB of memory per
executor and max cores 30, we expect to see 3 executors running on the cluster.
However, sometimes there are only 2... Spark applications
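A minimal sketch of the submission described above (master URL and jar are illustrative, not from the thread); whether 3 executors actually appear depends on Mesos making offers large enough for 10-core executors:

```shell
spark-submit \
  --master mesos://zk://zk1:2181/mesos \
  --conf spark.executor.cores=10 \
  --conf spark.executor.memory=2g \
  --conf spark.cores.max=30 \
  app.jar
```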
Essentially correct. The latency to start a Spark Job is nowhere close to
2-4 seconds under typical conditions. You appear to be creating a new Spark
Application every time instead of running multiple Jobs in one Application;
that is not going to lead to acceptable interactive or real-time
performance, nor is that an
I know there are some community efforts shown in Spark summits before,
mostly around reusing the same Spark context with multiple “jobs”.
I don’t think reducing Spark job startup time is a community priority afaik.
Tim
Dear Timothy,
It works like a charm now.
BTW (don't judge me if I am too greedy :-)), the latency to start a Spark job
is around 2-4 seconds, unless I am missing some awesome optimization in
Spark. Do you know if the Spark community is working on reducing this latency?
Best
is not much since the file is stored locally anyway.
The process that takes more time is the extraction.
Finally, since Mesos makes a new folder for extracting the Spark binary each
time a new Spark job runs, the disk usage increases gradually.
Therefore, our expectation is to have Spark running on Mesos without this
binary extraction, as well as without storing the same binary every time a
new Spark job runs.
Does that make sense to you? And do
the same remote
binary as well.
Tim
On Fri, Jul 6, 2018 at 5:00 PM Tien Dat wrote:
Dear all,
We are running Spark with Mesos as the master for resource management.
In our cluster, there are jobs that require very short response times (near
real-time applications), usually around 3-5 seconds.
In order for Spark to execute with Mesos, one has to specify
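Presumably this refers to pointing executors at a Spark binary package, which every Mesos agent fetches and extracts for each new driver. A hedged sketch (the HDFS URL is illustrative):

```shell
spark-submit \
  --master mesos://mesos-master:5050 \
  --conf spark.executor.uri=hdfs://namenode/dist/spark-2.x-bin-hadoop2.7.tgz \
  app.jar
```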
How to fix this issue (if possible)?
Versions: Mesos 1.2.0, Spark 2.0.1, HDFS 2.7
For more information, see the Stack Overflow issue here
<https://stackoverflow.com/questions/44703631/spark-streaming-cluster-mode-in-mesos-java-lang-runtimeexception-stream-jar-n>.
Thanks,
RCinna
I have been trying to learn Spark on Mesos, but the spark-shell just keeps
ignoring the offers. Here is my setup:
All the components are in the same subnet.
- 1 mesos master on an EC2 instance (t2.micro)
command: `mesos-master --work_dir=/tmp/abc --hostname=`
- 2 mesos agents (each with 4
Have you run with debug logging? There are some hints in the debug logs:
https://github.com/apache/spark/blob/branch-2.1/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala#L316
On Mon, Apr 24, 2017 at 4:53 AM, Pavel Plotnikov <
pavel.plo
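One way to surface those hints (my assumption, not stated in the thread) is to raise the log level for the Mesos scheduler backend package in Spark's log4j.properties:

```shell
# Run from a Spark distribution/checkout root: creates conf/log4j.properties
# if needed and enables DEBUG for the Mesos scheduler backend classes.
mkdir -p conf
cat >> conf/log4j.properties <<'EOF'
log4j.logger.org.apache.spark.scheduler.cluster.mesos=DEBUG
EOF
```

With this in place, the reasons for declined offers linked above should appear in the driver log.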
Hi, everyone! I run spark 2.1.0 jobs on top of a Mesos cluster in
coarse-grained mode with dynamic resource allocation. And sometimes the spark
mesos scheduler declines mesos offers even though not all available
resources are used (I have fewer workers than the possible
maximum
> at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
> at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
> at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
> at java.lang.Thread.run(Thread.java:745)
I was trying to follow instructions here:
https://github.com/apache/spark/pull/15120
So in my Marathon json I'm defining the ports to use for the spark driver,
spark ui and block manager.
Can anyone help me get this running in bridge networking mode?
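A hedged sketch of the driver-side properties involved (port numbers are illustrative; `spark.driver.bindAddress` is what the PR linked above introduced for bridge networking, and `$HOST` assumes a Marathon-provided environment variable):

```shell
spark-submit \
  --conf spark.driver.port=11000 \
  --conf spark.driver.blockManager.port=11001 \
  --conf spark.ui.port=11002 \
  --conf spark.driver.bindAddress=0.0.0.0 \
  --conf spark.driver.host=$HOST \
  app.jar
```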
Sun, are you using marathon to run the shuffle service?
On Tue, Feb 7, 2017 at 7:36 PM, Sun Rui <sunrise_...@163.com> wrote:
> Yi Jan,
>
> We have been using Spark on Mesos with dynamic allocation enabled, which
> works and improves the overall cluster utilization.
>
>
Yi Jan,
We have been using Spark on Mesos with dynamic allocation enabled, which works
and improves overall cluster utilization.
In terms of jobs, do you mean jobs inside a Spark application or jobs among
different applications? Maybe you can read
http://spark.apache.org/docs/latest/job
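For context, dynamic allocation on Mesos also requires the external shuffle service on every agent, which is where the Marathon question above typically comes in. A hedged sketch (paths illustrative; `start-mesos-shuffle-service.sh` ships in Spark's sbin/):

```shell
# On each Mesos agent, e.g. kept alive as a Marathon app:
"$SPARK_HOME"/sbin/start-mesos-shuffle-service.sh

# On the application side:
spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  app.jar
```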
>> scale up to spark.cores.max or however
>> many cpu cores are available on the cluster, and this may be undesirable
>> because the number of executors in rdd.parallelize(collection, # of
>> partitions) is being overridden
>>
>> On Thu, Feb 2, 2017 at 1:30 PM, Michael Gummelt <
As of Spark 2.0, Mesos mode does support setting cores on the executor
level, but you might need to set the property directly (--conf
spark.executor.cores=). I've written about this here:
https://docs.mesosphere.com/1.8/usage/service-guides/spark/job-scheduling/.
That doc is for DC/OS
I have done an experiment on this today. It shows that only CPUs are
tolerant of an insufficient cluster size when a job starts. On my cluster, I
have 180GB of memory and 64 cores. When I run spark-submit (on mesos)
with --cpu_cores set to 1000, the job starts up with 64 cores, but when I
set --memory to 200GB, the job fails to start with "Initial job has not
accepted any resources; check your cluster UI"
> Tasks begin scheduling as soon as the first executor comes up
Thanks all for the clarification. Is this the default behavior of Spark on
Mesos today? I think this is what we are looking for, because sometimes a
job can take up lots of resources and later jobs could not get all the
resources they need
"Launch each executor with at least 1GB RAM,
but if mesos offers 2GB at some moment, then launch an executor with 2GB
RAM".
I wonder what the benefit of that is? To reduce "resource fragmentation"?
Anyway, that is not supported at the moment. In all the supported cluster
managers of Spark
only allocating as many
executors as a job needs, rather than a single static amount set up front.
Dynamic Allocation is supported in Spark on Mesos, but we here at
Mesosphere haven't been testing it much, and I'm not sure what the
community adoption is. So I can't yet speak to its robustness, but we will
be investing in it soon. Many users want
Dear Spark Users,
Currently, is there a way to dynamically allocate resources to Spark on
Mesos? Within Spark we can specify the CPU cores and memory before running a
job. The way I understand it is that the Spark job will not run if the
CPU/memory requirement is not met. This may lead to a decrease in overall
It seems like it is issuing offer decline calls, which suggests it is
receiving the offer calls and is able to reply.
Can you turn on TRACE logging in Spark with the Mesos coarse-grained
scheduler and see if it says it is processing the offers?
Tim
On Fri, Dec 30, 2016 at 2:35 PM, Ji Yan <
Hi Ji,
One way to fix it is to set the LIBPROCESS_PORT environment variable on the
executor when it is launched.
Tim
Dear Spark Users,
We are trying to launch Spark on Mesos from within a docker container. We
have found that since the Spark executors need to talk back to the Spark
driver, there is a need to do a lot of port mapping to make that happen. We
seem to have mapped the ports based on what we could find from
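Building on Tim's LIBPROCESS_PORT suggestion, a hedged sketch of launching such a driver container (ports, IPs, and image name are illustrative; using LIBPROCESS_ADVERTISE_IP and SPARK_LOCAL_IP this way is my assumption, not confirmed in the thread):

```shell
docker run \
  -e LIBPROCESS_PORT=9000 \
  -e LIBPROCESS_ADVERTISE_IP=203.0.113.10 \
  -e SPARK_LOCAL_IP=203.0.113.10 \
  -p 9000:9000 \
  myorg/spark-driver:latest
```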
those logs here as well.
Hi Everyone,
There is probably an obvious answer to this, but not sure what it is. :)
I am attempting to launch 2..n spark shells using Mesos as the master (this
is to support 1..n researchers running pyspark stuff on our data). I can
launch two or more spark shells without any problem
Hi Guys,
Two questions about running spark on mesos.
1. Does the spark configuration of conf/slaves still work when running spark on
mesos?
According to my observations, it seemed that conf/slaves still took effect
when running spark-shell.
However, it doesn't take effect when deploying
It doesn't look like we are. Can you file a JIRA? A workaround is to set
spark.mesos.executor.memoryOverhead to at least spark.memory.offHeap.size.
This is how the container is sized:
https://github.com/apache/spark/blob/master/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos
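A hedged sketch of that workaround (sizes illustrative; note that `spark.mesos.executor.memoryOverhead` is specified in MB):

```shell
spark-submit \
  --conf spark.executor.memory=3g \
  --conf spark.memory.offHeap.enabled=true \
  --conf spark.memory.offHeap.size=1g \
  --conf spark.mesos.executor.memoryOverhead=1024 \
  app.jar
```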
Hi,
I am trying to understand how mesos allocates memory when off-heap is enabled,
but it seems that the framework only takes the heap + 400 MB overhead
into consideration for resource allocation.
Example: spark.executor.memory=3g, spark.memory.offHeap.size=1g ==> mesos
reports 3.4g allocated for
from SPARK_EXECUTOR_OPTS, then I would see something wrong.
On Tue, Aug 9, 2016 at 10:13 AM, Jim Carroll <jimfcarr...@gmail.com> wrote:
I'm running spark 2.0.0 on Mesos using spark.mesos.executor.docker.image to
point to a docker container that I built with the Spark installation.
Everything is working except the Spark client process that's started inside
the container doesn't get any of my parameters I set in the spark config
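A hedged sketch of the setup being described (image name and the extra option are illustrative, not the poster's actual configuration):

```shell
spark-submit \
  --conf spark.mesos.executor.docker.image=myorg/spark:2.0.0 \
  --conf spark.executor.extraJavaOptions="-Dmy.setting=value" \
  app.jar
```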
Hi,
I'm launching spark application on mesos cluster.
The namespace of the metric includes the framework id for driver metrics,
and both framework id and executor id for executor metrics.
These ids are obviously assigned by mesos, and they are not permanent -
re-registering the application would
But there is one
important thing before making the decision: data locality.
If we run spark on mesos, can it achieve good data locality when processing
HDFS data? I think spark on yarn can achieve that out of the box, but I'm not
sure whether spark on mesos can do that.
I've searched through the archive
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
> The remoteMachineHost that throws the error has write access to the
> specific folder.
> Any thoughts?
Hi,
I am trying to run a spark job on mesos in cluster mode using the following
command:
./bin/spark-submit --deploy-mode cluster --master mesos://172.17.0.1:7077
--jars http://172.17.0.2:18630/mesos/extraJars.jar --class MyClass
http://172.17.0.2:18630/mesos/foo.jar
The application jar
Resending since user@mesos bounced earlier. My apologies.
On Thu, Oct 15, 2015 at 12:19 PM, Bharath Ravi Kumar <reachb...@gmail.com>
wrote:
> (Reviving this thread since I ran into similar issues...)
>
> I'm running two spark jobs (in mesos fine grained mode), each belonging t
(Reviving this thread since I ran into similar issues...)
I'm running two spark jobs (in mesos fine grained mode), each belonging to
a different mesos role, say low and high. The low:high mesos weights are
1:10. On expected lines, I see that the low priority job occupies cluster
resources
> One thing I don't get is why it is trying to take all 3GB at startup?
> That seems excessive. So if I want to run a job that only needs 512MB, do I
> need to have 3GB free at all times? That doesn't make sense.
>
> We are using spark's native mesos support. On spark submit we use:
Hi guys,
Here is the info for Ceph: http://ceph.com/
We are investigating and using Ceph for distributed storage and monitoring,
and are specifically interested in using Ceph as the underlying file system
storage for Spark. However, we have no experience achieving that. Has anybody
seen such
Do you have specific reasons to use Ceph? I used Ceph before; I'm not too
in love with it, especially when I was using the Ceph Object Gateway S3 API.
There are some incompatibilities with the aws s3 api. You really really need to
try it before making the commitment. Did you manage to install it?
I'm using spark 1.2.2 on mesos 0.21.
I have a java job that is submitted to mesos from marathon.
I also have cgroups configured for mesos on each node. Even though the job,
when running, uses 512MB, it tries to take over 3GB at startup and is killed
by cgroups.
When I start mesos-slave, it's
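One plausible explanation (my assumption, not confirmed in the thread): in coarse-grained mode the executor container is sized up front from the configured executor memory plus JVM overhead, regardless of actual usage, so the cgroup limit reflects the request rather than the workload. A hedged sketch of sizing the request down (values illustrative; the overhead property is in MB):

```shell
spark-submit \
  --conf spark.executor.memory=512m \
  --conf spark.mesos.executor.memoryOverhead=384 \
  app.jar
```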
to
include in the distribution, right?
I thought of using the Docker Mesos integration, but I have been unable to
find information on this (see my other question on Docker/Mesos/Spark).
Any other thoughts on the best way to include packages in Spark WITHOUT
installing on each node would be appreciated!
>>> instead of just mesos/docker (since I'm fully aware that docker
>>> doesn't rule the world yet).
>>>
>>> So I can see this from both perspectives now, and passing in the properties
>>> file will probably work just fine for me, but for my better understanding:
>>> when the executor starts, will it read any of the environm
>
>>> Adding this info to the docs would be great. Is the appropriate action
>>> to create an issue regarding improvement of the docs? For those of us who
>>> are gaining the experience having such a pointer is very helpful.
>>>
>>> Tom
>>>
>>
>> Tom
>>
>> From: Tim Chen <t...@mesosphere.io>
>> Date: Thursday, September 10, 2015 at 10:25 AM
>> To: Tom Waterhouse <tomwa...@cisco.com>
>> Cc: "user@spark.apache.org" <user@spark.apache.org>
>> Subject: Re: Spark on Mes
the properties given to it by the
dispatcher and nothing more?
Lemme know if anything needs more clarification and thanks for your mesos
contribution to spark!
- Alan
On Thu, Sep 17, 2015 at 5:03 PM, Timothy Chen <t...@mesosphere.io> wrote:
> Hi Alan,
>
> If I understand correctly,
> Cc: "user@spark.apache.org" <user@spark.apache.org>
> Subject: Re: Spark on Mesos with Jobs in Cluster Mode Documentation
>
> Hi Tom,
>
> Sorry the documentation isn't really rich, since it's probably assuming
> users understand how Mesos and frameworks
>>> fine-grained(default)). Have you gone through this documentation already?
>>> http://spark.apache.org/docs/latest/running-on-mesos.html#using-a-mesos-master-url
>>>
>>> Thanks
>>> Best Regards
>>>
>>> On Tue, Sep 8, 2015 at 12:54 PM, canan chen
services with Marathon, and you can use
Marathon to launch the Spark dispatcher.
Then all clients, instead of specifying the Mesos master URL (e.g.
mesos://mesos.master:2181), just talk to the dispatcher
(mesos://spark-dispatcher.mesos:7077), and the dispatcher will then start
and watch
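A hedged sketch of that setup (hostnames illustrative; `start-mesos-dispatcher.sh` ships in Spark's sbin/):

```shell
# Launched once, e.g. kept alive as a Marathon app:
"$SPARK_HOME"/sbin/start-mesos-dispatcher.sh \
  --master mesos://mesos.master:5050

# Clients then submit against the dispatcher, not the Mesos master:
spark-submit --deploy-mode cluster \
  --master mesos://spark-dispatcher.mesos:7077 \
  app.jar
```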
figure the system. As it runs, there is one instance
of the Spark Mesos dispatcher running outside of Mesos, so it is not part of
the sphere of Mesos resource management.
I used the following Stack Overflow posts as guidelines:
http://stackoverflow.com/questions/31164725/spark-mesos-dispatche
Hi all,
I try to run spark on mesos, but it looks like I cannot allocate resources
from mesos. I am not an expert on mesos, but from the mesos log, it seems
spark always declines the offers from mesos. Not sure what's wrong; maybe I
need some configuration change. Here's the mesos master log:
I0908 15
is that, as far as I understand, I need this file in the root
directory of the executor dir, and I can't find a way to make the spark executor
pull this file (not without changing spark code).
Am I missing something?
It seems that spark does support mesos+docker, so I wonder what other people
with this setup
Hello *,
We are trying to build some batch jobs using Spark on Mesos. Mesos offers
two main modes of deploying a Spark job:
1. Fine-grained
2. Coarse-grained
When we run spark jobs in fine-grained mode, spark uses the max
amount of offers from Mesos to run the job
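The mode is selected per application via `spark.mesos.coarse` (fine-grained was the default in the Spark versions discussed here, and was deprecated later). A minimal sketch, with the master URL illustrative:

```shell
# fine-grained: task-level scheduling, offers consumed as tasks need them
spark-submit --master mesos://mesos-master:5050 \
  --conf spark.mesos.coarse=false \
  app.jar

# coarse-grained: long-lived executors hold their resources for the app's lifetime
spark-submit --master mesos://mesos-master:5050 \
  --conf spark.mesos.coarse=true \
  app.jar
```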
...@gmail.com wrote:
Hi all,
I am running Spark 1.4.1 on mesos 0.23.0.
While I am able to start spark-shell on the node with mesos-master running
and it works fine, when I try to start spark-shell on mesos-slave nodes I
encounter this error. I greatly appreciate any help.
15/07/27 22:14