Hello all,
As a quick follow-up to this: I have been using Spark on YARN until now and am currently exploring Mesos and Marathon. On YARN we could tell the Spark job the number of executors and the number of cores to use; is there a way to do that on Mesos? I'm using Spark 1.4.1 on Mesos.
Consider the spark.cores.max configuration option -- it should do what you
require.
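For example, here is a minimal sketch of setting it when building the SparkConf (the app name, master URL, and numbers are placeholders, not from this thread):

    import org.apache.spark.{SparkConf, SparkContext}

    // spark.cores.max caps the total number of cores the application will
    // acquire across the cluster. On Mesos it is the closest counterpart to
    // YARN's executor/core settings, except that it bounds the overall total
    // rather than the per-executor count.
    val conf = new SparkConf()
      .setAppName("my-app")                    // placeholder app name
      .setMaster("mesos://mesos-master:5050")  // placeholder Mesos master URL
      .set("spark.cores.max", "16")            // at most 16 cores in total
      .set("spark.executor.memory", "4g")      // memory per executor

    val sc = new SparkContext(conf)

The same setting can also be passed on the command line, e.g. spark-submit --conf spark.cores.max=16.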
To put this on the devs' radar, I suggest creating a JIRA for it (and
checking first if one already exists).
issues.apache.org/jira/
Nick
Hey Tom,
Are you using the fine-grained or coarse-grained scheduler? For the
coarse-grained scheduler, there is a spark.cores.max config setting that will
limit the total # of cores it grabs. This was there in earlier versions too.
Matei
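To make that concrete, a minimal sketch of a coarse-grained setup (the master URL and numbers are invented for illustration):

    import org.apache.spark.{SparkConf, SparkContext}

    // Coarse-grained mode: Spark holds long-running executors for the
    // lifetime of the job, and spark.cores.max bounds the total number of
    // cores it will grab from Mesos.
    val conf = new SparkConf()
      .setAppName("capped-app")                // placeholder name
      .setMaster("mesos://mesos-master:5050")  // placeholder Mesos master URL
      .set("spark.mesos.coarse", "true")       // select the coarse-grained scheduler
      .set("spark.cores.max", "8")             // never hold more than 8 cores total

    val sc = new SparkContext(conf)

In fine-grained mode (the default in this Spark version) each Spark task maps to a Mesos task, so cores are acquired and released per task rather than capped up front.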
On May 19, 2015, at 12:39 PM, Thomas Dudziak wrote:
I'm using fine-grained for a multi-tenant environment which is why I would
welcome the limit of tasks per job :)
cheers,
Tom
I read the other day that there will be a fair number of improvements in
1.4 for Mesos. Could I ask for one more (if it isn't already in there): a
configurable limit for the number of tasks for jobs run on Mesos? This
would be a very simple yet effective way to prevent a job from dominating the cluster.
Yeah, this definitely seems useful there. There might also be some ways to cap
the application in Mesos, but I'm not sure.
Matei