Re: Wish for 1.4: upper bound on # tasks in Mesos

2015-08-11 Thread Rick Moritz
Consider the spark.cores.max configuration option -- it should do what you require.

On Tue, Aug 11, 2015 at 8:26 AM, Haripriya Ayyalasomayajula <aharipriy...@gmail.com> wrote:
> Hello all,
>
> As a quick follow up for this, I have been using Spark on YARN till now
> and am currently exploring Mesos and Marathon.
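
A minimal sketch of how spark.cores.max is typically set for a coarse-grained Mesos deployment; the Mesos master URL and the cap of 16 cores are placeholder values, not taken from the thread:

import org.apache.spark.{SparkConf, SparkContext}

// Cap the total number of cores this application may acquire across the
// cluster; the Mesos master URL and the value "16" are placeholders.
val conf = new SparkConf()
  .setAppName("capped-app")
  .setMaster("mesos://zk://zk1:2181/mesos")
  .set("spark.cores.max", "16")

val sc = new SparkContext(conf)

With this in place the coarse-grained backend stops acquiring cores from Mesos offers once it holds 16, which is the cap being referred to here.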

Re: Wish for 1.4: upper bound on # tasks in Mesos

2015-08-10 Thread Haripriya Ayyalasomayajula
Hello all,

As a quick follow up for this, I have been using Spark on YARN till now and am currently exploring Mesos and Marathon. Using YARN, we could tell the Spark job the number of executors and the number of cores as well; is there a way to do this on Mesos? I'm using Spark 1.4.1 on Mesos 0.23.
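
For reference, a sketch of the YARN-side knobs the question mentions next to the closest Mesos-side setting; the property names are the standard Spark ones, the values are placeholders, and exact behaviour depends on the Spark and Mesos versions in use:

import org.apache.spark.SparkConf

// YARN: executor count and cores per executor are set explicitly
// (equivalent to --num-executors / --executor-cores on spark-submit).
val yarnConf = new SparkConf()
  .set("spark.executor.instances", "10")
  .set("spark.executor.cores", "4")

// Mesos (coarse-grained): the usual knob is a cap on total cores rather
// than a fixed executor count, as noted in the reply above.
val mesosConf = new SparkConf()
  .set("spark.cores.max", "40")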

Re: Wish for 1.4: upper bound on # tasks in Mesos

2015-05-20 Thread Nicholas Chammas
To put this on the devs' radar, I suggest creating a JIRA for it (and checking first if one already exists): issues.apache.org/jira/

Nick

On Tue, May 19, 2015 at 1:34 PM Matei Zaharia wrote:
> Yeah, this definitely seems useful there. There might also be some ways to
> cap the application in Mesos, but I'm not sure.

Re: Wish for 1.4: upper bound on # tasks in Mesos

2015-05-19 Thread Matei Zaharia
Yeah, this definitely seems useful there. There might also be some ways to cap the application in Mesos, but I'm not sure.

Matei

> On May 19, 2015, at 1:11 PM, Thomas Dudziak wrote:
>
> I'm using fine-grained for a multi-tenant environment, which is why I would
> welcome the limit of tasks per job :)

Re: Wish for 1.4: upper bound on # tasks in Mesos

2015-05-19 Thread Thomas Dudziak
I'm using fine-grained for a multi-tenant environment, which is why I would welcome the limit of tasks per job :)

cheers,
Tom

On Tue, May 19, 2015 at 10:05 AM, Matei Zaharia wrote:
> Hey Tom,
>
> Are you using the fine-grained or coarse-grained scheduler? For the
> coarse-grained scheduler, there is a spark.cores.max config setting
> that will limit the total # of cores it grabs.
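
As background, the scheduler mode mentioned here is selected with the spark.mesos.coarse property; a brief sketch, assuming Spark 1.x where fine-grained mode was still available:

import org.apache.spark.SparkConf

// Fine-grained mode: each Spark task runs as its own Mesos task, so cores
// are returned to Mesos as tasks finish -- friendlier for multi-tenancy.
val fineGrained = new SparkConf()
  .set("spark.mesos.coarse", "false")

// Coarse-grained mode: executors keep their cores for the lifetime of the
// job; spark.cores.max then bounds how many cores they may hold in total.
val coarseGrained = new SparkConf()
  .set("spark.mesos.coarse", "true")
  .set("spark.cores.max", "16")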

Re: Wish for 1.4: upper bound on # tasks in Mesos

2015-05-19 Thread Matei Zaharia
Hey Tom,

Are you using the fine-grained or coarse-grained scheduler? For the coarse-grained scheduler, there is a spark.cores.max config setting that will limit the total # of cores it grabs. This was there in earlier versions too.

Matei

> On May 19, 2015, at 12:39 PM, Thomas Dudziak wrote:

Wish for 1.4: upper bound on # tasks in Mesos

2015-05-19 Thread Thomas Dudziak
I read the other day that there will be a fair number of improvements in 1.4 for Mesos. Could I ask for one more (if it isn't already in there): a configurable limit for the number of tasks for jobs run on Mesos? This would be a very simple yet effective way to prevent a job from dominating the cluster.