Wish for 1.4: upper bound on # tasks in Mesos

2015-05-19 Thread Thomas Dudziak
I read the other day that there will be a fair number of improvements in
1.4 for Mesos. Could I ask for one more (if it isn't already in there): a
configurable limit on the number of tasks for jobs run on Mesos? This
would be a very simple yet effective way to prevent a job from dominating
the cluster.

cheers,
Tom


Re: Wish for 1.4: upper bound on # tasks in Mesos

2015-08-10 Thread Haripriya Ayyalasomayajula
Hello all,

As a quick follow-up: I have been using Spark on YARN until now and am
currently exploring Mesos and Marathon. With YARN we could tell the Spark
job the number of executors and the number of cores per executor; is there
a way to do this on Mesos? I'm using Spark 1.4.1 on Mesos 0.23.0 and
Marathon 0.9. When we launch a Marathon app, is there a way to tell it the
maximum number of cores per executor (which comes down to the maximum
number of tasks per executor, for each instance of the app)? Please
correct me if I am wrong.

- I have browsed various documentation but did not come across a direct
solution.

- Is there a JIRA issue already in progress, or a bug fix for this?

I greatly appreciate any help and would love to follow up or investigate
further if you can point me to an existing JIRA issue or give any other
pointers.
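For reference, the knobs being compared here can be written out as the properties one would pass via spark-submit --conf. This is a hedged sketch, not a definitive answer to the question: all values are illustrative placeholders, and the helper function to_submit_args is invented here purely for demonstration.

```python
# Sketch of the resource knobs discussed in this thread, assuming the
# property names documented for Spark 1.x. Values are placeholders.

# On YARN, executor count and cores per executor are direct settings:
yarn_conf = {
    "spark.executor.instances": "4",  # number of executors
    "spark.executor.cores": "2",      # cores per executor
}

# On Mesos (coarse-grained mode), the closest knob is a total-core cap
# across the whole application:
mesos_conf = {
    "spark.mesos.coarse": "true",  # use the coarse-grained scheduler
    "spark.cores.max": "8",        # upper bound on total cores the app grabs
}

def to_submit_args(conf):
    """Render a property map as spark-submit --conf arguments."""
    return " ".join("--conf {}={}".format(k, v) for k, v in sorted(conf.items()))

print(to_submit_args(mesos_conf))
# → --conf spark.cores.max=8 --conf spark.mesos.coarse=true
```

Note there is no direct Mesos equivalent of a per-executor core/task count in this sketch; that missing knob is essentially what the question above is asking about.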




-- 
Regards,
Haripriya Ayyalasomayajula


Re: Wish for 1.4: upper bound on # tasks in Mesos

2015-08-11 Thread Rick Moritz
Consider the spark.cores.max configuration option -- it should do what you
require.
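A minimal sketch of applying that cap, rendered as spark-defaults.conf lines (note the key is spark.cores.max, as in Matei's messages elsewhere in this thread). No cluster is needed to run the snippet; the master URL is an illustrative placeholder.

```python
# Render the core cap as spark-defaults.conf entries. Assumes Spark 1.x
# property names; the Mesos master URL below is a placeholder.
props = {
    "spark.master": "mesos://zk://zk1:2181/mesos",
    "spark.cores.max": "16",  # upper bound on total cores the app grabs
}

conf_lines = "\n".join("{}  {}".format(k, v) for k, v in sorted(props.items()))
print(conf_lines)
```

This cap applies to the coarse-grained scheduler; whether it bounds a fine-grained job's concurrent tasks the same way is the open question of the thread.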



Re: Wish for 1.4: upper bound on # tasks in Mesos

2015-05-19 Thread Matei Zaharia
Hey Tom,

Are you using the fine-grained or coarse-grained scheduler? For the 
coarse-grained scheduler, there is a spark.cores.max config setting that will 
limit the total # of cores it grabs. This was there in earlier versions too.

Matei



-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Wish for 1.4: upper bound on # tasks in Mesos

2015-05-19 Thread Thomas Dudziak
I'm using fine-grained for a multi-tenant environment which is why I would
welcome the limit of tasks per job :)

cheers,
Tom



Re: Wish for 1.4: upper bound on # tasks in Mesos

2015-05-19 Thread Matei Zaharia
Yeah, this definitely seems useful there. There might also be some ways to cap 
the application in Mesos, but I'm not sure.

Matei




Re: Wish for 1.4: upper bound on # tasks in Mesos

2015-05-20 Thread Nicholas Chammas
To put this on the devs' radar, I suggest creating a JIRA for it (and
checking first if one already exists).

issues.apache.org/jira/

Nick
