You might just want the CapacityScheduler, which has the feature(s) you
want...
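
For example, something along these lines in capacity-scheduler.xml (untested
sketch; I'm going from memory on the MR1 property names, so please check them
against your release):

  <property>
    <name>mapred.capacity-scheduler.queue.default.capacity</name>
    <value>100</value>
  </property>
  <property>
    <!-- With 25, each of up to four users with pending demand is limited to
         roughly a quarter of the queue; a single user with no competition
         can still use the whole queue. -->
    <name>mapred.capacity-scheduler.queue.default.minimum-user-limit-percent</name>
    <value>25</value>
  </property>

That gives you a per-user limit within a queue without creating a pool per user.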

On Oct 18, 2012, at 5:03 PM, lohit wrote:

> In a big cluster with hundreds of users, not all of them would submit
> their jobs to specific queues, and creating hundreds of pools is not very
> easy. Having something like userMaxMaps would keep individual users from
> consuming all resources.
> We could create a small set of queues with more weight, but the JobTracker
> would still waste time preempting the tasks of users who have consumed all
> the resources.
> It would have been nice to have such a config.
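> For comparison, pools do get hard caps and weights in the allocation file,
> something like this (sketch based on the fair scheduler docs; the pool name
> is just a placeholder and exact element names may differ by version):
> 
>   <allocations>
>     <pool name="adhoc">
>       <weight>2.0</weight>
>       <!-- hard cap on concurrent map/reduce tasks for the whole pool -->
>       <maxMaps>50</maxMaps>
>       <maxReduces>25</maxReduces>
>     </pool>
>   </allocations>
> 
> but there is no per-user equivalent of maxMaps/maxReduces.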
> 
> 2012/10/18 Sandy Ryza <sandy.r...@cloudera.com>
> 
>> You're correct that there's no way to put a hard limit on the number of
>> maps or reduces for a given user, and a user can potentially consume all of
>> the cluster resources.  However, if there are multiple users contending for
>> resources, the scheduler makes an effort to schedule tasks equally, so it
>> would be unlikely for a single user to get that much of the cluster.
>> 
>> Can I ask what you need userMaxMaps/userMaxReducers for?
>> 
>> On Thu, Oct 18, 2012 at 4:41 PM, lohit <lohit.vijayar...@gmail.com> wrote:
>> 
>>> I am trying to understand the FairScheduler configs and to see if there
>>> is a way to achieve the following.
>>> I see that if there are no pools configured (or only a few pools are
>>> configured) and a user submits a job, it would end up in his own pool,
>>> right?
>>> Now, I see there are some limits you can set globally for such users, for
>>> example userMaxJobsDefault.
>>> Is there a way to set userMaxMaps or userMaxReducers? It looks like if I
>>> have a few pools configured, a user who submits a job without specifying
>>> a pool will be given his own pool, and he can potentially consume 100% of
>>> the Map/Reduce slots. Is my understanding correct?
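>>> For reference, the per-user knobs I could find in the allocation file look
>>> like this (sketch based on the fair scheduler docs; "someuser" is just a
>>> placeholder and element names may differ by version):
>>> 
>>>   <allocations>
>>>     <!-- default cap on concurrent jobs per user -->
>>>     <userMaxJobsDefault>3</userMaxJobsDefault>
>>>     <!-- override for a specific user -->
>>>     <user name="someuser">
>>>       <maxRunningJobs>6</maxRunningJobs>
>>>     </user>
>>>   </allocations>
>>> 
>>> i.e. they cap concurrent jobs per user, but not map/reduce slots per user.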
>>> 
>>> --
>>> Have a Nice Day!
>>> Lohit
>>> 
>> 
> 
> 
> 
> -- 
> Have a Nice Day!
> Lohit

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/

