Hi Kumar and Ravi-
Thanks for your quick responses. I don't believe we've changed anything from
the defaults, but it's possible since we are using a distribution. I will
check.

Thanks!

Brad


On Fri, Aug 30, 2013 at 1:46 PM, kumar y <ykk1...@gmail.com> wrote:

>
>  You can find that by looking at your mapred-site.xml. Look for this
> property:
>
> <name>mapred.jobtracker.taskScheduler</name>
>
> You should be using either the default FIFO scheduler,
> <value>org.apache.hadoop.mapred.FairScheduler</value>, or
> <value>org.apache.hadoop.mapred.CapacityTaskScheduler</value>.
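>
> For example, a complete entry selecting the Fair Scheduler would look
> roughly like this in mapred-site.xml (illustrative sketch; your
> distribution may manage this file for you):
>
> <property>
>   <name>mapred.jobtracker.taskScheduler</name>
>   <value>org.apache.hadoop.mapred.FairScheduler</value>
> </property>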
>
> Depending on which scheduler you are using, please read through the
> following links to understand how the scheduler works (allocation,
> preemption, etc.):
>
>
> http://hadoop.apache.org/docs/stable/capacity_scheduler.html
> http://hadoop.apache.org/docs/stable/fair_scheduler.html
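>
> If it turns out the Capacity Scheduler is the one configured, the per-user
> split you are seeing is typically controlled by the per-queue user limit in
> capacity-scheduler.xml. A rough sketch (the queue name and values here are
> illustrative, not taken from your cluster):
>
> <property>
>   <name>mapred.capacity-scheduler.queue.default.capacity</name>
>   <value>100</value>
> </property>
> <property>
>   <name>mapred.capacity-scheduler.queue.default.minimum-user-limit-percent</name>
>   <value>25</value>
> </property>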
>
>
>
>>
>> ---------- Forwarded message ----------
>> From: Ravi Kiran <maghamraviki...@gmail.com>
>> Date: Fri, Aug 30, 2013 at 10:30 AM
>> Subject: Re: Mappers per job per user
>> To: user@hive.apache.org
>>
>>
>> Hi Brad,
>>
>>   I believe you have configured a Capacity Scheduler for scheduling the
>> jobs rather than the default FIFO scheduler.
>>
>> Regards
>> Ravi Magham
>>
>>
>> On Fri, Aug 30, 2013 at 10:07 PM, Brad Ruderman 
>> <bruder...@radiumone.com> wrote:
>>
>>> Hi All-
>>> I was hoping to gather some insight into how the Hadoop (and/or Hive) job
>>> scheduler distributes mappers per user. I am running into an issue where
>>> Hadoop (and/or Hive) is evenly distributing mappers per user instead of
>>> per job.
>>>
>>> For example:
>>> -We have 1000 mapper capacity
>>> -10 jobs are running in total under User A
>>> -Each job is using 100 mappers
>>> -User B starts a job
>>> -User B's job is allocated 250 mappers
>>> -User A's jobs decrease to 75 mappers each instead of 100
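>>> (In other words, User B's single job ends up with 250 of the 1000 slots
>>> while User A's ten jobs share the remaining 750, about 75 each; if the
>>> capacity were split purely per job, each of the 11 jobs would get roughly
>>> 90 mappers.)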
>>>
>>> What could be causing this allocation to occur by user and job, instead
>>> of just by job? I reviewed the Hive/Hadoop documentation and wasn't able
>>> to find any references to distributing jobs by user.
>>>
>>> All of the jobs are being executed within the hive shell, or using the
>>> hive command.
>>>
>>> Thanks,
>>> Brad
>>>
