From: Harsh J [ha...@cloudera.com]
Sent: Tuesday, August 14, 2012 3:12 PM
To: user@hadoop.apache.org
Subject: Re: Pending reducers
I guess this is the regular behavior of the default FIFO task
scheduler. It takes the reducer load into account, and that may be why
it refused to schedule the rest right away. You may have better
luck using either the Fair or Capacity scheduler.
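As a rough sketch of what that switch looks like on an MR1 / Hadoop 1.x
cluster (property and class names below are what I'd expect for 1.x and
should be checked against your release; the Fair Scheduler jar also has to
be on the JobTracker classpath):

  <!-- mapred-site.xml on the JobTracker: pick one non-FIFO scheduler -->
  <property>
    <name>mapred.jobtracker.taskScheduler</name>
    <value>org.apache.hadoop.mapred.FairScheduler</value>
    <!-- or: org.apache.hadoop.mapred.CapacityTaskScheduler -->
  </property>

The JobTracker needs a restart for the new scheduler to take effect.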
On Tue, Aug 14, 2012 at 5:56 PM, Evert Lammerts wrote:
> What are the memory/CPU stats on the machines? Are they exhausted?
No, they're not. The nodes themselves have more than enough memory available,
and the load on the cores sits between 0.8 and 0.9.
Is current load, in terms other than available slots, even taken into account by
the default scheduler?
What are the memory/CPU stats on the machines? Are they exhausted?
On Tue, Aug 14, 2012 at 5:20 PM, Evert Lammerts wrote:
>> reducers of multiple jobs do run concurrently as long as they have the
>> resources available.
>
Yep, and that's what's not happening in my situation. 528 reduce slots, 400
taken by one job, 26 of another job remain in pending state. What could explain
this behavior?
Evert
reducers of multiple jobs do run concurrently as long as they have
the resources available.
If you want to keep one user from taking over the cluster, you can
create different job queues and assign a quota to each queue. You also
have the flexibility of allocating a maximum quota per user within a queue
as well.
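A minimal sketch of that setup with the Capacity Scheduler on Hadoop 1.x
(the queue name "research" is made up for illustration, and the property
names should be verified against your release):

  <!-- mapred-site.xml: declare the queues -->
  <property>
    <name>mapred.queue.names</name>
    <value>default,research</value>
  </property>

  <!-- capacity-scheduler.xml: give each queue a guaranteed share of slots -->
  <property>
    <name>mapred.capacity-scheduler.queue.research.capacity</name>
    <!-- percent of the cluster's slots guaranteed to this queue -->
    <value>30</value>
  </property>
  <property>
    <name>mapred.capacity-scheduler.queue.research.minimum-user-limit-percent</name>
    <!-- caps any single user's share of the queue while other users' jobs wait -->
    <value>25</value>
  </property>

Jobs then pick a queue at submission time, e.g. with
-Dmapred.job.queue.name=research (this works for jobs submitted through
ToolRunner / GenericOptionsParser).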