Manish,
The pre-emption code in the capacity scheduler was found to require a good
relook, and due to the inherent complexity of the problem it is likely to
have issues of the type you have noticed. We have decided to relook at
the pre-emption code from scratch and to this effect removed it from the
I am seeing the same problem I posted to the list on the 11th and have not
received any reply.
Billy
----- Original Message -----
From: "Manish Katyal"
Newsgroups: gmane.comp.jakarta.lucene.hadoop.user
To:
Sent: Wednesday, May 13, 2009 11:48 AM
Subject: Regarding Capacity Scheduler
I'm experimenting with the Capacity Scheduler (0.19.0) in a multi-cluster
environment.
I noticed that, unlike the mappers, the reducers are not pre-empted. Is that
expected?
I have two queues (high and low) that are each running big jobs (70+ maps
each). The scheduler splits the mappers as per the queues' guaranteed
capacities.
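In case it matters, my queue setup looks roughly like the following in
capacity-scheduler.xml (the percentages here are illustrative rather than my
exact numbers):

  <!-- Percentage of the cluster's slots guaranteed to the "high" queue -->
  <property>
    <name>mapred.capacity-scheduler.queue.high.guaranteed-capacity</name>
    <value>70</value>
  </property>
  <!-- Percentage of the cluster's slots guaranteed to the "low" queue -->
  <property>
    <name>mapred.capacity-scheduler.queue.low.guaranteed-capacity</name>
    <value>30</value>
  </property>

with the queues declared and the scheduler enabled in hadoop-site.xml:

  <property>
    <name>mapred.queue.names</name>
    <value>high,low</value>
  </property>
  <property>
    <name>mapred.jobtracker.taskScheduler</name>
    <value>org.apache.hadoop.mapred.CapacityTaskScheduler</value>
  </property>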
Does the Capacity Scheduler not recover reduce tasks based on the setting
mapred.capacity-scheduler.queue.{name}.reclaim-time-limit?
In my test it only recovers map tasks when it cannot get its full guaranteed
capacity.
Billy
There is no patch for the Capacity Scheduler for 0.18.x.
> -Original Message-
> From: Bill Au [mailto:bill.w...@gmail.com]
> Sent: Saturday, February 14, 2009 1:00 AM
> To: core-user@hadoop.apache.org
> Subject: capacity scheduler for 0.18.x?
>
I see that there is a patch for the fair scheduler for 0.18.1 in
HADOOP-3746. Does anyone know if there is a similar patch for the capacity
scheduler? I did a search on JIRA but didn't find anything.
Bill