Manish,

The pre-emption code in the capacity scheduler was found to need a thorough relook, and given the inherent complexity of the problem it is likely to have issues of the kind you have noticed. We have decided to rework the pre-emption code from scratch, and to that effect have removed it from the 0.20 branch so we can start afresh.

Thanks
Hemanth

Manish Katyal wrote:
I'm experimenting with the Capacity scheduler (0.19.0) in a multi-cluster
environment.
I noticed that, unlike the mappers, the reducers are not being pre-empted.

I have two queues (high and low), each running a big job (70+ maps
each).  The scheduler splits the map slots as per the queues'
guaranteed capacity (5/8ths for high and the rest for low). However,
the reduce tasks are not interleaved -- the reduces of the job in the high
queue are blocked waiting for the reduces of the job in the low queue to
complete.

Is this a bug or by design?
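
For reference, the two queues below were configured via capacity-scheduler.xml; a sketch of the relevant entries (property names as I recall them from the 0.19 capacity scheduler docs, so please double-check against your version):

```xml
<!-- Sketch only: property names per the 0.19 capacity scheduler docs. -->
<property>
  <name>mapred.capacity-scheduler.queue.low.guaranteed-capacity</name>
  <value>37.5</value>
</property>
<property>
  <name>mapred.capacity-scheduler.queue.high.guaranteed-capacity</name>
  <value>62.5</value>
</property>
<property>
  <name>mapred.capacity-scheduler.queue.low.reclaim-time-limit</name>
  <value>300</value>
</property>
```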

*Low queue:*
Guaranteed Capacity (%) : 37.5
Guaranteed Capacity Maps : 3
Guaranteed Capacity Reduces : *3*
User Limit : 100
Reclaim Time limit : 300
Number of Running Maps : 3
Number of Running Reduces : *7*
Number of Waiting Maps : 131
Number of Waiting Reduces : 0
Priority Supported : NO

*High queue:*
Guaranteed Capacity (%) : 62.5
Guaranteed Capacity Maps : 5
Guaranteed Capacity Reduces : 5
User Limit : 100
Reclaim Time limit : 300
Number of Running Maps : 4
Number of Running Reduces : *0*
Number of Waiting Maps : 68
Number of Waiting Reduces : *7*
Priority Supported : NO
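
The guaranteed map/reduce slot counts in the two listings above are consistent with the configured percentages applied to an 8-slot cluster total; a minimal sketch of that arithmetic (my own illustration, not the scheduler's source; the 8-slot total is assumed from the numbers shown):

```java
// Sketch: how a queue's guaranteed slot count follows from its
// guaranteed-capacity percentage and the cluster's slot total.
public class GuaranteedCapacity {
    // Floor of (percent of clusterSlots), matching the listings above.
    static int guaranteedSlots(double capacityPercent, int clusterSlots) {
        return (int) Math.floor(capacityPercent * clusterSlots / 100.0);
    }

    public static void main(String[] args) {
        int clusterReduceSlots = 8; // assumed total for this example
        // low queue: 37.5% of 8 -> 3; high queue: 62.5% of 8 -> 5
        System.out.println("low  queue: " + guaranteedSlots(37.5, clusterReduceSlots));
        System.out.println("high queue: " + guaranteedSlots(62.5, clusterReduceSlots));
    }
}
```

Note that the low queue is running 7 reduces against a guarantee of 3, which is exactly the over-capacity situation pre-emption would normally reclaim.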

