[ 
https://issues.apache.org/jira/browse/HADOOP-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Shao updated HADOOP-4943:
-------------------------------

    Affects Version/s:     (was: 0.19.0)
               Status: Patch Available  (was: Open)

Tested on an 8-node cluster with (4 map, 2 reduce) slots on half of the 
cluster, and (2 map, 4 reduce) slots on the other half.

Tested with a streaming job of 24 zero-length inputs + 24 reducers 
(map/reduce='sleep 60'). We are able to schedule all of the maps at the same 
time (and likewise all of the reducers).

Tested with a setting of 12: we are able to spread the load uniformly.

Tested with a setting of 48: we correctly throttle at 24 running maps, the 
cluster's capacity, instead of going beyond the limit (likewise for reducers).

The test was done on hadoop 0.17.


> fair share scheduler does not utilize all slots if the task trackers are 
> configured heterogeneously
> ---------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4943
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4943
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/fair-share
>            Reporter: Zheng Shao
>            Assignee: Zheng Shao
>         Attachments: HADOOP-4943-1.patch
>
>
> There is some code in the fairshare scheduler that tries to make the load 
> across the whole cluster the same.
> That piece of code breaks if the task trackers are configured 
> differently: we stop assigning tasks to task trackers that are running 
> more tasks than the cluster average, but we may still want to assign to 
> them, because other task trackers may have fewer slots.
> We should change the code to maintain a cluster-wide slot usage percentage 
> (instead of an absolute count of slots in use) to make sure the load is 
> evenly distributed.
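The percentage-based fix described above can be sketched as follows. This is an illustrative sketch, not the actual HADOOP-4943 patch; the method names and the two-tracker scenario are hypothetical. It contrasts the broken absolute-count check with a check that compares each tracker's slot usage fraction against the cluster-wide usage fraction.

```java
// Sketch of the scheduling check (hypothetical names, not the real patch).
public class SlotLoadBalance {

    // Absolute-count check that breaks on heterogeneous trackers: a tracker
    // running at or above the cluster average gets no more tasks, even if it
    // still has free slots while smaller trackers are already full.
    static boolean canAssignByCount(int trackerRunning, int clusterRunning,
                                    int numTrackers) {
        double clusterAvg = (double) clusterRunning / numTrackers;
        return trackerRunning < clusterAvg;
    }

    // Percentage-based check: compare the tracker's usage *fraction* with the
    // cluster-wide usage fraction, so trackers with more slots keep receiving
    // tasks until their relative load catches up.
    static boolean canAssignByPercent(int trackerRunning, int trackerSlots,
                                      int clusterRunning, int clusterSlots) {
        double trackerLoad = (double) trackerRunning / trackerSlots;
        double clusterLoad = (double) clusterRunning / clusterSlots;
        return trackerRunning < trackerSlots && trackerLoad <= clusterLoad;
    }

    public static void main(String[] args) {
        // Heterogeneous pair: tracker A has 4 map slots, tracker B has 2.
        // B is full (2 running); A runs 2 of 4. Cluster: 4 running / 6 slots.
        // Absolute check stalls A at the per-tracker average of 2...
        System.out.println(canAssignByCount(2, 4, 2));          // false
        // ...but by percentage A (0.50) is below the cluster (0.67), so it
        // should get another task, and full tracker B should not.
        System.out.println(canAssignByPercent(2, 4, 4, 6));     // true
        System.out.println(canAssignByPercent(2, 2, 4, 6));     // false
    }
}
```

With the absolute-count rule, the two free slots on tracker A are never used; the fractional rule fills them, which matches the behavior verified in the tests above.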

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
