[ https://issues.apache.org/jira/browse/HADOOP-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12659713#action_12659713 ]

Matei Zaharia commented on HADOOP-4943:
---------------------------------------

Looks good to me. This issue actually affects other schedulers too, because 
that "max load" code was taken from the default scheduler's implementation 
(unless it has been fixed there since). Do you think it would be possible to 
create a unit test for this using the fake tasktrackers in the existing test 
class?
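
For illustration, a rough sketch of what such a test might check (plain 
JUnit 3; the real test class's fake-tasktracker plumbing is more involved, 
and the class and helper names here are assumptions, not the actual test 
API):

    // Hypothetical sketch -- names are illustrative, not from the
    // existing TestFairScheduler fakes.
    import junit.framework.TestCase;

    public class TestHeterogeneousLoadCap extends TestCase {

      // Same percentage-based cap the issue description below proposes.
      private static boolean canAssign(int running, int maxSlots,
                                       int clusterRunnable,
                                       int clusterSlots) {
        int cap = (int) Math.ceil(
            (double) clusterRunnable / clusterSlots * maxSlots);
        return running < cap;
      }

      public void testBigTrackerFillsPastClusterAverage() {
        // Heterogeneous cluster: a 2-slot tracker (full) and an 8-slot
        // tracker with 4 tasks; 6 tasks runnable on 10 slots (60% load).
        int clusterRunnable = 6, clusterSlots = 10;
        // An absolute-count cap (average 3 tasks per tracker) would
        // refuse the big tracker a 5th task; the percentage cap should
        // let it keep filling its slots.
        assertTrue(canAssign(4, 8, clusterRunnable, clusterSlots));
        // The small tracker is genuinely full and stays capped.
        assertFalse(canAssign(2, 2, clusterRunnable, clusterSlots));
      }
    }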

> fair share scheduler does not utilize all slots if the task trackers are 
> configured heterogeneously
> ---------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-4943
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4943
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/fair-share
>    Affects Versions: 0.19.0
>            Reporter: Zheng Shao
>            Assignee: Zheng Shao
>             Fix For: 0.19.1
>
>         Attachments: HADOOP-4943-1.patch
>
>
> There is some code in the fair-share scheduler that tries to keep the load 
> the same across the whole cluster.
> That piece of code breaks if the task trackers are configured differently: 
> we stop assigning more tasks to task trackers that have more tasks than the 
> cluster average, but we may still want to assign to them because other task 
> trackers may have fewer slots.
> We should change the code to maintain a cluster-wide slot usage percentage 
> (instead of an absolute number of slots in use) to make sure the load is 
> evenly distributed.
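
As a minimal sketch of the percentage-based check the description proposes 
(the names and structure are assumptions for illustration, not taken from 
HADOOP-4943-1.patch):

    // Illustrative only -- not the code in HADOOP-4943-1.patch.
    // Cap each tracker at the cluster-wide *fraction* of slots needed,
    // instead of the cluster-average *count* of running tasks, so
    // trackers with more slots keep accepting work.
    static boolean canAssignTask(int trackerRunning, int trackerMaxSlots,
                                 int clusterRunnable, int clusterMaxSlots) {
      // clusterRunnable = running + waiting tasks across the cluster.
      double load = (double) clusterRunnable / clusterMaxSlots;
      // Round up so a lone waiting task can always be placed somewhere.
      int cap = (int) Math.ceil(load * trackerMaxSlots);
      return trackerRunning < cap;
    }

With this check, a tracker whose running-task count is above the cluster 
average can still take tasks as long as its own slot-usage fraction stays at 
or below the cluster-wide fraction.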

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.