[ https://issues.apache.org/jira/browse/HADOOP-2141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12542896 ]

Runping Qi commented on HADOOP-2141:
------------------------------------


I don't think this Jira is so urgent that we need a quick patch for it.
I'd prefer to put the right framework in place to address the speculative
execution policy and, in the long term, the task scheduling policy.

It is important for the job tracker to collect accurate execution stats for the
mappers/reducers and to use those stats in task scheduling. I don't think that
is complicated.

In the long term, it would be nice if the job tracker could obtain the machine
specs of the task trackers (# of CPUs, memory, disks, network info, etc.) and
the current load on those machines, and use these data in scheduling.


> speculative execution start up condition based on completion time
> -----------------------------------------------------------------
>
>                 Key: HADOOP-2141
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2141
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: mapred
>    Affects Versions: 0.15.0
>            Reporter: Koji Noguchi
>            Assignee: Arun C Murthy
>             Fix For: 0.16.0
>
>
> We had one job with speculative execution hang.
> 4 reduce tasks were stuck with 95% completion because of a bad disk. 
> Devaraj pointed out 
> bq. One of the conditions that must be met for launching a speculative 
> instance of a task is that it must be at least 20% behind the average 
> progress, and this is not true here.
> It would be nice if speculative execution also starts up when tasks stop 
> making progress.
> Devaraj suggested 
> bq. Maybe, we should introduce a condition for average completion time for 
> tasks in the speculative execution check. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
