[ https://issues.apache.org/jira/browse/MAPREDUCE-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13569940#comment-13569940 ]

Jeff Schott commented on MAPREDUCE-4788:
----------------------------------------

We are seeing exactly the same thing. After a lot of experimenting, it appears to 
happen consistently when we set a very large number of reducers for a given 
job/data set. 

After increasing the number of slots with these parameters:
yarn.nodemanager.resource.memory-mb
mapreduce.tasktracker.map.tasks.maximum
mapreduce.tasktracker.reduce.tasks.maximum
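
For reference, the first is a cluster-side setting in yarn-site.xml and the other 
two live in mapred-site.xml; the values below are purely illustrative, not 
recommendations:

    <!-- yarn-site.xml: total memory the NodeManager may hand out to containers
         (value illustrative) -->
    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>8192</value>
    </property>

    <!-- mapred-site.xml: per-node map/reduce slot caps (values illustrative) -->
    <property>
      <name>mapreduce.tasktracker.map.tasks.maximum</name>
      <value>8</value>
    </property>
    <property>
      <name>mapreduce.tasktracker.reduce.tasks.maximum</name>
      <value>8</value>
    </property>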

We were then able to run wordcount with over 100 reducers. The failure now occurs 
for wordcount only when the reducer count exceeds 500; however, it also occurs 
with other, larger jobs at only 30 reducers.
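
For anyone trying to reproduce this, the reducer count is easy to force from a 
driver using the standard MapReduce API. A minimal sketch (the class name is 
made up, the mapper/reducer wiring is elided, and 500 is just the threshold we 
observed):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    // Hypothetical driver; assumes the stock wordcount mapper/reducer
    // and input/output paths are wired up as in the bundled example.
    public class ManyReducersRepro {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "wordcount-many-reducers");
        job.setJarByClass(ManyReducersRepro.class);
        // ... mapper, reducer, input and output setup elided ...
        job.setNumReduceTasks(500); // roughly where the spurious FAILED state starts for us
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }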

Any help is much appreciated. 
                
> Jobs are marked as FAILED even if there are no failed tasks in them
> -------------------------------------------------------------------
>
>                 Key: MAPREDUCE-4788
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-4788
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mrv2
>    Affects Versions: 2.0.1-alpha, 2.0.2-alpha
>            Reporter: Devaraj K
>            Assignee: Devaraj K
>
> Sometimes jobs are marked as FAILED and some of their tasks are marked as KILLED. 
> In MRAppMaster, when the JobFinishEvent is triggered, it waits for 5000 millis. 
> If any task's final state is still unknown by that time, those tasks are marked 
> as KILLED and the job state is marked as FAILED.
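
For context, the behavior described above amounts to a bounded wait. A rough 
sketch of that pattern (illustrative helper names, not the actual MRAppMaster 
code):

    // Illustrative sketch of the described shutdown window -- all helpers hypothetical.
    static void awaitTaskFinalStates() throws InterruptedException {
      long deadline = System.currentTimeMillis() + 5000; // the 5000 ms wait described above
      while (tasksWithUnknownFinalState() > 0 && System.currentTimeMillis() < deadline) {
        Thread.sleep(100); // poll until every task reports a final state or time runs out
      }
      if (tasksWithUnknownFinalState() > 0) {
        markRemainingTasksKilled(); // hypothetical helper: stragglers become KILLED
        markJobFailed();            // hypothetical helper: job state becomes FAILED
      }
    }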
