[
https://issues.apache.org/jira/browse/HADOOP-4246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12636823#action_12636823
]
Hudson commented on HADOOP-4246:
--------------------------------
Integrated in Hadoop-trunk #623 (See
[http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/623/])
HADOOP-4246. Ensure we have the correct lower bound on the number of retries
for fetching map-outputs; also fixed the case, for small jobs, where the
reducer automatically kills itself when too many unique map-outputs cannot be
fetched. Contributed by Amareshwari Sri Ramadasu.
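
For context, the fix described above amounts to clamping the retry budget so
it cannot fall to zero. Below is a minimal sketch of that idea in Java; the
class and constant names (FetchRetryPolicy, BACKOFF_INIT,
MIN_FETCH_RETRIES_PER_MAP) and the closestPowerOf2 helper are illustrative
assumptions, not the actual Hadoop 0.19 source.

    // Hedged sketch of a retry lower bound; names are illustrative,
    // not copied from ReduceTask.java.
    public class FetchRetryPolicy {
        // Assumed initial fetch backoff in milliseconds.
        private static final int BACKOFF_INIT = 4000;
        // Assumed floor so short jobs still accumulate copy failures.
        private static final int MIN_FETCH_RETRIES_PER_MAP = 2;

        // Rounds n up to the nearest power of two (illustrative helper).
        static int closestPowerOf2(int n) {
            int p = 1;
            while (p < n) {
                p <<= 1;
            }
            return p;
        }

        // Derives the per-map fetch retry budget from the longest observed
        // map runtime, clamped so it can never drop to zero.
        static int maxFetchRetriesPerMap(int maxMapRuntimeMs) {
            int derived = closestPowerOf2(maxMapRuntimeMs / BACKOFF_INIT + 1);
            return Math.max(MIN_FETCH_RETRIES_PER_MAP, derived);
        }
    }

For example, maxFetchRetriesPerMap(2000) returns 2 under this sketch, so even
very short maps keep a non-zero retry budget.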
> Reduce task copy errors may not kill it eventually
> --------------------------------------------------
>
> Key: HADOOP-4246
> URL: https://issues.apache.org/jira/browse/HADOOP-4246
> Project: Hadoop Core
> Issue Type: Bug
> Components: mapred
> Affects Versions: 0.19.0
> Reporter: Amareshwari Sriramadasu
> Assignee: Amareshwari Sriramadasu
> Priority: Blocker
> Fix For: 0.19.0
>
> Attachments: patch-4246.txt, patch-4246.txt, patch-4246.txt,
> patch-4246.txt
>
>
> maxFetchRetriesPerMap in the reduce task can sometimes be zero (when
> maxMapRunTime is less than 4 seconds or mapred.reduce.copy.backoff is less
> than 4). When that happens, copy errors are not counted against the reduce
> task, so it is never killed even after repeated fetch failures.
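
The failure mode described above is consistent with plain integer division:
if the retry budget is derived by dividing a runtime or backoff value by a
roughly 4-second base interval without a floor, any value under that base
truncates to zero. A hedged, self-contained demonstration (the constant name
BACKOFF_INIT and the exact formula are assumptions, not the actual 0.19 code):

    // Demonstrates how integer division can zero out the retry budget
    // when no lower bound is applied (formula and names are assumed).
    public class ZeroRetriesDemo {
        private static final int BACKOFF_INIT = 4000; // assumed base backoff, ms

        public static void main(String[] args) {
            int maxMapRuntimeMs = 2000;  // a map that finished in under 4 seconds
            int maxBackoffMs = 3 * 1000; // mapred.reduce.copy.backoff set below 4

            // Both terms truncate to zero under integer division.
            System.out.println(maxMapRuntimeMs / BACKOFF_INIT); // prints 0
            System.out.println(maxBackoffMs / BACKOFF_INIT);    // prints 0
            // With a zero budget, failed fetches are not retried and, per the
            // report above, copy errors never accumulate to the kill threshold.
        }
    }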
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.