[
https://issues.apache.org/jira/browse/HADOOP-1984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12543790
]
Amar Kamat commented on HADOOP-1984:
------------------------------------
Submitting a patch that implements the strategy discussed above. It works as
follows:
1. On the first failed attempt, the copy is backed off by {{backoff_init}},
currently set to *4 sec*.
2. For subsequent failures the map output copy is backed off by
{{backoff_init}} * {{backoff_base}}^{{num_retries-1}}^, with {{backoff_base}}
currently set to *2*.
3. Backoff continues as long as the total time spent backing off on the copy
stays below {{max_backoff}}, which is user-configurable via
{{mapred.reduce.copy.backoff}}, i.e. {{backoff(1) + backoff(2) + .... +
backoff(max_retries) ~ max_backoff}}. The default {{max_backoff}} is
*300 sec (5 min)*.
4. Once a total of {{max_backoff}} time has been spent, the job tracker is
notified.
5. This cycle restarts for every new map output on the host, since
{{num_retries}} is tracked per map.
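The schedule above can be sketched as follows. This is an illustrative
standalone class, not code from the attached patch; the class and method names
are hypothetical, and only the constants (4 sec initial backoff, base 2,
300 sec default {{max_backoff}}) come from the comment.

```java
// Hypothetical sketch of the exponential backoff schedule described above.
// Names are illustrative and do not come from the HADOOP-1984 patch.
public class CopyBackoff {
    static final int BACKOFF_INIT = 4; // seconds; first backoff (step 1)
    static final int BACKOFF_BASE = 2; // exponential base (step 2)

    // Backoff (in seconds) before retry number numRetries (1-based):
    // backoff_init * backoff_base^(num_retries - 1)
    static long backoffFor(int numRetries) {
        return BACKOFF_INIT * (long) Math.pow(BACKOFF_BASE, numRetries - 1);
    }

    // Number of retries whose cumulative backoff fits within maxBackoff
    // (mapred.reduce.copy.backoff; default 300 sec = 5 min), per step 3.
    static int maxRetries(long maxBackoff) {
        long total = 0;
        int retries = 0;
        while (total + backoffFor(retries + 1) <= maxBackoff) {
            total += backoffFor(retries + 1);
            retries++;
        }
        return retries;
    }

    public static void main(String[] args) {
        // With the defaults: 4 + 8 + 16 + 32 + 64 + 128 = 252 <= 300,
        // and adding the next backoff (256) would exceed 300.
        System.out.println(maxRetries(300)); // prints 6
    }
}
```

With the default settings this gives roughly 6 retries (about 252 sec of
cumulative backoff) before the job tracker is notified (step 4); per step 5,
the counter resets for each new map on the host.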
> some reducer stuck at copy phase and progress extremely slowly
> --------------------------------------------------------------
>
> Key: HADOOP-1984
> URL: https://issues.apache.org/jira/browse/HADOOP-1984
> Project: Hadoop
> Issue Type: Bug
> Components: mapred
> Affects Versions: 0.16.0
> Reporter: Runping Qi
> Assignee: Amar Kamat
> Priority: Critical
> Fix For: 0.16.0
>
> Attachments: HADOOP-1984-simple.patch, HADOOP-1984.patch
>
>
> In many cases, some reducers get stuck in the copy phase, progressing
> extremely slowly, while the entire cluster seems to be doing nothing. This
> causes very bad long tails in otherwise well-tuned map/reduce jobs.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.