[ https://issues.apache.org/jira/browse/HADOOP-1411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12499217 ]
Owen O'Malley commented on HADOOP-1411:
---------------------------------------
I think the underlying source of the problem is that the Map parameter is too
big. It would be better if the handlers were registered more explicitly, as in:
{code}
void addHandler(Class<? extends Exception> exceptionClass, RetryHandler handler);
{code}
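For concreteness, here is a minimal sketch of what that per-exception registration could look like. The names RetryHandlerRegistry and the shouldRetry signature are hypothetical and not part of this patch or the existing retry code; only the addHandler idea comes from the comment above.
{code}
import java.util.LinkedHashMap;
import java.util.Map;

public class RetryHandlerRegistry {

  /** Hypothetical callback deciding whether a failed call should be retried. */
  public interface RetryHandler {
    boolean shouldRetry(Exception e, int retryCount);
  }

  // Keep registration order so more specific exception types can be
  // registered (and therefore matched) before their superclasses.
  private final Map<Class<? extends Exception>, RetryHandler> handlers =
      new LinkedHashMap<Class<? extends Exception>, RetryHandler>();

  /** Register a handler for one exception type instead of passing a big Map. */
  public void addHandler(Class<? extends Exception> exceptionClass,
                         RetryHandler handler) {
    handlers.put(exceptionClass, handler);
  }

  /** Ask the first registered handler whose type matches the thrown exception. */
  public boolean shouldRetry(Exception e, int retryCount) {
    for (Map.Entry<Class<? extends Exception>, RetryHandler> entry
        : handlers.entrySet()) {
      if (entry.getKey().isInstance(e)) {
        return entry.getValue().shouldRetry(e, retryCount);
      }
    }
    return false; // nothing registered for this exception type: do not retry
  }
}
{code}
Registering one handler per exception type keeps call sites readable and lets additional exception types be handled later without rebuilding a large Map literal.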
But I think this patch should go in since it fixes the problem and we can
address any further concerns later.
> AlreadyBeingCreatedException from task retries
> ----------------------------------------------
>
> Key: HADOOP-1411
> URL: https://issues.apache.org/jira/browse/HADOOP-1411
> Project: Hadoop
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.13.0
> Reporter: Nigel Daley
> Assigned To: Hairong Kuang
> Priority: Blocker
> Fix For: 0.13.0
>
> Attachments: createRetry-tw.patch, createRetry-tw2.patch,
> createRetry.patch, createRetry1.patch
>
>
> HADOOP-1407 covers two bugs: a mapred bug that will be fixed as part of
> 1407, and a DFSClient bug that will be fixed here.
> Note that the test run in 1407 was without speculative execution.