[
https://issues.apache.org/jira/browse/HADOOP-4654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Amareshwari Sriramadasu updated HADOOP-4654:
--------------------------------------------
Attachment: patch-4654-1-0.18.txt
Attaching a patch that removes the unnecessary fs.exists call from the earlier patch.
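For context on why the guard was unnecessary: FileSystem.delete(Path, boolean) simply returns false when the path does not exist, so checking fs.exists first only adds an extra namenode round trip. A minimal illustrative sketch, not the patch's actual code (the class, method, and variable names here are made up):
{noformat}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DiscardTaskOutput {
  // taskTmpDir would be something like the task's
  // ${mapred.output.dir}/_temporary/_<taskid> directory.
  static void discard(FileSystem fs, Path taskTmpDir) throws IOException {
    // delete() returns false (rather than throwing) when the path is
    // absent, so no prior fs.exists() call is needed.
    fs.delete(taskTmpDir, true); // true = recursive
  }
}
{noformat}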
test-patch result on branch 0.18:
{noformat}
[exec] -1 overall.
[exec]
[exec] +1 @author. The patch does not contain any @author tags.
[exec]
[exec] -1 tests included. The patch doesn't appear to include any new or modified tests.
[exec] Please justify why no tests are needed for this patch.
[exec]
[exec] +1 javadoc. The javadoc tool did not generate any warning messages.
[exec]
[exec] +1 javac. The applied patch does not increase the total number of javac compiler warnings.
[exec]
[exec] +1 findbugs. The patch does not introduce any new Findbugs warnings.
{noformat}
All core and contrib tests passed on 0.18.
It is difficult to add a testcase for this; I manually checked that the
temporary output files are deleted when tasks are failed/killed. A rough
sketch of such a check follows.
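For anyone who wants to repeat the manual check, a sketch is below; it assumes the 0.18 layout where a task writes to ${mapred.output.dir}/_temporary/_<taskid>, and the class name and argument handling are invented for illustration:
{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// After failing/killing task attempts, list the job output's _temporary
// directory: any _<taskid> subdirectories left behind indicate that
// cleanup did not happen.
public class CheckTempCleanup {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path tempDir = new Path(args[0], "_temporary"); // args[0] = job output dir
    if (!fs.exists(tempDir)) {
      System.out.println("no temporary directory left behind");
      return;
    }
    for (FileStatus s : fs.listStatus(tempDir)) {
      System.out.println("leftover: " + s.getPath());
    }
  }
}
{noformat}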
> remove temporary output directory of failed tasks
> -------------------------------------------------
>
> Key: HADOOP-4654
> URL: https://issues.apache.org/jira/browse/HADOOP-4654
> Project: Hadoop Core
> Issue Type: Bug
> Components: mapred
> Affects Versions: 0.17.2, 0.18.1
> Reporter: Christian Kunz
> Assignee: Amareshwari Sriramadasu
> Fix For: 0.20.0
>
> Attachments: patch-4654-0.18.txt, patch-4654-1-0.18.txt
>
>
> When dfs is getting full (80+% of reserved space), the rate of write failures
> increases, such that more map-reduce tasks can fail. By not cleaning up the
> temporary output directories of tasks, the situation worsens over the lifetime
> of a job, increasing the probability of the whole job failing.