[
http://issues.apache.org/jira/browse/HADOOP-639?page=comments#action_12455871 ]
Mahadev konar commented on HADOOP-639:
--------------------------------------
Sorry to have miscommunicated again! :) What I meant was: the patch should
still rely on KillJobAction, but rather than cleaning up the job directory in
each task's cleanup, we could just have a cleanup method on the RunningJob
that cleans up the tasks the RunningJob holds and then cleans up the job
directory (i.e., the job-directory cleanup code would move from task cleanup
to job cleanup).
So the code would look something like:

  if (KillJobAction) {
    for each task in runningJob.tasks {
      task.cleanup()
    }
    deleteJobDir()
  }
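To make that concrete, here is a rough, self-contained sketch of the idea (the
Task, RunningJob, and deleteJobDir names below are placeholders for
illustration only, not the actual TaskTracker classes):

  import java.io.File;
  import java.io.IOException;
  import java.util.ArrayList;
  import java.util.List;

  class Task {
    private final String id;
    Task(String id) { this.id = id; }

    // Release per-task state (working directory, open files, etc.).
    void cleanup() {
      System.out.println("cleaning up task " + id);
    }
  }

  class RunningJob {
    private final List<Task> tasks = new ArrayList<Task>();
    private final File jobDir;

    RunningJob(File jobDir) { this.jobDir = jobDir; }

    void addTask(Task t) { tasks.add(t); }

    // Job-level cleanup: clean each task first, then delete the job
    // directory exactly once, instead of deleting it from every task.
    void cleanup() throws IOException {
      for (Task task : tasks) {
        task.cleanup();
      }
      tasks.clear();
      deleteJobDir();
    }

    private void deleteJobDir() throws IOException {
      // Placeholder for a recursive delete of the job directory.
      if (jobDir.exists() && !jobDir.delete()) {
        throw new IOException("could not delete " + jobDir);
      }
    }
  }

On the TaskTracker side, handling a KillJobAction would then reduce to looking
up the RunningJob and calling runningJob.cleanup().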
Also, I have just been commenting since I took a look at the patch :). I can
file separate bugs for these so that you do not have to incorporate the
changes into your patch, which would keep your patch from getting huge.
> task cleanup messages can get lost, causing task trackers to keep tasks
> forever
> -------------------------------------------------------------------------------
>
> Key: HADOOP-639
> URL: http://issues.apache.org/jira/browse/HADOOP-639
> Project: Hadoop
> Issue Type: Bug
> Components: mapred
> Affects Versions: 0.7.2
> Reporter: Owen O'Malley
> Assigned To: Arun C Murthy
> Attachments: HADOOP-639_1.patch, HADOOP-639_2_20061130.patch,
> HADOOP-639_3_20061201.patch, HADOOP-639_4_20061205.patch
>
>
> If the pollForTaskWithClosedJob call from a job tracker to a task tracker
> times out when a job completes, the tasks are never cleaned up. This can
> cause the mini m/r cluster to hang on shutdown, but also is a resource leak.