[ https://issues.apache.org/jira/browse/HADOOP-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12658862#action_12658862 ]

Arun C Murthy commented on HADOOP-4906:
---------------------------------------

bq. These are not really required to hang around till the job completes. As 
soon as the task finishes, these can be cleared.

Unfortunately that isn't true. We need the TaskInProgress object to stay in 
TaskTracker.tasks at least so that we can fail the TaskAttempt of maps whose 
map-outputs are lost... see TaskTracker.mapOutputLost. I guess for now we just 
need to set defaultJobConf/localJobConf to null in TaskInProgress.cleanup? Sigh!
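
A minimal sketch of that stop-gap (the method shape here is illustrative, not 
the exact 0.19 signature; the two null assignments are the point):

{code}
// Inside TaskTracker.TaskInProgress -- sketch of the stop-gap above.
synchronized void cleanup() throws IOException {
  // ... existing cleanup of the task's local state ...

  // The TIP itself must stay in TaskTracker.tasks (mapOutputLost
  // needs it), but the per-task confs are large and no longer
  // needed, so drop the references and let the GC reclaim them.
  defaultJobConf = null;
  localJobConf = null;
}
{code}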

----

Unrelated rant: My head still hurts from having to track this down in the mess 
that the TaskTracker has evolved into... I guess we need to seriously start 
thinking about cleaning up the TaskTracker: moving TaskTracker.TaskInProgress 
(a very bad name, too!) into its own file, simplifying the interaction between 
the child TaskAttempt and the TaskTracker, and so on. Thoughts?

> TaskTracker running out of memory after running several tasks
> -------------------------------------------------------------
>
>                 Key: HADOOP-4906
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4906
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: mapred
>    Affects Versions: 0.19.0
>            Reporter: Arun C Murthy
>            Assignee: Sharad Agarwal
>            Priority: Blocker
>             Fix For: 0.20.0
>
>         Attachments: 4906_v1.patch
>
>
> Looks like the TaskTracker isn't cleaning up correctly after completed/failed 
> tasks; I suspect that the JobConfs aren't being deallocated. Eventually the 
> TaskTracker runs out of memory after running several tasks.
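
For illustration, the suspected leak pattern is roughly the following 
(hypothetical, simplified sketch; a stub stands in for TaskInProgress, and the 
real 0.19 field and method names may differ):

{code}
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the suspected leak, not the real TaskTracker.
class LeakSketch {
  static class TaskInProgress {
    Object defaultJobConf = new byte[1 << 20]; // stand-in for a large JobConf
    Object localJobConf   = new byte[1 << 20];
    void cleanup() {
      // Before the fix: the JobConf references are NOT dropped here.
    }
  }

  // Lives for the TaskTracker's lifetime, keyed by task attempt id.
  final Map<String, TaskInProgress> tasks =
      new HashMap<String, TaskInProgress>();

  void taskFinished(String attemptId) {
    tasks.get(attemptId).cleanup();
    // The TIP can't be removed from 'tasks' yet (mapOutputLost still
    // needs it), so both confs stay strongly reachable: the leak.
  }
}
{code}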

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
