[
https://issues.apache.org/jira/browse/HADOOP-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12657723#action_12657723
]
Johan Oskarsson commented on HADOOP-4906:
-----------------------------------------
I have seen this happen too. In one case the input path entry in our JobConf
used up quite a bit of RAM, since we were trying to merge a lot of small
files into bigger ones. The JobConf for each task was kept in RAM until the job
completed, but since we were running a lot of tasks the TaskTracker ran out of
memory before the job could finish.
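
For what it's worth, here is a minimal sketch of the pattern that caused it for us, using the old org.apache.hadoop.mapred API (the paths and file count are made up for illustration). Every path added via addInputPath gets appended to a single configuration string, so each JobConf copy the TaskTracker holds onto carries the full multi-megabyte value:

{code:java}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobConf;

public class ManyInputPaths {
  public static void main(String[] args) {
    JobConf conf = new JobConf();

    // Illustrative scenario: merging tens of thousands of small files,
    // each registered as its own input path on the job.
    for (int i = 0; i < 50000; i++) {
      FileInputFormat.addInputPath(conf, new Path("/data/small-files/part-" + i));
    }

    // All paths are concatenated into one property value, so every
    // per-task JobConf retained by the TaskTracker until the job
    // finishes carries this whole string.
    String inputDirs = conf.get("mapred.input.dir");
    System.out.println("mapred.input.dir length: " + inputDirs.length() + " chars");
  }
}
{code}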
> TaskTracker running out of memory after running several tasks
> -------------------------------------------------------------
>
> Key: HADOOP-4906
> URL: https://issues.apache.org/jira/browse/HADOOP-4906
> Project: Hadoop Core
> Issue Type: Bug
> Components: mapred
> Affects Versions: 0.19.0
> Reporter: Arun C Murthy
> Priority: Blocker
> Fix For: 0.20.0
>
>
> Looks like the TaskTracker isn't cleaning up correctly after completed/failed
> tasks; I suspect that the JobConfs aren't being deallocated. Eventually the
> TaskTracker runs out of memory after running several tasks.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.