[ https://issues.apache.org/jira/browse/MAPREDUCE-3343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13149761#comment-13149761 ]

Eli Collins commented on MAPREDUCE-3343:
----------------------------------------

How does the test cover that the job is removed from the archives? It looks 
like it would still pass even if we removed the call to 
removeTaskDistributedCacheManager from both the TT and the test.
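
For reference, a minimal sketch (not the actual test) of the kind of assertion 
being asked for: the test should fail if the TaskTracker stops calling 
removeTaskDistributedCacheManager. The CacheManagerView interface and the 
isTrackingJob accessor below are hypothetical stand-ins for however the test 
inspects the jobArchives map; they are not part of the Hadoop API.

{code:java}
// Hypothetical sketch only: CacheManagerView / isTrackingJob are stand-ins for
// whatever the real test uses to inspect the TrackerDistributedCacheManager's
// jobArchives map.
interface CacheManagerView {
  boolean isTrackingJob(String jobId);
}

public class AssertJobPurged {
  public static void assertJobPurged(CacheManagerView manager, String jobId) {
    // Without an assertion like this, the test still passes when the
    // TaskTracker never calls removeTaskDistributedCacheManager, which is the
    // coverage gap raised above.
    if (manager.isTrackingJob(jobId)) {
      throw new AssertionError("job " + jobId
          + " is still referenced by the distributed cache manager");
    }
  }
}
{code}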
                
> TaskTracker Out of Memory because of distributed cache
> ------------------------------------------------------
>
>                 Key: MAPREDUCE-3343
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-3343
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: mrv1
>    Affects Versions: 0.20.205.0
>            Reporter: Ahmed Radwan
>            Assignee: zhaoyunjiong
>              Labels: mapreduce, patch
>         Attachments: MAPREDUCE-3343_rev2.patch, 
> mapreduce-3343-release-0.20.205.0.patch
>
>
> This Out of Memory error happens when you run a large number of jobs (using 
> the distributed cache) on a TaskTracker. 
> The basic issue seems to be with the distributedCacheManager (an instance of 
> TrackerDistributedCacheManager in TaskTracker.java): it gets created during 
> TaskTracker.initialize(), and it keeps a reference to a 
> TaskDistributedCacheManager for every submitted job via the jobArchives map, 
> as well as references to CacheStatus objects via the cachedArchives map. I am 
> not seeing these cleaned up between jobs, so this can cause out-of-memory 
> problems after a really large number of jobs have been submitted. We have 
> seen this issue in a number of cases.

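To make the reference pattern concrete, here is a simplified, hypothetical 
model of the leak described above (this is not the actual Hadoop source): a 
tracker-scoped manager that adds a per-job entry on submission and only stays 
bounded if something purges the entry when the job completes, which is what 
the removeTaskDistributedCacheManager call discussed above is meant to do.

{code:java}
// Simplified, hypothetical model of the leak; NOT the Hadoop source. The field
// names mirror those mentioned in the description (jobArchives, cachedArchives).
import java.util.HashMap;
import java.util.Map;

class TrackerSideCacheModel {
  // One entry per submitted job. Because the manager lives for the lifetime of
  // the TaskTracker process (it is created in TaskTracker.initialize()), this
  // map grows without bound unless entries are removed when jobs finish.
  private final Map<String, Object> jobArchives =
      new HashMap<String, Object>();

  // Per-cached-file status objects (CacheStatus in the real code); these also
  // need to be released once no job references them.
  private final Map<String, Object> cachedArchives =
      new HashMap<String, Object>();

  void onJobSubmitted(String jobId, Object taskDistributedCacheManager) {
    jobArchives.put(jobId, taskDistributedCacheManager);
  }

  // The cleanup step whose absence causes the OOM: analogous to calling
  // removeTaskDistributedCacheManager when the TaskTracker is done with a job.
  void onJobCompleted(String jobId) {
    jobArchives.remove(jobId);
  }
}
{code}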