[ https://issues.apache.org/jira/browse/MAPREDUCE-5508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13816210#comment-13816210 ]
Xi Fang commented on MAPREDUCE-5508:
------------------------------------

One way to confirm this is to set mapred.jobtracker.completeuserjobs.maximum = 0 and run some jobs. After all the jobs are done, wait a while and check the number of FileSystem objects in FileSystem#Cache.

> JobTracker memory leak caused by unreleased FileSystem objects in
> JobInProgress#cleanupJob
> ------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-5508
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5508
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: jobtracker
>    Affects Versions: 1-win, 1.2.1
>            Reporter: Xi Fang
>            Assignee: Xi Fang
>            Priority: Critical
>             Fix For: 1-win, 1.3.0
>
>         Attachments: CleanupQueue.java, JobInProgress.java, MAPREDUCE-5508.1.patch, MAPREDUCE-5508.2.patch, MAPREDUCE-5508.3.patch, MAPREDUCE-5508.patch
>
>
> MAPREDUCE-5351 fixed a memory leak, but introduced another FileSystem object (see "tempDirFs") that is not properly released.
> {code}
> // JobInProgress#cleanupJob()
> void cleanupJob() {
>   ...
>   tempDirFs = jobTempDirPath.getFileSystem(conf);
>   CleanupQueue.getInstance().addToQueue(
>       new PathDeletionContext(jobTempDirPath, conf, userUGI, jobId));
>   ...
>   if (tempDirFs != fs) {
>     try {
>       fs.close();
>     } catch (IOException ie) {
>       ...
>     }
>   }
> }
> {code}

--
This message was sent by Atlassian JIRA
(v6.1#6144)
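The leak mechanism in the quoted snippet can be sketched in isolation. The following is a minimal, self-contained simulation (the class and key names here are hypothetical, not the real org.apache.hadoop.fs types): FileSystem instances are cached per (URI, user) key, so the tempDirFs obtained under the job's userUGI is a different cache entry than the JobTracker's fs, and the cleanup path only closes fs:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for Hadoop's FileSystem.Cache, to illustrate why
// tempDirFs from JobInProgress#cleanupJob is never evicted.
public class FsCacheLeakSketch {
    // Cache keyed roughly like FileSystem.Cache.Key: (scheme+authority, user).
    static final Map<String, MockFs> CACHE = new HashMap<>();

    static class MockFs {
        final String key;
        MockFs(String key) { this.key = key; }
        // The real FileSystem.close() removes the instance from the cache.
        void close() { CACHE.remove(key); }
    }

    static MockFs get(String uri, String user) {
        return CACHE.computeIfAbsent(uri + "@" + user, MockFs::new);
    }

    public static void main(String[] args) {
        // cleanupJob(): fs was obtained earlier as the JobTracker user;
        // tempDirFs is obtained for the job temp dir as the job's user (userUGI).
        MockFs fs = get("hdfs://nn/system", "jobtracker");
        MockFs tempDirFs = get("hdfs://nn/tmp/job_1", "alice");

        // The MAPREDUCE-5351 code path: only fs is closed when the two differ.
        if (tempDirFs != fs) {
            fs.close();
        }

        // tempDirFs is still cached: one leaked entry per completed job/user.
        System.out.println("cache size after cleanup: " + CACHE.size()); // 1
    }
}
```

Running many jobs with mapred.jobtracker.completeuserjobs.maximum = 0, as suggested above, makes this visible as a steadily growing FileSystem#Cache in the JobTracker heap.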