[ https://issues.apache.org/jira/browse/HDFS-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15416627#comment-15416627 ]
Yiqun Lin commented on HDFS-10549:
----------------------------------

Thanks [~xiaochen] for the comments. I agree with your idea. Posting the new patch, which adds the unit test. The test passes in my local environment, and it fails if we keep the original logic in {{DFSClient#closeAllFilesBeingWritten}}. Thanks for the review.

> Memory leak if exception happens when closing files being written
> -----------------------------------------------------------------
>
>                 Key: HDFS-10549
>                 URL: https://issues.apache.org/jira/browse/HDFS-10549
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client
>    Affects Versions: 2.7.1
>            Reporter: Yiqun Lin
>            Assignee: Yiqun Lin
>         Attachments: HDFS-10549.001.patch, HDFS-10549.002.patch
>
> As HADOOP-13264 mentioned, the call {{dfsClient.endFileLease(fileId)}} in
> {{DFSOutputStream}} is not executed when an IOException happens in
> {{closeImpl()}}:
> {code}
>   public void close() throws IOException {
>     synchronized (this) {
>       try (TraceScope ignored =
>           dfsClient.newPathTraceScope("DFSOutputStream#close", src)) {
>         closeImpl();
>       }
>     }
>     dfsClient.endFileLease(fileId);
>   }
> {code}
> As a result, such files are never removed from the open-file tracking in
> {{DFSClient}}, which ultimately leads to a memory leak.
> {{DFSStripedOutputStream}} has the same problem.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
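The leak described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the actual HDFS code: the {{LeaseTracker}} class, its map, and its method names are invented stand-ins for the lease bookkeeping that {{DFSClient}} performs. It shows why calling {{endFileLease}} after {{closeImpl()}} leaks the entry when {{closeImpl()}} throws, and why a {{finally}} block avoids that.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for DFSClient's tracking of files being written.
class LeaseTracker {
    private final Map<Long, String> filesBeingWritten = new HashMap<>();

    void beginFileLease(long fileId, String src) {
        filesBeingWritten.put(fileId, src);
    }

    void endFileLease(long fileId) {
        filesBeingWritten.remove(fileId);
    }

    int openCount() {
        return filesBeingWritten.size();
    }

    // Buggy pattern: if closeImpl() throws, endFileLease is never
    // reached and the map entry leaks.
    void closeLeaky(long fileId) throws IOException {
        closeImpl();
        endFileLease(fileId);
    }

    // Safe pattern: finally guarantees the entry is removed even when
    // closeImpl() throws.
    void closeSafe(long fileId) throws IOException {
        try {
            closeImpl();
        } finally {
            endFileLease(fileId);
        }
    }

    // Simulates a close that always fails, e.g. a DataNode error.
    private void closeImpl() throws IOException {
        throw new IOException("simulated failure during close");
    }
}
```

With the leaky pattern the tracker still holds the entry after the failed close; with the safe pattern it does not, which mirrors the intent of the fix to {{DFSClient#closeAllFilesBeingWritten}}.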