[ https://issues.apache.org/jira/browse/HDFS-10549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15417555#comment-15417555 ]

Xiao Chen commented on HDFS-10549:
----------------------------------

bq. Can you help mark that jira? It seems that I am not allowed to do that in 
HADOOP-COMMON jiras.
Done. Seems like you're not listed as a contributor on common, only hdfs. I 
can't add you though (it reported a server error when I tried); hope someone 
watching this can help. Thanks!

> Memory leak if exception happens when closing files being written
> -----------------------------------------------------------------
>
>                 Key: HDFS-10549
>                 URL: https://issues.apache.org/jira/browse/HDFS-10549
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs-client
>    Affects Versions: 2.7.1
>            Reporter: Yiqun Lin
>            Assignee: Yiqun Lin
>         Attachments: HDFS-10549.001.patch, HDFS-10549.002.patch, 
> HDFS-10549.003.patch
>
>
> As HADOOP-13264 mentioned, {{dfsClient.endFileLease(fileId)}} in 
> {{DFSOutputStream#close()}} is not executed when an IOException is thrown 
> from {{closeImpl()}}.
> {code}
>   public void close() throws IOException {
>     synchronized (this) {
>       try (TraceScope ignored =
>           dfsClient.newPathTraceScope("DFSOutputStream#close", src)) {
>         closeImpl();
>       }
>     }
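>     // BUG: never reached if closeImpl() above throws an IOException.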
>     dfsClient.endFileLease(fileId);
>   }
> {code}
> This causes the file to stay registered as being written in {{DFSClient}}, 
> which leads to a memory leak. The same problem exists in 
> {{DFSStripedOutputStream}}.
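> One way to avoid the leak (a sketch for illustration, not necessarily what 
> the attached patches do) is to move the {{endFileLease(fileId)}} call into a 
> {{finally}} block so it runs even when {{closeImpl()}} throws:
> {code}
>   public void close() throws IOException {
>     try {
>       synchronized (this) {
>         try (TraceScope ignored =
>             dfsClient.newPathTraceScope("DFSOutputStream#close", src)) {
>           closeImpl();
>         }
>       }
>     } finally {
>       // Runs on both the normal and the exceptional path, so the client
>       // always drops its record of this file being written.
>       dfsClient.endFileLease(fileId);
>     }
>   }
> {code}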



