[ https://issues.apache.org/jira/browse/HDFS-464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12784439#action_12784439 ]
Christian Kunz commented on HDFS-464:
-------------------------------------

Now, with libhdfs back in HDFS (finally!), there is no reason not to apply this patch. Most of the memory leaks are on the failure path, but there are a couple that happen on the success path as well (hdfsExists, connecting with a user id). Therefore, I will mark it as a blocker.

BTW, not being a JNI expert, I am not confident that all memory leaks are fixed; there might be more.

> Memory leaks in libhdfs
> -----------------------
>
>              Key: HDFS-464
>              URL: https://issues.apache.org/jira/browse/HDFS-464
>          Project: Hadoop HDFS
>       Issue Type: Bug
> Affects Versions: 0.20.1
>         Reporter: Christian Kunz
>         Assignee: Christian Kunz
>      Attachments: HADOOP-6034.patch, patch.HADOOP-6034, patch.HADOOP-6034.0.18
>
>
> hdfsExists never calls destroyLocalReference for jPath,
> hdfsDelete does not call it when it fails, and
> hdfsRename does not call it for jOldPath and jNewPath when it fails
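For readers less familiar with the failure mode: each of the functions listed above creates a JNI local reference (e.g. jPath for the org.apache.hadoop.fs.Path object) and, on some or all return paths, never releases it, so the references pile up in the JVM until the native thread detaches. Below is a minimal, hypothetical sketch of the discipline the patch applies, written against plain JNI rather than the real hdfs.c internals; destroyLocalReference is assumed to be a thin wrapper over DeleteLocalRef, and sketchExists is a made-up stand-in for hdfsExists, not the actual implementation.

{code:c}
#include <jni.h>

/* Illustrative sketch only, not the actual hdfs.c. destroyLocalReference is
 * assumed to wrap DeleteLocalRef; sketchExists and the surrounding structure
 * are hypothetical. */
static void destroyLocalReference(JNIEnv *env, jobject ref)
{
    if (ref != NULL) {
        (*env)->DeleteLocalRef(env, ref);
    }
}

/* Hypothetical stand-in for hdfsExists: returns 0 if path exists on the
 * given FileSystem object, -1 otherwise or on error. */
int sketchExists(JNIEnv *env, jobject jFS, const char *path)
{
    jclass pathClass = NULL, fsClass = NULL;
    jmethodID pathCtor, existsMid;
    jstring jPathStr = NULL;
    jobject jPath = NULL;
    jboolean exists;
    int ret = -1;

    pathClass = (*env)->FindClass(env, "org/apache/hadoop/fs/Path");
    if (pathClass == NULL) goto done;
    pathCtor = (*env)->GetMethodID(env, pathClass, "<init>",
                                   "(Ljava/lang/String;)V");
    if (pathCtor == NULL) goto done;

    jPathStr = (*env)->NewStringUTF(env, path);
    if (jPathStr == NULL) goto done;

    /* A local reference is created here; it must be released on every exit
     * path, success or failure. The reported leak is returning (early or
     * normally) without doing so. */
    jPath = (*env)->NewObject(env, pathClass, pathCtor, jPathStr);
    if (jPath == NULL) goto done;

    fsClass = (*env)->GetObjectClass(env, jFS);
    existsMid = (*env)->GetMethodID(env, fsClass, "exists",
                                    "(Lorg/apache/hadoop/fs/Path;)Z");
    if (existsMid == NULL) goto done;

    exists = (*env)->CallBooleanMethod(env, jFS, existsMid, jPath);
    if ((*env)->ExceptionCheck(env)) {
        (*env)->ExceptionClear(env);
        goto done;
    }
    ret = (exists == JNI_TRUE) ? 0 : -1;

done:
    /* Single cleanup point so no return path can skip the releases. */
    destroyLocalReference(env, jPath);
    destroyLocalReference(env, jPathStr);
    destroyLocalReference(env, fsClass);
    destroyLocalReference(env, pathClass);
    return ret;
}
{code}

Funneling every return through one cleanup label is one way to make it harder to reintroduce a leaked reference when new error paths are added later.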