[ 
http://issues.apache.org/jira/browse/HADOOP-459?page=comments#action_12446709 ] 
            
Sameer Paranjpye commented on HADOOP-459:
-----------------------------------------

libhdfs has a number of deficiencies that need to be addressed:
- Memory leaks when reading/writing HDFS files; the leaks occur not only in 
I/O operations but in a large number of functions that fail to release local 
references to Java objects
- Filesystem and file handles obtained from libhdfs cannot be used in multiple 
threads
- Code needs some refactoring
  - there are a number of global functions defined in hdfsJniHelper.h, a header 
which is fortunately included only in one place at this time
  - system constants like O_RDONLY and O_WRONLY are re-defined
- hdfsGetPathInfo does not work if the file or directory being passed in 
doesn't exist
- An hdfsExists function to emulate fs.exists is not available
- A function to free the data structure allocated by hdfsGetHosts is not 
available
- A Configuration object is created every time a file is opened, instead of 
being obtained with fs.getConf
- Return values from Java methods are not copied into the 'jvalue' type, 
which is unsafe and causes stack corruption in some cases




> libhdfs leaks memory when writing to files
> ------------------------------------------
>
>                 Key: HADOOP-459
>                 URL: http://issues.apache.org/jira/browse/HADOOP-459
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.5.0
>            Reporter: Christian Kunz
>         Assigned To: Sameer Paranjpye
>
> hdfsWrite leaks memory when called repeatedly. The same probably applies to 
> repeated reads using hdfsRead

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
