[ https://issues.apache.org/jira/browse/HDFS-559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12745890#action_12745890 ]

Steve Loughran commented on HDFS-559:
-------------------------------------

There is a way to do this in the Java Instrumentation API:
http://java.sun.com/javase/6/docs/api/java/lang/instrument/Instrumentation.html#getObjectSize%28java.lang.Object%29

It just needs someone to write an instrumentation agent that can be loaded 
within Hadoop, even in the unit tests, and print out the sizes of the various 
classes. Testing this on different JVMs would give us a table of the cost of 
the directory structure on different machines.
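
As a rough illustration, a minimal sketch of such an agent is below. The class 
name ObjectSizeAgent and the jar name are made up for this example, and this is 
not part of Hadoop:

import java.lang.instrument.Instrumentation;

public final class ObjectSizeAgent {
    private static volatile Instrumentation instrumentation;

    // Called by the JVM before main() when the agent is loaded via -javaagent.
    public static void premain(String agentArgs, Instrumentation inst) {
        instrumentation = inst;
    }

    // Shallow, JVM-specific size in bytes of the given object.
    public static long sizeOf(Object o) {
        Instrumentation inst = instrumentation;
        if (inst == null) {
            throw new IllegalStateException(
                "Agent not loaded; run with -javaagent:object-size-agent.jar");
        }
        return inst.getObjectSize(o);
    }
}

The agent jar's manifest would need a Premain-Class: ObjectSizeAgent entry, and 
the JVM (or the test run) would be started with -javaagent pointing at that jar; 
a test could then call ObjectSizeAgent.sizeOf() on the namenode data structures 
and log the results per JVM.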

It would still be good to run the HADOOP-1687 how-much-work-before-OOM 
benchmarks, as that is the most realistic test of data structure size.

> Work out the memory consumption of NN artifacts on a compressed pointer JVM
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-559
>                 URL: https://issues.apache.org/jira/browse/HDFS-559
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>    Affects Versions: 0.21.0
>         Environment: 64-bit and 32 bit JVMs, Java6u14 and jdk7 betas, with 
> -XX compressed oops enabled/disabled
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>
> Following up HADOOP-1687, it would be nice to know the size of the datatypes 
> under the Java6u14 JVM, which offers compressed pointers.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
