[ https://issues.apache.org/jira/browse/HDFS-559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12747356#action_12747356 ]
Steve Loughran commented on HDFS-559:
-------------------------------------

64-bit, no compressed oops:
sizeof(BlockInfo) = 56
sizeof(INode) = 80
sizeof(INodeDirectory) = 64
sizeof(INodeDirectoryWithQuota) = 96
sizeof(DatanodeDescriptor) = 168 = bigger

> Work out the memory consumption of NN artifacts on a compressed pointer JVM
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-559
>                 URL: https://issues.apache.org/jira/browse/HDFS-559
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>    Affects Versions: 0.21.0
>        Environment: 64-bit and 32-bit JVMs, Java6u14 and JDK 7 betas, with -XX compressed oops enabled/disabled
>           Reporter: Steve Loughran
>           Assignee: Steve Loughran
>           Priority: Minor
>
> Following up HADOOP-1687, it would be nice to know the size of these datatypes under the Java 6u14 JVM, which offers compressed pointers.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
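The per-instance figures like these can be approximated with a back-of-envelope shallow-size estimate. The sketch below is purely illustrative: the field counts are hypothetical (not taken from the actual HDFS classes), and it assumes the usual 64-bit HotSpot layout of a 16-byte object header (12 bytes with compressed oops), 8-byte references (4 bytes compressed), and 8-byte alignment. Real layouts vary by JVM version and field packing.

```java
// Hypothetical shallow-size estimator for a 64-bit HotSpot JVM.
// Assumptions (not measured): 16-byte header / 8-byte refs without
// compressed oops; 12-byte header / 4-byte refs with them; 8-byte alignment.
public class OopSizeEstimate {
    static long estimate(int refs, int longs, int ints, boolean compressedOops) {
        long header = compressedOops ? 12 : 16;
        long refSize = compressedOops ? 4 : 8;
        long bytes = header + (long) refs * refSize + longs * 8L + ints * 4L;
        return (bytes + 7) / 8 * 8;   // round up to 8-byte alignment
    }

    public static void main(String[] args) {
        // An imaginary object with 5 reference fields and one long field:
        System.out.println(estimate(5, 1, 0, false));  // 64 bytes, plain 64-bit
        System.out.println(estimate(5, 1, 0, true));   // 40 bytes, compressed oops
    }
}
```

Even on this rough model, compressed oops cut the reference-heavy object by roughly a third, which is the kind of saving this issue is trying to quantify for the NameNode's data structures.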