[ https://issues.apache.org/jira/browse/HDFS-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14063077#comment-14063077 ]
Amir Langer commented on HDFS-6658:
-----------------------------------

[~cmccabe] When I looked at off-heap, I was using a direct memory buffer. That allowed me to move the data off the Java heap. What you're suggesting (if I understand correctly) is that we simply allocate and deallocate the memory ourselves, right? I'm not sure about the cost of deallocation (and of managing it), but in any case you're not reducing RAM usage, you're just avoiding the GC. That seems completely orthogonal to reducing the RAM needed by the service.

> Namenode memory optimization - Block replicas list
> ---------------------------------------------------
>
>                 Key: HDFS-6658
>                 URL: https://issues.apache.org/jira/browse/HDFS-6658
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 2.4.1
>            Reporter: Amir Langer
>            Assignee: Amir Langer
>         Attachments: Namenode Memory Optimizations - Block replicas list.docx
>
>
> Part of the memory consumed by every BlockInfo object in the Namenode is a linked list of block references for every DatanodeStorageInfo (called "triplets").
> We propose to change the way we store this list in memory.
> Using primitive integer indexes instead of object references will reduce the memory needed for every block replica (when compressed oops is disabled), and in the new design the list overhead will be per DatanodeStorageInfo rather than per block replica.
> See the attached design doc for details and evaluation results.
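To make the proposal concrete, below is a minimal Java sketch of an index-based replica list in the spirit of the description above. It is not the HDFS-6658 patch; the class IntReplicaList and its fields are hypothetical, and it assumes a block replica can be identified by a primitive int ID. The point it illustrates is that each storage owns a few parallel int arrays, so the per-replica cost is a couple of ints and the list overhead (array headers, growth slack) is paid once per storage rather than once per block replica.

{code:java}
import java.util.Arrays;

// Hypothetical sketch: a per-storage replica list backed by primitive int
// arrays instead of per-block object-reference "triplets".
public class IntReplicaList {
  private static final int NIL = -1;

  private int[] blockIds = new int[16]; // block ID stored in each slot
  private int[] next     = new int[16]; // index of the next slot in the list
  private int head = NIL;               // first slot of this storage's list
  private int size = 0;

  /** Adds a block ID to the front of this storage's replica list. */
  public void add(int blockId) {
    if (size == blockIds.length) {
      blockIds = Arrays.copyOf(blockIds, size * 2);
      next     = Arrays.copyOf(next, size * 2);
    }
    blockIds[size] = blockId;
    next[size] = head;   // link the new slot in front of the current head
    head = size;
    size++;
  }

  /** Walks the list using int indexes only; no per-replica objects exist. */
  public void forEach(java.util.function.IntConsumer action) {
    for (int slot = head; slot != NIL; slot = next[slot]) {
      action.accept(blockIds[slot]);
    }
  }

  public static void main(String[] args) {
    IntReplicaList replicasOnStorage = new IntReplicaList();
    replicasOnStorage.add(101);
    replicasOnStorage.add(102);
    replicasOnStorage.add(103);
    replicasOnStorage.forEach(id -> System.out.println("block " + id));
  }
}
{code}

In a sketch like this, adding or iterating replicas touches only primitive arrays, so the list itself creates no per-replica Java objects and no additional GC pressure, which is the memory saving the design doc targets.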