[ 
https://issues.apache.org/jira/browse/HDFS-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14062514#comment-14062514
 ] 

Colin Patrick McCabe commented on HDFS-6658:
--------------------------------------------

bq. If we don't have caching, we need to cope with the added latency of 
off-heap memory - it is, after all, backed by a file.

Amir, there's no file involved.  See my comment here: 
https://issues.apache.org/jira/browse/HDFS-6658?focusedCommentId=14061374&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14061374

I'm talking about memory.  Memory, not disk.  It is simply RAM that is not 
managed by the JVM.  There's more information here: 
http://stackoverflow.com/questions/6091615/difference-between-on-heap-and-off-heap.
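
To make the distinction concrete, here is a minimal Java sketch (illustrative 
only, not part of any patch): a direct ByteBuffer is plain native RAM outside 
the GC-managed heap, and a file only enters the picture if you explicitly 
memory-map one via FileChannel.map.

{code:java}
import java.nio.ByteBuffer;

public class OffHeapExample {
  public static void main(String[] args) {
    // Direct buffer: native memory outside the JVM-managed heap.
    // No file is involved; the GC does not scan or move its contents.
    ByteBuffer offHeap = ByteBuffer.allocateDirect(64 * 1024 * 1024); // 64 MB

    offHeap.putLong(0, 123456789L);   // write at absolute offset 0
    long value = offHeap.getLong(0);  // read it back
    System.out.println(value);

    // A heap buffer would be ByteBuffer.allocate(...); file-backed memory
    // would require an explicit FileChannel.map(...) call.
  }
}
{code}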

> Namenode memory optimization - Block replicas list 
> ---------------------------------------------------
>
>                 Key: HDFS-6658
>                 URL: https://issues.apache.org/jira/browse/HDFS-6658
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 2.4.1
>            Reporter: Amir Langer
>            Assignee: Amir Langer
>         Attachments: Namenode Memory Optimizations - Block replicas list.docx
>
>
> Part of the memory consumed by every BlockInfo object in the Namenode is a 
> linked list of block references per DatanodeStorageInfo (the so-called 
> "triplets"). 
> We propose to change the way we store the list in memory. 
> Using primitive integer indexes instead of object references will reduce the 
> memory needed for every block replica (when compressed oops is disabled), and 
> in our new design the list overhead will be per DatanodeStorageInfo rather 
> than per block replica.
> See the attached design doc for details and evaluation results.
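
As a rough illustration of the layout the description above proposes (class and 
field names are invented for this sketch; see the attached design doc for the 
actual scheme), the per-storage list can hold primitive int indexes into a 
block table instead of object references:

{code:java}
import java.util.Arrays;

/**
 * Illustrative only: a per-DatanodeStorageInfo replica list that stores
 * primitive int indexes into a global block table instead of object
 * references. Names are hypothetical, not the ones used in the patch.
 */
class StorageReplicaList {
  private int[] blockIndexes = new int[8]; // 4 bytes per replica, regardless of oop size
  private int size = 0;

  void add(int blockIndex) {
    if (size == blockIndexes.length) {
      blockIndexes = Arrays.copyOf(blockIndexes, size * 2); // amortized growth
    }
    blockIndexes[size++] = blockIndex;
  }

  void removeAt(int pos) {
    // Replica order is not significant, so fill the hole with the last entry.
    blockIndexes[pos] = blockIndexes[--size];
  }

  int get(int pos) { return blockIndexes[pos]; }
  int size()       { return size; }
}
{code}

Here the array overhead is paid once per storage rather than once per block 
replica, whereas the current "triplets" layout costs three object references 
per replica (8 bytes each when compressed oops is disabled).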



--
This message was sent by Atlassian JIRA
(v6.2#6252)
