[ https://issues.apache.org/jira/browse/HDFS-1499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12931972#action_12931972 ]

Allen Wittenauer commented on HDFS-1499:
----------------------------------------

What do you store the HBase files in, then?

> mv the namenode NameSpace and BlocksMap to hbase to save the namenode memory
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-1499
>                 URL: https://issues.apache.org/jira/browse/HDFS-1499
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>            Reporter: dl.brain.ln
>
> The NameNode stores all its metadata in the main memory of the machine on 
> which it is deployed. As the file count and block count grow, the namenode 
> machine can't hold any more files and blocks in its memory, which restricts 
> the growth of the HDFS cluster. Many people are talking and thinking about 
> this problem. Google's next version of GFS uses Bigtable to store the 
> metadata of the DFS, and that seems to work. What if we use HBase in the 
> same way?
> In the namenode structure, the filesystem namespace and the maps of 
> block -> datanodes and datanode -> blocks, which are kept in memory, consume 
> most of the namenode's heap. What if we store those data structures in HBase 
> to decrease the namenode's memory usage?
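
For illustration only, and not part of the issue or any patch: below is a minimal
sketch of what keeping the block -> datanodes map in HBase might look like, using
the standard HBase Java client. The table name "blocksmap", the column family
"loc", and the layout (one row per block id, one column qualifier per datanode
holding a replica) are assumptions invented for this sketch, not anything taken
from the proposal; it also sidesteps the consistency and latency questions a real
design would have to answer.

// Sketch: block -> datanodes map persisted in an HBase table.
// Assumes a pre-created table "blocksmap" with one column family "loc";
// row key = block id, one column qualifier per datanode with a replica.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BlocksMapSketch {
    private static final byte[] LOC = Bytes.toBytes("loc");

    // Record that the given datanode holds a replica of the given block.
    static void addReplica(Table table, long blockId, String datanodeId)
            throws IOException {
        Put put = new Put(Bytes.toBytes(blockId));
        put.addColumn(LOC, Bytes.toBytes(datanodeId),
                Bytes.toBytes(System.currentTimeMillis()));
        table.put(put);
    }

    // Return every datanode currently recorded for the given block.
    static List<String> getReplicas(Table table, long blockId) throws IOException {
        Result result = table.get(new Get(Bytes.toBytes(blockId)));
        List<String> datanodes = new ArrayList<>();
        NavigableMap<byte[], byte[]> locations = result.getFamilyMap(LOC);
        if (locations != null) {
            for (byte[] qualifier : locations.keySet()) {
                datanodes.add(Bytes.toString(qualifier));
            }
        }
        return datanodes;
    }

    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("blocksmap"))) {
            addReplica(table, 1073741825L, "dn-10.0.0.12:50010");
            for (String dn : getReplicas(table, 1073741825L)) {
                System.out.println("block 1073741825 replica on " + dn);
            }
        }
    }
}

The reverse datanode -> blocks map could presumably be modeled the same way with
the datanode id as the row key, at the cost of a second write per replica.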

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
