[ https://issues.apache.org/jira/browse/HDFS-6088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13931464#comment-13931464 ]

Todd Lipcon commented on HDFS-6088:
-----------------------------------

Any chance we could determine this automatically based on heap size? It would
be nice to avoid having yet another config that users have to set.
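
For a rough sense of what that could look like, here is a minimal, purely
illustrative Java sketch that derives a block-count ceiling from the
datanode's own heap. The per-replica overhead constant and the reserved heap
fraction are assumptions for the example, not values taken from HDFS:

    // Illustrative sketch only: derive a block-count ceiling from the DN heap.
    // ASSUMED_BYTES_PER_REPLICA is a guessed per-replica memory overhead, not
    // an actual HDFS constant; it would have to be tuned against measurements.
    public class HeapBasedBlockLimit {
      private static final long ASSUMED_BYTES_PER_REPLICA = 300L;
      // Reserve half the heap for everything else the datanode keeps in memory.
      private static final double REPLICA_MAP_HEAP_FRACTION = 0.5;

      public static long estimateMaxBlocks() {
        long maxHeap = Runtime.getRuntime().maxMemory();
        return (long) (maxHeap * REPLICA_MAP_HEAP_FRACTION)
            / ASSUMED_BYTES_PER_REPLICA;
      }

      public static void main(String[] args) {
        System.out.println("Estimated max blocks: " + estimateMaxBlocks());
      }
    }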

> Add configurable maximum block count for datanode
> -------------------------------------------------
>
>                 Key: HDFS-6088
>                 URL: https://issues.apache.org/jira/browse/HDFS-6088
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Kihwal Lee
>
> Currently, datanode resources are protected by the free space check and the
> balancer. But datanodes can run out of memory simply by storing too many
> blocks. If the blocks are small, datanodes will appear to have plenty of
> space to accept more blocks.
> I propose adding a configurable maximum block count to the datanode. Since
> datanodes can have different heap configurations, it makes sense to enforce
> this at the datanode level rather than in the namenode.
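
As a sketch of what such a datanode-level limit might look like: the config
key, class, and method names below are hypothetical examples, not actual HDFS
code or existing configuration properties.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;

    // Illustrative sketch only: a datanode-side check against a configurable
    // maximum block count. "dfs.datanode.max-block-count" is a hypothetical
    // key chosen for this example; it is not an existing HDFS property.
    public class BlockCountLimitCheck {
      static final String MAX_BLOCKS_KEY = "dfs.datanode.max-block-count";

      static void checkBlockCount(Configuration conf, long currentBlockCount)
          throws IOException {
        long maxBlocks = conf.getLong(MAX_BLOCKS_KEY, Long.MAX_VALUE);
        if (currentBlockCount >= maxBlocks) {
          throw new IOException("Datanode already holds " + currentBlockCount
              + " block replicas, reaching the configured limit of "
              + maxBlocks);
        }
      }
    }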


