[ 
https://issues.apache.org/jira/browse/HDFS-8131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558746#comment-16558746
 ] 

Yongjun Zhang commented on HDFS-8131:
-------------------------------------

Hm, just noticed HDFS-4946, which is relevant to my comment #3 above.

Thanks.

> Implement a space balanced block placement policy
> -------------------------------------------------
>
>                 Key: HDFS-8131
>                 URL: https://issues.apache.org/jira/browse/HDFS-8131
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 3.0.0-alpha1
>            Reporter: Liu Shaohui
>            Assignee: Liu Shaohui
>            Priority: Minor
>              Labels: BlockPlacementPolicy
>             Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1
>
>         Attachments: HDFS-8131-branch-2.7.patch, HDFS-8131-v1.diff, 
> HDFS-8131-v2.diff, HDFS-8131-v3.diff, HDFS-8131.004.patch, 
> HDFS-8131.005.patch, HDFS-8131.006.patch, balanced.png
>
>
> The default block placement policy chooses datanodes for new blocks 
> randomly, which results in an unbalanced space-used percentage across 
> datanodes after a cluster expansion: the old datanodes stay at a high 
> used percentage while the newly added ones stay low.
> Although the external balancer tool can even out the space usage, it 
> costs extra network IO and its speed is hard to control.
> An easy solution is to implement a space-balanced block placement policy 
> that chooses datanodes with a lower used percentage for new blocks with 
> a slightly higher probability. Over time, the used percentage of the 
> datanodes will trend toward balance.
> Suggestions and discussions are welcome. Thanks
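
For illustration, below is a minimal, self-contained sketch of the 
weighted-choice idea described above: when deciding between two candidate 
datanodes, prefer the one with the lower space-used percentage with a 
probability slightly above one half. This is a sketch only, not the 
attached patch; the Datanode class and all other names here are 
hypothetical.

{code:java}
import java.util.Random;

/**
 * Sketch of a space-balanced choice between two candidate datanodes:
 * the less-used node wins with a configurable probability > 0.5, so
 * placement stays mostly random but usage converges over time.
 */
public class SpaceBalancedChoiceSketch {

    /** Hypothetical stand-in for a datanode's storage statistics. */
    static final class Datanode {
        final String name;
        final long capacityBytes;
        final long usedBytes;

        Datanode(String name, long capacityBytes, long usedBytes) {
            this.name = name;
            this.capacityBytes = capacityBytes;
            this.usedBytes = usedBytes;
        }

        double usedPercent() {
            return 100.0 * usedBytes / capacityBytes;
        }
    }

    private final Random random = new Random();

    /** Probability of preferring the less-used node; in (0.5, 1.0]. */
    private final double balancedPreference;

    SpaceBalancedChoiceSketch(double balancedPreference) {
        this.balancedPreference = balancedPreference;
    }

    /**
     * Choose between two candidates, favoring the node with the lower
     * space-used percentage with probability balancedPreference.
     */
    Datanode choose(Datanode a, Datanode b) {
        Datanode lessUsed = a.usedPercent() <= b.usedPercent() ? a : b;
        Datanode moreUsed = (lessUsed == a) ? b : a;
        return random.nextDouble() < balancedPreference ? lessUsed : moreUsed;
    }

    public static void main(String[] args) {
        // 1 TiB nodes: an old node at ~78% used, a new node at ~5% used.
        Datanode oldNode = new Datanode("dn-old", 1L << 40, 800L << 30);
        Datanode newNode = new Datanode("dn-new", 1L << 40, 50L << 30);
        SpaceBalancedChoiceSketch policy = new SpaceBalancedChoiceSketch(0.6);

        int newNodeWins = 0;
        int trials = 100_000;
        for (int i = 0; i < trials; i++) {
            if (policy.choose(oldNode, newNode) == newNode) {
                newNodeWins++;
            }
        }
        // Expect roughly 60% of new blocks to land on the emptier node.
        System.out.printf("new node chosen %.1f%% of the time%n",
            100.0 * newNodeWins / trials);
    }
}
{code}

A preference close to 0.5 skews placement only gently, preserving most of 
the randomness of the default policy while still letting the space usage 
of the datanodes converge over time, as the description suggests.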



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
