[ https://issues.apache.org/jira/browse/HDFS-13279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432482#comment-16432482 ]

Tao Jie commented on HDFS-13279:
--------------------------------

Thank you [~ajayydv] for the reply!
1. I have set up another JIRA, HDFS-13418, and I will try not to modify any of the
current logic.
2. Let me try to make my point clear. There are two approaches to choosing the right
node:
# As you mentioned, first choose the right rack according to its weight, then choose a
node from the selected rack.
# As the code in the patch does, first choose a node as before, then check whether
another choice is needed according to the weight of the chosen rack (see the sketch
after this list).

I understand approach 1 could be more accurate, but approach 2 also works well in my
tests. I prefer approach 2 because it reuses, rather than rewrites, the current
node-choosing logic.
Regarding performance, approach 2 does have some overhead: for a certain proportion of
choices we may have to choose twice, but I don't think that chance would exceed 10% in
a typical case. For approach 1, I think we would need more performance testing, since
the choosing logic is changed completely and appears heavier than the current logic.
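
(A rough sanity check on that figure, under my assumption that a second pick is only triggered when the first pick lands on the under-sized rack: with 15/15/15/5 nodes, the first pick lands on the 5-node rack for at most 5/35 ≈ 14% of choices when one large rack is excluded, and 5/45 ≈ 11% when none is, and only some of those hits would actually require a second pick, so an overall re-choose rate at or below roughly 10% looks plausible.)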


> Datanodes usage is imbalanced if number of nodes per rack is not equal
> ----------------------------------------------------------------------
>
>                 Key: HDFS-13279
>                 URL: https://issues.apache.org/jira/browse/HDFS-13279
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.8.3, 3.0.0
>            Reporter: Tao Jie
>            Assignee: Tao Jie
>            Priority: Major
>         Attachments: HDFS-13279.001.patch, HDFS-13279.002.patch, 
> HDFS-13279.003.patch, HDFS-13279.004.patch, HDFS-13279.005.patch
>
>
> In a Hadoop cluster, the number of nodes per rack can differ. For example, with 50 
> datanodes in all and 15 datanodes per rack, 5 nodes remain on the last rack. In this 
> situation, we find that storage usage on those last 5 nodes is much higher than on 
> the other nodes.
>  With the default block placement policy, for each block the first replica is 
> written to each datanode with equal probability, but the probability that the 
> 2nd/3rd replica is written to one of the last 5 nodes is much higher than for the 
> other nodes.
>  Consider writing 50 blocks to these 50 datanodes. The first replicas of the 50 
> blocks are distributed to the 50 nodes equally. For the blocks whose first replica 
> is on rack1 (15 replicas), the 2nd replica is spread equally over the other 35 
> nodes, so each of those nodes receives 15/35 ≈ 0.43 replicas; the same holds for 
> blocks on rack2 and rack3. As a result, each node on rack4 (5 nodes) receives about 
> 1.29 replicas in total, while every other node receives about 0.97.
> ||-||Rack1(15 nodes)||Rack2(15 nodes)||Rack3(15 nodes)||Rack4(5 nodes)||
> |From rack1|-|15/35=0.43|0.43|0.43|
> |From rack2|0.43|-|0.43|0.43|
> |From rack3|0.43|0.43|-|0.43|
> |From rack4|5/45=0.11|0.11|0.11|-|
> |Total|0.97|0.97|0.97|1.29|
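
For reference, the expected per-node figures in the table above can be re-derived with a short standalone snippet; this is just the arithmetic from the description, not HDFS code:

{code:java}
// Re-derive the table above: each rack sends one remote replica per local
// first replica, spread evenly over the nodes of the other racks.
public class ReplicaImbalance {
  public static void main(String[] args) {
    int[] rackSizes = {15, 15, 15, 5};     // Rack1..Rack4
    int totalNodes = 50;                   // one first replica per node
    double[] perNode = new double[rackSizes.length];

    for (int src = 0; src < rackSizes.length; src++) {
      int remoteNodes = totalNodes - rackSizes[src];
      for (int dst = 0; dst < rackSizes.length; dst++) {
        if (dst != src) {
          // rackSizes[src] blocks have their 1st replica on rack 'src'; their
          // 2nd replica goes uniformly to one of the 'remoteNodes' other nodes.
          perNode[dst] += (double) rackSizes[src] / remoteNodes;
        }
      }
    }
    for (int r = 0; r < rackSizes.length; r++) {
      System.out.printf("Rack%d: %.2f remote replicas per node%n", r + 1, perNode[r]);
    }
    // Prints ~0.97 for Rack1-3 and ~1.29 for Rack4, matching the table.
  }
}
{code}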


