[ https://issues.apache.org/jira/browse/HDFS-7891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14366652#comment-14366652 ]

Walter Su commented on HDFS-7891:
---------------------------------

3rd. A network topology of 1k nodes is nothing compared to the metadata of 100M 
files. The block placement policy costs little CPU time, so it will not become 
the bottleneck of the Namenode. This is another reason I think the 
{{maxNodesPerRack}} method will be fine.
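
As a rough illustration of the idea behind a {{maxNodesPerRack}} cap (this is only 
a sketch, not the actual HDFS code; the helper name and the ceiling-division 
formula below are my assumptions): bounding the replicas allowed on any single 
rack forces the chooser to spread replicas over as many racks as possible.

{code:java}
// Minimal sketch, not the HDFS implementation: a per-rack cap that forces
// replicas to spread across racks for better fault tolerance.
public class MaxNodesPerRackSketch {

  // Hypothetical helper: ceiling division gives the smallest per-rack cap
  // that still allows all replicas to be placed, which in turn maximizes
  // the number of racks used.
  static int maxNodesPerRack(int totalReplicas, int numRacks) {
    if (numRacks <= 0) {
      return totalReplicas;                    // degenerate single-rack case
    }
    return (totalReplicas - 1) / numRacks + 1; // ceil(totalReplicas / numRacks)
  }

  public static void main(String[] args) {
    // e.g. 9 replicas (an EC block group) on a 5-rack cluster: a cap of 2
    // per rack forces the replicas onto at least 5 racks.
    System.out.println(maxNodesPerRack(9, 5));  // 2
    System.out.println(maxNodesPerRack(3, 10)); // 1, i.e. one replica per rack
  }
}
{code}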

> A block placement policy with best fault tolerance
> --------------------------------------------------
>
>                 Key: HDFS-7891
>                 URL: https://issues.apache.org/jira/browse/HDFS-7891
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Walter Su
>            Assignee: Walter Su
>         Attachments: HDFS-7891.002.patch, HDFS-7891.patch, 
> PlacementPolicyBenchmark.txt, testresult.txt
>
>
> A block placement policy that tries its best to spread replicas across as many racks as possible.



