[ https://issues.apache.org/jira/browse/HDFS-13573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16478806#comment-16478806 ]
Yiqun Lin commented on HDFS-13573:
----------------------------------

{quote}
I would suggest not to leave out the scenario when the writer is not on a datanode and have the following: ...
{quote}
Change looks good to me. [~zvenczel], feel free to attach the updated patch.

> Javadoc for BlockPlacementPolicyDefault is inaccurate
> -----------------------------------------------------
>
>                 Key: HDFS-13573
>                 URL: https://issues.apache.org/jira/browse/HDFS-13573
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 3.1.0
>            Reporter: Yiqun Lin
>            Assignee: Zsolt Venczel
>            Priority: Trivial
>         Attachments: HDFS-13573.01.patch
>
>
> The current rule of the default block placement policy:
> {quote}
> The replica placement strategy is that if the writer is on a datanode,
> the 1st replica is placed on the local machine,
> otherwise a random datanode. The 2nd replica is placed on a datanode
> that is on a different rack. The 3rd replica is placed on a datanode
> which is on a different node of the rack as the second replica.
> {quote}
> *If the writer is on a datanode, the 1st replica is placed on the local
> machine*: in fact, this can be decided by the HDFS client. The client can
> pass {{CreateFlag#NO_LOCAL_WRITE}} to request that no block replica be
> placed on the local datanode. Subsequent replicas still follow the default
> block placement policy.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
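The placement order described in the quoted javadoc, together with the {{CreateFlag#NO_LOCAL_WRITE}} exception discussed above, can be sketched as a small self-contained simulation. This is an illustrative sketch only, not the real BlockPlacementPolicyDefault code: the topology map, node names, and the choose3 helper are all assumptions made up for the example, and it assumes at least two racks with two nodes each.

```java
import java.util.*;

// Illustrative sketch of the default placement order -- NOT the real
// org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.
public class PlacementSketch {

    // Picks three replica locations following the order described in the
    // javadoc under discussion. 'topology' maps datanode name -> rack.
    static List<String> choose3(String writer, boolean noLocalWrite,
                                Map<String, String> topology, Random rnd) {
        List<String> nodes = new ArrayList<>(topology.keySet());
        List<String> chosen = new ArrayList<>();

        // 1st replica: the writer's own node, unless the client asked for
        // NO_LOCAL_WRITE or the writer is not a datanode; otherwise random.
        String first = (!noLocalWrite && topology.containsKey(writer))
                ? writer
                : nodes.get(rnd.nextInt(nodes.size()));
        chosen.add(first);

        // 2nd replica: any datanode on a different rack than the 1st.
        String firstRack = topology.get(first);
        List<String> offRack = new ArrayList<>();
        for (String n : nodes) {
            if (!topology.get(n).equals(firstRack)) offRack.add(n);
        }
        String second = offRack.get(rnd.nextInt(offRack.size()));
        chosen.add(second);

        // 3rd replica: a different datanode on the SAME rack as the 2nd.
        String secondRack = topology.get(second);
        List<String> sameRack = new ArrayList<>();
        for (String n : nodes) {
            if (!n.equals(second) && topology.get(n).equals(secondRack)) {
                sameRack.add(n);
            }
        }
        chosen.add(sameRack.get(rnd.nextInt(sameRack.size())));
        return chosen;
    }

    public static void main(String[] args) {
        Map<String, String> topology = new LinkedHashMap<>();
        topology.put("dn1", "/rack1");
        topology.put("dn2", "/rack1");
        topology.put("dn3", "/rack2");
        topology.put("dn4", "/rack2");
        // Writer dn1 is a datanode and NO_LOCAL_WRITE is not set,
        // so the 1st replica lands on dn1.
        System.out.println(choose3("dn1", false, topology, new Random()));
    }
}
```

With NO_LOCAL_WRITE set (the second argument true), the sketch picks the first replica at random instead of on the writer's node, which matches the client-side behavior the issue description points out; the 2nd and 3rd replicas still follow the default rack-aware order either way.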