Hi Lars,

I think BlockPlacementPolicyRackFaultTolerant can do it.
This policy tries to place the three replicas on three different racks. The
default policy only guarantees two distinct racks (the second and third
replicas share one remote rack), which is why all three replicas can end up
under /dc1/* when /dc1 contains more than one rack.

<property>
  <name>dfs.block.replicator.classname</name>
  <value>org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant</value>
</property>
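
The property goes into hdfs-site.xml on the NameNode and takes effect after a
NameNode restart. Once it is active you can check where the replicas of a file
actually landed with fsck (assuming /data/example is an existing path on your
cluster):

# Prints each block of the file together with the network location
# (e.g. /dc1/rack1) of every replica that holds it.
hdfs fsck /data/example -files -blocks -racks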

See also:
https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyRackFaultTolerant.java

Thanks,
- Takanobu
________________________________________
From: Lars Francke <lars.fran...@gmail.com>
Sent: Thursday, July 4, 2019 18:15
To: hdfs-user@hadoop.apache.org
Subject: BlockPlacementPolicy question with hierarchical topology

Hi,

I have a customer who wants to make sure that copies of their data are
distributed across datacenters, so they are using rack names like
/dc1/rack1, /dc1/rack2, /dc2/rack1, etc.
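
Such locations are typically produced by a rack-awareness script configured
via net.topology.script.file.name; a minimal sketch, assuming a hypothetical
/etc/hadoop/conf/topology.map file with "hostname location" lines:

#!/bin/bash
# Hadoop calls this script with one or more IPs/hostnames as arguments
# and expects one network location per argument on stdout.
# topology.map lines look like: host1.example.com /dc1/rack1
MAP=/etc/hadoop/conf/topology.map
for host in "$@"; do
  loc=$(awk -v h="$host" '$1 == h {print $2}' "$MAP")
  # Unknown hosts fall back to a default rack.
  echo "${loc:-/default/rack}"
done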

Unfortunately, BlockPlacementPolicyDefault sometimes seems to place all
replicas of a block under /dc1/*.

Is there a way to guarantee that /dc1/* and /dc2/* will be used in this 
scenario?

Looking at chooseRandomWithStorageTypeTwoTrial, it seems to consider the full
"scope" string rather than its path components. I couldn't find anything in the
code, but I hoped I was missing something: is there a way to configure HDFS for
the behaviour I'd like?

Thanks!

Lars
