Oh, you are right. It doesn't meet your needs. Sorry for the confusion.
It seems it may be difficult to achieve this with the existing policies.
- Takanobu
From: Lars Francke
Sent: Thursday, July 4, 2019 7:53:35 PM
To: 浅沼 孝信
Cc: hdfs-user@hadoop.apache.org
Hi Takanobu,
thanks for the quick reply. I missed that class.
But does it really do what I need?
If I have these racks:
/dc1/rack1
/dc1/rack2
/dc1/rack3
/dc2/rack1
/dc2/rack2
/dc2/rack3
And I place a single block in HDFS, couldn't this policy choose /dc1/rack1,
/dc1/rack2 and /dc1/rack3 at random?
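A quick sanity check of that concern (just a sketch of the combinatorics, not HDFS code): enumerating every way to pick 3 distinct racks from the 6-rack topology above shows that some selections do stay entirely inside one datacenter.

```python
import itertools

# The rack topology from the example above.
racks = ["/dc1/rack1", "/dc1/rack2", "/dc1/rack3",
         "/dc2/rack1", "/dc2/rack2", "/dc2/rack3"]

def datacenters(chosen):
    # Extract the datacenter component ("dc1" or "dc2") of each rack path.
    return {r.split("/")[1] for r in chosen}

# Enumerate all ways a rack-fault-tolerant policy could pick 3 distinct
# racks, and count the selections confined to a single datacenter.
one_dc = [c for c in itertools.combinations(racks, 3)
          if len(datacenters(c)) == 1]
print(len(one_dc))  # → 2 of the 20 combinations stay in one datacenter
```

So a policy that only guarantees rack separation can indeed, by chance, put all three replicas in the same datacenter.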
Hi Lars,
I think BlockPlacementPolicyRackFaultTolerant can do it.
This policy tries to place the 3 replicas separately in different racks.
dfs.block.replicator.classname
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant
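For example, in hdfs-site.xml (using the property name and class from the lines above):

```xml
<property>
  <name>dfs.block.replicator.classname</name>
  <value>org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant</value>
</property>
```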
See also:
Hi,
I have a customer who wants to make sure that copies of their data are
distributed across datacenters. So they are using rack names like this:
/dc1/rack1, /dc1/rack2, /dc2/rack1 etc.
Unfortunately, BlockPlacementPolicyDefault sometimes seems to place all
replicas of a block under /dc1/*.
Is there a way to guarantee that replicas are spread across both datacenters?