I created this bug report in an attempt to fix the FreeBSD port:
https://issues.apache.org/jira/browse/HADOOP-16388 but there was no answer.
Does anybody know if Hadoop is a maintained project, and if so, how to
get hold of somebody who can help with this bug?
Thank you,
Yuri
Oh, you are right. It doesn't meet your needs. Sorry for the confusion.
It seems it may be difficult to achieve this with the existing policies.
- Takanobu
From: Lars Francke
Sent: Thursday, July 4, 2019 7:53:35 PM
To: 浅沼 孝信
Cc: hdfs-user@hadoop.apache.org
Hi Takanobu,
thanks for the quick reply. I missed that class.
But does it really do what I need?
If I have these racks:
/dc1/rack1
/dc1/rack2
/dc1/rack3
/dc2/rack1
/dc2/rack2
/dc2/rack3
And I place a single block in HDFS, couldn't this policy choose /dc1/rack1,
/dc1/rack2, /dc1/rack3 at random?
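To illustrate the worry, here is a rough Python sketch of picking three distinct racks at random (not the actual HDFS code, just counting the possible outcomes):

```python
from itertools import combinations

# The six racks from the example above; the first path component is the DC.
racks = ["/dc1/rack1", "/dc1/rack2", "/dc1/rack3",
         "/dc2/rack1", "/dc2/rack2", "/dc2/rack3"]

def datacenters(placement):
    """Set of datacenters touched by a placement of replicas."""
    return {r.split("/")[1] for r in placement}

# Every way a rack-fault-tolerant policy could pick 3 distinct racks.
placements = list(combinations(racks, 3))
single_dc = [p for p in placements if len(datacenters(p)) == 1]
print(f"{len(single_dc)} of {len(placements)} placements stay in one DC")
# prints "2 of 20 placements stay in one DC"
```

So rack-distinct placement alone does not rule out all three replicas landing in the same datacenter.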
Hi Lars,
I think BlockPlacementPolicyRackFaultTolerant can do it.
This policy tries to place 3 replica separately in different racks.
dfs.block.replicator.classname
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant
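For example, in hdfs-site.xml (a sketch; adjust for your deployment):

```xml
<property>
  <name>dfs.block.replicator.classname</name>
  <value>org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant</value>
</property>
```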
See also:
Hi,
I have a customer who wants to make sure that copies of his data are
distributed amongst datacenters. So they are using rack names like this
/dc1/rack1, /dc1/rack2, /dc2/rack1 etc.
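For reference, such paths typically come from a topology script configured via net.topology.script.file.name. A minimal sketch (the subnets and their mapping here are made up for illustration):

```shell
#!/bin/sh
# Hypothetical mapping from node IP to a /datacenter/rack path.
# Hadoop invokes the script with one or more hosts as arguments and
# reads one rack path per input host from stdout.
map_rack() {
  case "$1" in
    10.1.*) echo "/dc1/rack1" ;;
    10.2.*) echo "/dc1/rack2" ;;
    10.3.*) echo "/dc2/rack1" ;;
    *)      echo "/default-rack" ;;
  esac
}

for host in "$@"; do
  map_rack "$host"
done
```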
Unfortunately, the BlockPlacementPolicyDefault seems to place all blocks on
/dc1/* sometimes.
Is there a way to guarantee that replicas are spread across both datacenters?
From: "kevin su"
Sent: Thursday, July 4, 2019 7:11 PM
To: "user@hadoop.apache.org"
Subject: Does Yarn shared cache use memory?
Hi all,
I want to make sure whether the YARN shared cache and the distributed cache
use memory or