[ https://issues.apache.org/jira/browse/HDFS-17868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18048716#comment-18048716 ]
ASF GitHub Bot commented on HDFS-17868:
---------------------------------------
eubnara opened a new pull request, #8159:
URL: https://github.com/apache/hadoop/pull/8159
### Description of PR
I wrote the description on https://issues.apache.org/jira/browse/HDFS-17868.
### How was this patch tested?
Manually tested on a private HDFS cluster.
### For code changes:
- [x] Does the title of this PR start with the corresponding JIRA issue id
(e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the
endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies
licensed in a way that is compatible for inclusion under [ASF
2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`,
`NOTICE-binary` files?
> Introduce BlockPlacementPolicyCrossDC for a multi-datacenter stretched HDFS
> cluster
> ---------------------------------------------------------------------------------
>
> Key: HDFS-17868
> URL: https://issues.apache.org/jira/browse/HDFS-17868
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: block placement
> Reporter: YUBI LEE
> Priority: Major
>
> I got ideas from https://dan.naver.com/25/sessions/692 and
> https://www.youtube.com/watch?v=1h4k_Dbt0t8, and implemented the
> "BlockPlacementPolicyCrossDC" policy. Thanks to [~acedia28].
> It would be better if [~acedia28] shares an improved version of the block
> placement policy.
> It introduces the following configurations (default values in parentheses):
> {code}
> dfs.block.replicator.cross.dc.async.enabled (false)
> dfs.block.replicator.cross.dc.preferred.datacenter
> dfs.block.replicator.cross.dc.bandwidth.limit.mb (5120)
> dfs.block.replicator.cross.dc.bandwidth.refill.period.sec (1)
> dfs.block.replicator.cross.dc.sync.paths
> dfs.block.replicator.cross.dc.limited.sync.paths
> {code}
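> As an illustration only, these keys could be set on a Hadoop Configuration as
> sketched below. This is not code from the patch: the fully qualified policy
> class name, the datacenter name, and the path values are placeholders, and
> only the two defaults stated above are reused.
> {code}
> import org.apache.hadoop.conf.Configuration;
>
> public class CrossDcPolicyConfigExample {
>   public static Configuration buildExample() {
>     Configuration conf = new Configuration();
>
>     // Select the proposed policy. The package below is a guess; only the
>     // class name comes from this issue.
>     conf.set("dfs.block.replicator.classname",
>         "org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyCrossDC");
>
>     // Keys proposed in this issue, using the stated defaults where given.
>     conf.setBoolean("dfs.block.replicator.cross.dc.async.enabled", true);
>     conf.set("dfs.block.replicator.cross.dc.preferred.datacenter", "dc1"); // placeholder
>     conf.setLong("dfs.block.replicator.cross.dc.bandwidth.limit.mb", 5120);
>     conf.setLong("dfs.block.replicator.cross.dc.bandwidth.refill.period.sec", 1);
>     conf.set("dfs.block.replicator.cross.dc.sync.paths", "/critical");     // placeholder
>     conf.set("dfs.block.replicator.cross.dc.limited.sync.paths", "/bulk"); // placeholder
>     return conf;
>   }
> }
> {code}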
> Following the ideas from the session mentioned above, this policy introduces
> three ways to write an HDFS block:
> - sync write: the original HDFS behavior.
> - limited sync write: using bucket4j, writes stay synchronous below the
> bandwidth threshold and fall back to async writes above it (see the sketch
> after this list).
> - async write: return to the HDFS client only datanode candidates located in
> the same datacenter; under-replicated blocks are replicated later
> asynchronously.
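> For the "limited sync write" mode, a minimal sketch of how a bucket4j token
> bucket could gate cross-datacenter writes is shown below, reusing the two
> bandwidth defaults above. The class and method names are hypothetical and the
> one-token-per-megabyte accounting is an assumption; this is not the
> implementation in the patch.
> {code}
> import io.github.bucket4j.Bandwidth;
> import io.github.bucket4j.Bucket;
> import io.github.bucket4j.Refill;
>
> import java.time.Duration;
>
> public class CrossDcWriteLimiter {
>   // Defaults mirroring the proposed keys:
>   //   dfs.block.replicator.cross.dc.bandwidth.limit.mb (5120)
>   //   dfs.block.replicator.cross.dc.bandwidth.refill.period.sec (1)
>   private static final long LIMIT_MB = 5120;
>   private static final long REFILL_PERIOD_SEC = 1;
>
>   private final Bucket bucket;
>
>   public CrossDcWriteLimiter() {
>     // One token per megabyte; the bucket is refilled every refill period.
>     Bandwidth limit = Bandwidth.classic(LIMIT_MB,
>         Refill.intervally(LIMIT_MB, Duration.ofSeconds(REFILL_PERIOD_SEC)));
>     this.bucket = Bucket.builder().addLimit(limit).build();
>   }
>
>   // True: the write may go to the remote datacenter synchronously.
>   // False: the budget is exhausted, fall back to the async path.
>   public boolean allowSyncCrossDcWrite(long blockSizeMb) {
>     return bucket.tryConsume(blockSizeMb);
>   }
> }
> {code}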