[
https://issues.apache.org/jira/browse/HDFS-16964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiaoqiao He resolved HDFS-16964.
--------------------------------
Fix Version/s: 3.4.0
Hadoop Flags: Reviewed
Resolution: Fixed
> Improve processing of excess redundancy after failover
> ------------------------------------------------------
>
> Key: HDFS-16964
> URL: https://issues.apache.org/jira/browse/HDFS-16964
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Shuyan Zhang
> Assignee: Shuyan Zhang
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
>
> After failover, a block with excess redundancy cannot be processed until none
> of its replicas are stale, because the stale ones may already have been
> deleted. In other words, we have to wait for the full block reports (FBRs) of
> all datanodes hosting the block before any redundant replica can be removed.
> This wait is unnecessary: we can skip stale replicas when choosing excess
> replicas, and delete the non-stale excess replicas in a more timely manner.
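> The sketch below is illustrative only, not the actual patch; the Replica
> class and the chooseExcess* methods are hypothetical stand-ins for the
> NameNode's block management logic. It shows the idea: instead of deferring
> the whole block while any replica is stale, consider only the non-stale
> replicas and delete excess copies as soon as the non-stale copies alone
> exceed the replication factor.
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
>
> // Hypothetical, simplified model of a replica as seen by the NameNode.
> // "stale" means the hosting datanode has not yet sent its first full
> // block report (FBR) to the newly active NameNode after failover.
> class Replica {
>     final String datanode;
>     final boolean stale;
>     Replica(String datanode, boolean stale) {
>         this.datanode = datanode;
>         this.stale = stale;
>     }
> }
>
> public class ExcessRedundancySketch {
>
>     // Old behaviour (simplified): if any replica is stale, defer
>     // processing of the whole block until every FBR has arrived.
>     static List<Replica> chooseExcessOld(List<Replica> replicas, int replication) {
>         for (Replica r : replicas) {
>             if (r.stale) {
>                 return List.of(); // wait for all FBRs; delete nothing yet
>             }
>         }
>         return pickExcess(replicas, replication);
>     }
>
>     // Improved behaviour (simplified): ignore stale replicas and delete
>     // excess copies from the non-stale ones right away, as long as the
>     // non-stale copies alone already exceed the replication factor.
>     static List<Replica> chooseExcessNew(List<Replica> replicas, int replication) {
>         List<Replica> nonStale = new ArrayList<>();
>         for (Replica r : replicas) {
>             if (!r.stale) {
>                 nonStale.add(r);
>             }
>         }
>         return pickExcess(nonStale, replication);
>     }
>
>     // Treat every candidate beyond the replication factor as excess.
>     // A real placement policy would also weigh racks and free space.
>     static List<Replica> pickExcess(List<Replica> candidates, int replication) {
>         if (candidates.size() <= replication) {
>             return List.of();
>         }
>         return candidates.subList(replication, candidates.size());
>     }
>
>     public static void main(String[] args) {
>         List<Replica> replicas = List.of(
>             new Replica("dn1", false),
>             new Replica("dn2", false),
>             new Replica("dn3", false),
>             new Replica("dn4", false),
>             new Replica("dn5", true));   // dn5 has not reported yet
>         int replication = 3;
>         // Old: nothing is deleted until dn5's FBR arrives.
>         System.out.println("old excess: " + chooseExcessOld(replicas, replication).size());
>         // New: one non-stale excess copy can be removed immediately; dn5 is skipped.
>         System.out.println("new excess: " + chooseExcessNew(replicas, replication).size());
>     }
> }
> {code}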