[
https://issues.apache.org/jira/browse/HDFS-17150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Xiaoqiao He resolved HDFS-17150.
--------------------------------
Fix Version/s: 3.4.0
Hadoop Flags: Reviewed
Resolution: Fixed
> EC: Fix the bug of failed lease recovery.
> -----------------------------------------
>
> Key: HDFS-17150
> URL: https://issues.apache.org/jira/browse/HDFS-17150
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Shuyan Zhang
> Assignee: Shuyan Zhang
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0
>
>
> If the client crashes without writing the minimum number of internal blocks
> required by the EC policy, the lease recovery process for the corresponding
> unclosed file can fail repeatedly. Taking the RS(6,3) policy as an example, the
> timeline is as follows:
> 1. The client writes some data to only 5 datanodes;
> 2. Client crashes;
> 3. NN fails over;
> 4. Now the result of `uc.getNumExpectedLocations()` depends entirely on
> block reports, and only 5 datanodes report internal blocks;
> 5. When the lease hard limit expires, the NN issues a block recovery command;
> 6. The datanode checks the command and finds that the number of internal
> blocks is insufficient, so it raises an error and recovery fails (see the
> sketch after this timeline);
> 7. The lease hard limit expires again, the NN issues another block recovery
> command, and the recovery fails again, and so on indefinitely.
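>
> As a hedged, self-contained illustration (simplified names, not the actual
> BlockRecoveryWorker code), the datanode-side failure in steps 6-7 boils down
> to a guard like the one below: with RS(6,3), fewer than 6 reported internal
> blocks means the block group cannot be reconstructed, so the recovery task
> aborts every time it is issued.
> {code:java}
> import java.io.IOException;
>
> public class StripedRecoveryCheck {
>   static final int NUM_DATA_UNITS = 6; // RS(6,3): 6 data units required
>
>   static void recoverBlockGroup(int reportedInternalBlocks) throws IOException {
>     if (reportedInternalBlocks < NUM_DATA_UNITS) {
>       // Step 6: the datanode aborts; the NN reissues the command at the
>       // next hard-limit expiration, and the same failure repeats.
>       throw new IOException("only " + reportedInternalBlocks
>           + " internal blocks reported, need " + NUM_DATA_UNITS);
>     }
>     // ... safe-length computation and commit would follow ...
>   }
>
>   public static void main(String[] args) {
>     try {
>       recoverBlockGroup(5); // the scenario described in this issue
>     } catch (IOException e) {
>       System.out.println("recovery failed: " + e.getMessage());
>     }
>   }
> }
> {code}
>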
> When the client has written fewer than 6 internal blocks, the block group is
> actually unrecoverable. We should handle this situation the same way as a
> replicated file whose last block has zero replicas: directly remove the last
> block group and close the file. A sketch of the proposed handling follows.
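>
> A minimal sketch of that idea, assuming hypothetical helpers
> removeLastBlockGroupAndCloseFile() and issueBlockRecoveryCommand() (the real
> change lives in the NN lease release path, e.g. internalReleaseLease(), and
> is more involved):
> {code:java}
> public class LeaseRecoveryDecision {
>   static final int NUM_DATA_UNITS = 6; // RS(6,3)
>
>   /** Returns true if the lease can be released immediately. */
>   static boolean releaseLease(boolean striped, int numExpectedLocations) {
>     if (striped && numExpectedLocations < NUM_DATA_UNITS) {
>       // Unrecoverable block group: treat it like a replicated last
>       // block with zero replicas.
>       removeLastBlockGroupAndCloseFile(); // hypothetical helper
>       return true;
>     }
>     issueBlockRecoveryCommand(); // hypothetical helper: the normal path
>     return false;
>   }
>
>   static void removeLastBlockGroupAndCloseFile() {
>     System.out.println("last block group removed, file closed");
>   }
>
>   static void issueBlockRecoveryCommand() {
>     System.out.println("block recovery command issued");
>   }
>
>   public static void main(String[] args) {
>     releaseLease(true, 5); // the case described above: file is closed
>   }
> }
> {code}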
>