[ https://issues.apache.org/jira/browse/HDFS-17218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17775674#comment-17775674 ]

ASF GitHub Bot commented on HDFS-17218:
---------------------------------------

ZanderXu commented on code in PR #6176:
URL: https://github.com/apache/hadoop/pull/6176#discussion_r1360449635


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java:
##########
@@ -1007,6 +1013,7 @@ public void updateRegInfo(DatanodeID nodeReg) {
     for(DatanodeStorageInfo storage : getStorageInfos()) {
       if (storage.getStorageType() != StorageType.PROVIDED) {
         storage.setBlockReportCount(0);
+        storage.setBlockContentsStale(true);

Review Comment:
   Thanks @haiyang1987 for your report, and thanks @zhangshuyan0 for your review.
   
   I think this modification addresses a new bug that is not related to this case;
   we need to fix that bug in a new issue.
   
   We can reproduce this bug with the following steps:
   
   - Assume block1 has two replicas, on dn1 and dn2.
   - DN1 is shut down for maintenance because of a corrupt disk.
   - The admin removes the corrupted disk and restarts the datanode.
   - DN1 registers itself with the NameNode through the registerDatanode RPC.
   - The end user decreases the replication of block1 from 2 to 1 through the setReplication RPC.
   - Block1 still has two replicas recorded in the NameNode, but the replica on dn1 no longer exists because it was stored on the corrupt disk.
   - The NameNode selects the replica on dn2 as the excess redundancy for this block and schedules it for deletion.
   - DN1 reports all of its stored blocks to the NameNode through the block report RPC.
   - The NameNode removes the dn1 replica of block1 because the block report from DN1 does not contain block1.
   
   After these two operations (setReplication from the end user and the restart
   by the admin), block1 may lose all of its replicas.
   So I think we should mark all storages as stale while the NameNode processes
   the registerDatanode RPC, so that this case can be fixed.
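   
   For illustration only, here is a minimal sketch of that idea, based on the
   loop this diff already touches (a sketch, not a final patch):
   
   ```java
   // Sketch: inside DatanodeDescriptor#updateRegInfo(DatanodeID nodeReg),
   // after the registration info has been refreshed.
   for (DatanodeStorageInfo storage : getStorageInfos()) {
     if (storage.getStorageType() != StorageType.PROVIDED) {
       // Treat the next full block report from this storage as the first one.
       storage.setBlockReportCount(0);
       // Mark the contents stale so that work which depends on replicas on
       // this storage (e.g. extra-redundancy processing triggered by
       // setReplication) is postponed until a fresh block report arrives.
       storage.setBlockContentsStale(true);
     }
   }
   ```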
   
   @zhangshuyan0 @haiyang1987 I'm looking forward to your ideas, thanks. 
   
   





> NameNode should remove its excess blocks from the ExcessRedundancyMap When a 
> DN registers
> -----------------------------------------------------------------------------------------
>
>                 Key: HDFS-17218
>                 URL: https://issues.apache.org/jira/browse/HDFS-17218
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Haiyang Hu
>            Assignee: Haiyang Hu
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: image-2023-10-12-15-52-52-336.png
>
>
> We currently found that a DN will lose all pending DNA_INVALIDATE blocks if it 
> restarts.
> *Root cause*
> The DN deletes blocks asynchronously, so it may have many pending deletion 
> blocks in memory.
> When the DN restarts, these cached blocks may be lost. This causes some blocks 
> in the excess map in the NameNode to be leaked, which results in many blocks 
> having more replicas than expected.
> *Solution*
> The NameNode should remove the DN's excess blocks from the 
> ExcessRedundancyMap when that DN registers.
> This approach ensures that, when processing the DN's full block report, 
> 'processExtraRedundancy' can be performed according to the actual state of the 
> blocks.
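
A rough sketch of that idea follows; it is illustrative only, and the hook and
helper names (onDatanodeRegistered, removeAllForDatanode) are assumptions, not
the actual Hadoop API:

```java
// Sketch: when a DN (re)registers, drop everything the NameNode still tracks
// for that node in the excess redundancy map. The DN lost any pending
// DNA_INVALIDATE work when it restarted, so those entries can no longer be
// trusted; the next full block report then lets processExtraRedundancy run
// against the replicas that actually exist.
void onDatanodeRegistered(DatanodeDescriptor node) {   // hypothetical hook
  // removeAllForDatanode(...) is a hypothetical helper that clears the
  // per-node set kept inside ExcessRedundancyMap.
  excessRedundancyMap.removeAllForDatanode(node.getDatanodeUuid());
}
```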


