[ https://issues.apache.org/jira/browse/HDFS-17218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17775730#comment-17775730 ]
ASF GitHub Bot commented on HDFS-17218:
---------------------------------------

zhangshuyan0 commented on code in PR #6176:
URL: https://github.com/apache/hadoop/pull/6176#discussion_r1360562024


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java:
##########
@@ -1007,6 +1013,7 @@ public void updateRegInfo(DatanodeID nodeReg) {
     for(DatanodeStorageInfo storage : getStorageInfos()) {
       if (storage.getStorageType() != StorageType.PROVIDED) {
         storage.setBlockReportCount(0);
+        storage.setBlockContentsStale(true);

Review Comment:
   @ZanderXu Thanks for your reply. I think this modification is not a new bug. Before this patch, the NameNode knew all excess replicas even after a DataNode re-registered, so it would not delete more replicas than expected. For the situation you described, we can look at it from two angles:
   1. If the NameNode knows about the corrupt replicas on the corrupt disk, it will not delete the only healthy replica.
   2. If the NameNode knows nothing about the corrupt disk, the essence of the problem is that the admin manually removed some replicas without notifying the NameNode. Then, in the window between "the replica has been removed" and "the NameNode learns that the replica has been removed", there is always a chance that the only healthy replica will be deleted.
   So I think the key to solving this problem is to immediately notify the NameNode which disk is corrupt. Restarting and re-registering may not be necessary.
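To make the proposed fix concrete, here is a minimal, self-contained sketch of the idea under discussion: when a DataNode re-registers, the NameNode drops that node's entries from the excess-redundancy map, since any pending asynchronous deletions were lost on restart. The class and method names below are simplified stand-ins for illustration only, not the actual Hadoop classes (the real map lives in org.apache.hadoop.hdfs.server.blockmanagement).

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified stand-in for the NameNode's ExcessRedundancyMap: it maps each
// DataNode UUID to the set of block IDs considered excess on that node.
class ExcessRedundancyMapSketch {
  private final Map<String, Set<Long>> map = new HashMap<>();

  // Record a block as an excess replica on the given DataNode.
  void add(String dnUuid, long blockId) {
    map.computeIfAbsent(dnUuid, k -> new HashSet<>()).add(blockId);
  }

  // Number of excess blocks currently tracked for the given DataNode.
  int size(String dnUuid) {
    return map.getOrDefault(dnUuid, Collections.emptySet()).size();
  }

  // The proposed fix: on DataNode registration, drop all excess entries for
  // that node. Its pending DNA_INVALIDATE work was lost with the restart, so
  // the next full block report lets processExtraRedundancy re-evaluate excess
  // replicas against what the DataNode actually still holds.
  void removeOnRegister(String dnUuid) {
    map.remove(dnUuid);
  }
}
```

This keeps the map consistent with reality after a restart: stale excess entries are discarded rather than leaked, and excess handling is redone from the fresh block report.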
> NameNode should remove its excess blocks from the ExcessRedundancyMap when a
> DN registers
> -----------------------------------------------------------------------------------------
>
>                 Key: HDFS-17218
>                 URL: https://issues.apache.org/jira/browse/HDFS-17218
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Haiyang Hu
>            Assignee: Haiyang Hu
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: image-2023-10-12-15-52-52-336.png
>
>
> We currently found that a DN will lose all pending DNA_INVALIDATE blocks if
> it restarts.
> *Root cause*
> The DN deletes blocks asynchronously, so it can have many pending deletion
> blocks in memory.
> When the DN restarts, these cached blocks may be lost. This causes some
> blocks in the NameNode's excess map to be leaked, resulting in many blocks
> having more replicas than expected.
> *Solution*
> The NameNode should remove a DN's excess blocks from the
> ExcessRedundancyMap when that DN registers.
> This ensures that when the DN's full block report is processed,
> processExtraRedundancy can run according to the actual state of the blocks.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)