[ https://issues.apache.org/jira/browse/HDFS-17093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749010#comment-17749010 ]

ASF GitHub Bot commented on HDFS-17093:
---------------------------------------

zhangshuyan0 commented on code in PR #5855:
URL: https://github.com/apache/hadoop/pull/5855#discussion_r1278756125


##########
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockReportLease.java:
##########
@@ -269,4 +272,84 @@ private StorageBlockReport[] createReports(DatanodeStorage[] dnStorages,
     }
     return storageBlockReports;
   }
+
+  @Test

Review Comment:
   Need to add a timeout here.
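   For example, assuming this test class still uses JUnit 4 (as most HDFS tests do), the timeout can go directly on the annotation; the method name and timeout value below are only illustrative:
   ```
   import org.junit.Test;

   public class TestBlockReportLeaseTimeoutExample {
     // Illustrative sketch: pick a timeout consistent with the other tests in this class.
     @Test(timeout = 360000)
     public void testFullBlockReportRetry() throws Exception {
       // ... test body ...
     }
   }
   ```
   With JUnit 5 the equivalent would be @Timeout(value = 360, unit = TimeUnit.SECONDS) on the test method.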



##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:
##########
@@ -2904,7 +2908,8 @@ public boolean processReport(final DatanodeID nodeID,
       }
       if (namesystem.isInStartupSafeMode()
           && !StorageType.PROVIDED.equals(storageInfo.getStorageType())
-          && storageInfo.getBlockReportCount() > 0) {
+          && storageInfo.getBlockReportCount() > 0
+          && totalReportNum == currentReportNum) {

Review Comment:
   If a datanode reports twice during namenode safe mode, the second report will be almost completely processed, which may extend startup time. How about modifying the code like this? It also avoids changing the method signature.
   ```
   if (namesystem.isInStartupSafeMode()
       && !StorageType.PROVIDED.equals(storageInfo.getStorageType())
       && storageInfo.getBlockReportCount() > 0) {
     blockLog.info("BLOCK* processReport 0x{} with lease ID 0x{}: "
         + "discarded non-initial block report from datanode {} storage {} "
         + " because namenode still in startup phase",
         strBlockReportId, fullBrLeaseId, nodeID, storageInfo.getStorageID());
     boolean needRemoveLease = true;
     for (DatanodeStorageInfo sInfo : node.getStorageInfos()) {
       if (sInfo.getBlockReportCount() == 0) {
         needRemoveLease = false;
       }
     }
     if (needRemoveLease) {
       blockReportLeaseManager.removeLease(node);
     }
     return !node.hasStaleStorages();
   }
   ```





> In the case of all datanodes sending FBR when the namenode restarts (large 
> clusters), there is an issue with incomplete block reporting
> ---------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-17093
>                 URL: https://issues.apache.org/jira/browse/HDFS-17093
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 3.3.4
>            Reporter: Yanlei Yu
>            Priority: Minor
>              Labels: pull-request-available
>
> In our cluster of 800+ nodes, after restarting the namenode we found that 
> some datanodes did not report enough blocks, causing the namenode to stay in 
> safe mode for a long time after the restart because of incomplete block 
> reporting.
> In the logs of a datanode with incomplete block reporting, I found that the 
> first FBR attempt failed, possibly due to namenode stress, and a second FBR 
> attempt was then made, as follows:
> {code:java}
> ....
> 2023-07-17 11:29:28,982 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Unsuccessfully sent block report 0x6237a52c1e817e,  containing 12 storage 
> report(s), of which we sent 1. The reports had 1099057 total blocks and used 
> 1 RPC(s). This took 294 msec to generate and 101721 msecs for RPC and NN 
> processing. Got back no commands.
> 2023-07-17 11:37:04,014 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Successfully sent block report 0x62382416f3f055,  containing 12 storage 
> report(s), of which we sent 12. The reports had 1099048 total blocks and used 
> 12 RPC(s). This took 295 msec to generate and 11647 msecs for RPC and NN 
> processing. Got back no commands. {code}
> There is nothing wrong with that: the datanode retries the send if it fails. 
> But in the namenode-side logic:
> {code:java}
> if (namesystem.isInStartupSafeMode()
>     && !StorageType.PROVIDED.equals(storageInfo.getStorageType())
>     && storageInfo.getBlockReportCount() > 0) {
>   blockLog.info("BLOCK* processReport 0x{} with lease ID 0x{}: "
>       + "discarded non-initial block report from {}"
>       + " because namenode still in startup phase",
>       strBlockReportId, fullBrLeaseId, nodeID);
>   blockReportLeaseManager.removeLease(node);
>   return !node.hasStaleStorages();
> } {code}
> When a storage is identified as having already reported, i.e. 
> storageInfo.getBlockReportCount() > 0, the lease is removed from the 
> datanode, which causes the second report to fail because it no longer has a 
> lease.
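
To make the reported sequence concrete, here is a minimal standalone sketch of the mechanism described above (the class and method names are illustrative, not the actual Hadoop classes): the storage that got through in the first attempt makes the retried report look "non-initial", the lease for the whole datanode is dropped, and the remaining storages are rejected.
{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative simulation only; not the Hadoop implementation.
public class BlockReportLeaseSimulation {
  // Per-storage count of full block reports processed by the "namenode".
  static Map<String, Integer> blockReportCount = new HashMap<>();
  // Datanodes that currently hold a block report lease.
  static Set<String> leases = new HashSet<>();
  static boolean inStartupSafeMode = true;

  // Simplified stand-in for the namenode-side check quoted above.
  static void processStorageReport(String datanode, String storage) {
    if (!leases.contains(datanode)) {
      System.out.println("rejected " + storage + ": no lease for " + datanode);
      return;
    }
    if (inStartupSafeMode && blockReportCount.getOrDefault(storage, 0) > 0) {
      System.out.println("discarded non-initial report from " + storage
          + " and removed lease of " + datanode);
      leases.remove(datanode);   // lease removed for the whole datanode
      return;
    }
    blockReportCount.merge(storage, 1, Integer::sum);
    System.out.println("processed report from " + storage);
  }

  public static void main(String[] args) {
    leases.add("dn1");
    // First FBR attempt: only storage s1 of 3 gets through before the RPC fails.
    processStorageReport("dn1", "s1");

    // Retried FBR: s1 now looks "non-initial", so the lease is removed and the
    // remaining storages are rejected, leaving s2 and s3 unreported.
    processStorageReport("dn1", "s1");
    processStorageReport("dn1", "s2");
    processStorageReport("dn1", "s3");
  }
}
{code}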


