qinyuren created HDFS-16420:
-------------------------------

Summary: ec + balancer may cause missing block
Key: HDFS-16420
URL: https://issues.apache.org/jira/browse/HDFS-16420
Project: Hadoop HDFS
Issue Type: Bug
Reporter: qinyuren
Attachments: image-2022-01-10-17-31-35-910.png, image-2022-01-10-17-32-56-981.png
We hit a problem similar to the one described in HDFS-16297. Our cluster uses EC (6+3) together with the balancer, and a missing block occurred.

fsck reported the block group (blk_-9223372036824119008) with only 5 live internal blocks and multiple redundant replicas:

blk_-9223372036824119008_220037616 len=133370338 MISSING! Live_repl=5
blk_-9223372036824119007:DatanodeInfoWithStorage,
blk_-9223372036824119002:DatanodeInfoWithStorage,
blk_-9223372036824119001:DatanodeInfoWithStorage,
blk_-9223372036824119000:DatanodeInfoWithStorage,
blk_-9223372036824119004:DatanodeInfoWithStorage,
blk_-9223372036824119004:DatanodeInfoWithStorage,
blk_-9223372036824119004:DatanodeInfoWithStorage,
blk_-9223372036824119004:DatanodeInfoWithStorage,
blk_-9223372036824119004:DatanodeInfoWithStorage,
blk_-9223372036824119004:DatanodeInfoWithStorage

We searched the logs of all datanodes and found that the internal blocks of blk_-9223372036824119008 were deleted at almost the same time:

08:15:58,521 INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:deleteAsync(225)) - Scheduling blk_-9223372036824119008_220037616 replica FinalizedReplica, blk_-9223372036824119008_220037616, FINALIZED
08:15:58,550 INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:run(333)) - Deleted BP-1606066499-xxxx-1606188026755 blk_-9223372036824119008_220037616 URI file:/data15/hadoop/hdfs/data/current/BP-1606066499-xxxx-1606188026755/current/finalized/subdir19/subdir9/blk_-9223372036824119008
08:16:21,214 INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:run(333)) - Deleted BP-1606066499-xxxx-1606188026755 blk_-9223372036824119006_220037616 URI file:/data4/hadoop/hdfs/data/current/BP-1606066499-xxxx-1606188026755/current/finalized/subdir19/subdir9/blk_-9223372036824119006
08:16:55,737 INFO impl.FsDatasetAsyncDiskService (FsDatasetAsyncDiskService.java:run(333)) - Deleted BP-1606066499-xxxx-1606188026755 blk_-9223372036824119005_220037616 URI file:/data2/hadoop/hdfs/data/current/BP-1606066499-xxxx-1606188026755/current/finalized/subdir19/subdir9/blk_-9223372036824119005

The numbers of internal-block replicas deleted between 08:15 and 08:17 are as follows:

internal block               delete num
blk_-9223372036824119008     1
blk_-9223372036824119006     1
blk_-9223372036824119005     1
blk_-9223372036824119004     50
blk_-9223372036824119003     1
blk_-9223372036824119000     1

During this window (08:15 to 08:17) we restarted 2 datanodes, which immediately triggered full block reports.

This raises two questions:
1. Why were there so many replicas of this block?
2. Why was an internal block with only one copy deleted?

The likely causes of the first problem are as follows:
1. We set the full block report interval of some datanodes to 168 hours.
2. We performed a namenode HA failover.
3. After the failover, the datanode storages were marked stale, and they stay stale until the next full block report.
4. The balancer copied a replica, but the replica on the source node was not deleted, because the source node had a stale storage and the deletion request was put into postponedMisreplicatedBlocks.
5. The balancer kept copying the replica, eventually resulting in multiple copies of the same replica.

!image-2022-01-10-17-31-35-910.png!

The set rescannedMisreplicatedBlocks contained many blocks to remove.

!image-2022-01-10-17-32-56-981.png!

--
This message was sent by Atlassian Jira
(v8.20.1#820001)
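The replica-accumulation mechanism in steps 3-5 above can be sketched as a toy simulation. This is a hypothetical, heavily simplified model, not the real HDFS code: the class, method, and variable names (NameNodeModel, balancer_move, postponed) only mirror the HDFS concepts involved (stale DatanodeStorageInfo, excess-replica deletion, postponedMisreplicatedBlocks).

```python
# Toy model of the failure mode: while a source storage is stale, the
# namenode postpones deleting the excess replica after each balancer
# move, so repeated moves keep adding copies of the same replica.
STALE = "STALE"
NORMAL = "NORMAL"


class NameNodeModel:
    def __init__(self):
        self.replicas = {}      # block -> list of (node, storage_state)
        self.postponed = set()  # blocks whose excess-replica deletion is deferred

    def balancer_move(self, block, src, src_state, dst):
        # The balancer copies the replica to dst, then the namenode is
        # asked to remove the now-excess copy on src.
        self.replicas.setdefault(block, []).append((dst, NORMAL))
        if src_state == STALE:
            # Source storage is stale: the namenode cannot trust its view
            # of src, so the deletion is postponed instead of executed.
            self.postponed.add(block)
        else:
            self.replicas[block] = [r for r in self.replicas[block]
                                    if r[0] != src]


model = NameNodeModel()
block = "blk_-9223372036824119004"
model.replicas[block] = [("dn1", STALE)]

# The balancer repeatedly picks this block; each move adds a copy, but
# the stale source replica is never removed.
for i in range(5):
    model.balancer_move(block, "dn1", STALE, f"dn{i + 2}")

print(len(model.replicas[block]))   # replica count keeps growing: 6
print(block in model.postponed)     # True: deletion stays deferred
```

Under this model, once the stale state persists (full block report interval set to 168 hours), every additional balancer move of the same block adds one more redundant replica, which matches the 50 deletions of blk_-9223372036824119004 observed once the postponed blocks were finally rescanned.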