[ https://issues.apache.org/jira/browse/HDFS-17488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17844840#comment-17844840 ]
ASF GitHub Bot commented on HDFS-17488:
---------------------------------------

hadoop-yetus commented on PR #6759:
URL: https://github.com/apache/hadoop/pull/6759#issuecomment-2101828246

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 02s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 02s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 02s | | detect-secrets was not available. |
| +0 :ok: | markdownlint | 0m 02s | | markdownlint was not available. |
| +0 :ok: | spotbugs | 0m 00s | | spotbugs executables are not available. |
| +1 :green_heart: | @author | 0m 01s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 00s | | The patch appears to include 2 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 2m 14s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 86m 40s | | trunk passed |
| +1 :green_heart: | compile | 38m 18s | | trunk passed |
| +1 :green_heart: | checkstyle | 5m 51s | | trunk passed |
| -1 :x: | mvnsite | 4m 27s | [/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6759/6/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt) | hadoop-common in trunk failed. |
| +1 :green_heart: | javadoc | 10m 13s | | trunk passed |
| +1 :green_heart: | shadedclient | 157m 54s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 2m 16s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 9m 29s | | the patch passed |
| +1 :green_heart: | compile | 35m 37s | | the patch passed |
| +1 :green_heart: | javac | 35m 37s | | the patch passed |
| +1 :green_heart: | blanks | 0m 01s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 5m 50s | | the patch passed |
| -1 :x: | mvnsite | 4m 30s | [/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6759/6/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. |
| +1 :green_heart: | javadoc | 10m 18s | | the patch passed |
| +1 :green_heart: | shadedclient | 167m 52s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 5m 28s | | The patch does not generate ASF License warnings. |
| | | 512m 54s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| GITHUB PR | https://github.com/apache/hadoop/pull/6759 |
| Optional Tests | dupname asflicense mvnsite codespell detsecrets markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs checkstyle |
| uname | MINGW64_NT-10.0-17763 f9e61a0ebff0 3.4.10-87d57229.x86_64 2024-02-14 20:17 UTC x86_64 Msys |
| Build tool | maven |
| Personality | /c/hadoop/dev-support/bin/hadoop.sh |
| git revision | trunk / b42a03e1d18816a1bf2f65e82508800c8f885785 |
| Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6759/6/testReport/ |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6759/6/console |
| versions | git=2.44.0.windows.1 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

> DN can fail IBRs with NPE when a volume is removed
> --------------------------------------------------
>
>                 Key: HDFS-17488
>                 URL: https://issues.apache.org/jira/browse/HDFS-17488
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs
>            Reporter: Felix N
>            Assignee: Felix N
>            Priority: Major
>              Labels: pull-request-available
>
> Error logs:
> {code:java}
> 2024-04-22 15:46:33,422 [BP-1842952724-10.22.68.249-1713771988830 heartbeating to localhost/127.0.0.1:64977] ERROR datanode.DataNode (BPServiceActor.java:run(922)) - Exception in BPOfferService for Block pool BP-1842952724-10.22.68.249-1713771988830 (Datanode Uuid 1659ffaf-1a80-4a8e-a542-643f6bd97ed4) service to localhost/127.0.0.1:64977
> java.lang.NullPointerException
>     at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReceivedAndDeleted(DatanodeProtocolClientSideTranslatorPB.java:246)
>     at org.apache.hadoop.hdfs.server.datanode.IncrementalBlockReportManager.sendIBRs(IncrementalBlockReportManager.java:218)
>     at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:749)
>     at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:920)
>     at java.lang.Thread.run(Thread.java:748)
> {code}
> The root cause is in BPOfferService#notifyNamenodeBlock: the NPE occurs when it is called for a block belonging to a volume that was removed earlier. Because the volume is already gone, the storage lookup returns null:
> {code:java}
> private void notifyNamenodeBlock(ExtendedBlock block, BlockStatus status,
>     String delHint, String storageUuid, boolean isOnTransientStorage) {
>   checkBlock(block);
>   final ReceivedDeletedBlockInfo info = new ReceivedDeletedBlockInfo(
>       block.getLocalBlock(), status, delHint);
>   final DatanodeStorage storage = dn.getFSDataset().getStorage(storageUuid);
>
>   // storage == null here because the volume was already removed earlier.
>   for (BPServiceActor actor : bpServices) {
>     actor.getIbrManager().notifyNamenodeBlock(info, storage,
>         isOnTransientStorage);
>   }
> } {code}
> so IBRs carrying a null storage are now pending and fail once sendIBRs tries to transmit them.
> The reason notifyNamenodeBlock can be triggered for such blocks lies upstream in DirectoryScanner#reconcile:
> {code:java}
> public void reconcile() throws IOException {
>   LOG.debug("reconcile start DirectoryScanning");
>   scan();
>   // If a volume is removed here after scan() already finished running,
>   // diffs is stale and checkAndUpdate will run on a removed volume.
>   // HDFS-14476: run checkAndUpdate with batch to avoid holding the lock
>   // too long
>   int loopCount = 0;
>   synchronized (diffs) {
>     for (final Map.Entry<String, ScanInfo> entry : diffs.getEntries()) {
>       dataset.checkAndUpdate(entry.getKey(), entry.getValue());
>       ...
> } {code}
> Inside checkAndUpdate, memBlockInfo is null because all of the block metadata in memory was removed along with the volume, but diskFile still exists. DataNode#notifyNamenodeDeletedBlock (and further down the line, notifyNamenodeBlock) is then called for this block.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
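For illustration only, here is a minimal sketch of one possible guard; it is an assumption, not the actual change in PR #6759. It reuses only names from the snippet quoted in the description (dn, bpServices, checkBlock, and the BPOfferService LOG) and simply skips the notification when the storage lookup returns null, so a stale DirectoryScanner diff can no longer enqueue an IBR that later NPEs in DatanodeProtocolClientSideTranslatorPB#blockReceivedAndDeleted:

{code:java}
// Sketch only: assumes the surrounding BPOfferService context shown above.
private void notifyNamenodeBlock(ExtendedBlock block, BlockStatus status,
    String delHint, String storageUuid, boolean isOnTransientStorage) {
  checkBlock(block);
  final DatanodeStorage storage = dn.getFSDataset().getStorage(storageUuid);
  if (storage == null) {
    // The volume backing this storage was removed after the scan that
    // produced this notification; queueing a null storage would only
    // fail later in IncrementalBlockReportManager#sendIBRs.
    LOG.warn("Skipping IBR for block {}: storage {} was removed",
        block, storageUuid);
    return;
  }
  final ReceivedDeletedBlockInfo info = new ReceivedDeletedBlockInfo(
      block.getLocalBlock(), status, delHint);
  for (BPServiceActor actor : bpServices) {
    actor.getIbrManager().notifyNamenodeBlock(info, storage,
        isOnTransientStorage);
  }
}
{code}

Dropping (rather than throwing on) the notification matches the situation described above: the block's volume is gone, so there is nothing meaningful left to report for it, and the BPServiceActor thread keeps running.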
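Alternatively, the stale entries could be filtered where they are consumed. Again, this is only a sketch under assumed placement, not the actual patch: memBlockInfo and scanInfo are the names used in the description above, and ScanInfo#getVolume, FsVolumeSpi#getStorageID, and getStorage(String) are the existing lookups this relies on. Before treating a missing in-memory replica as a deleted block, checkAndUpdate could verify that the volume that produced the entry is still registered with the dataset:

{code:java}
// Hypothetical guard at the start of the memBlockInfo == null branch in
// FsDatasetImpl#checkAndUpdate. If the volume behind this scan entry was
// removed between scan() and reconcile(), the diff entry is stale and the
// NameNode must not be told the block was deleted.
if (memBlockInfo == null) {
  final FsVolumeSpi volume = scanInfo.getVolume();
  if (volume == null || getStorage(volume.getStorageID()) == null) {
    LOG.debug("Ignoring stale scan entry {}: its volume was removed",
        scanInfo);
    return;
  }
  // ... existing handling for a replica that is missing in memory ...
}
{code}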