[jira] [Commented] (HDFS-17424) [FGL] DelegationTokenSecretManager supports fine-grained lock
[ https://issues.apache.org/jira/browse/HDFS-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17837505#comment-17837505 ]

ASF GitHub Bot commented on HDFS-17424:
---------------------------------------

ferhui merged PR #6696:
URL: https://github.com/apache/hadoop/pull/6696

> [FGL] DelegationTokenSecretManager supports fine-grained lock
> -------------------------------------------------------------
>
>                 Key: HDFS-17424
>                 URL: https://issues.apache.org/jira/browse/HDFS-17424
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: ZanderXu
>            Assignee: Yuanbo Liu
>            Priority: Major
>              Labels: pull-request-available
>
> DelegationTokenSecretManager supports fine-grained lock

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-17424) [FGL] DelegationTokenSecretManager supports fine-grained lock
[ https://issues.apache.org/jira/browse/HDFS-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hui Fei resolved HDFS-17424.
----------------------------
    Resolution: Fixed
[jira] [Updated] (HDFS-17367) Add PercentUsed for Different StorageTypes in JMX
[ https://issues.apache.org/jira/browse/HDFS-17367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HDFS-17367:
----------------------------------
    Labels: pull-request-available  (was: )

> Add PercentUsed for Different StorageTypes in JMX
> -------------------------------------------------
>
>                 Key: HDFS-17367
>                 URL: https://issues.apache.org/jira/browse/HDFS-17367
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: metrics, namenode
>    Affects Versions: 3.5.0
>            Reporter: Hualong Zhang
>            Assignee: Hualong Zhang
>            Priority: Major
>              Labels: pull-request-available
>
> Currently, the NameNode only displays PercentUsed for the entire cluster. We
> plan to add corresponding PercentUsed metrics for different StorageTypes.
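For readers skimming the digest, the metric described above is simple arithmetic: the cluster-wide PercentUsed is used bytes over capacity bytes, and the proposal is to publish the same ratio per StorageType. A minimal, hedged sketch of that computation (the type names and sample figures below are hypothetical stand-ins, not the actual HDFS-17367 patch):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StoragePercentUsedDemo {
    // PercentUsed as a ratio of used bytes to capacity bytes, in percent.
    // Guard against zero/negative capacity so an empty type reports 0.
    static float percentUsed(long used, long capacity) {
        return capacity <= 0 ? 0.0f : used * 100.0f / capacity;
    }

    public static void main(String[] args) {
        // Hypothetical per-StorageType figures in bytes: {used, capacity}.
        Map<String, long[]> byType = new LinkedHashMap<>();
        byType.put("DISK", new long[] {750L, 1000L});
        byType.put("SSD",  new long[] {100L, 400L});
        byType.forEach((type, v) ->
            System.out.printf("PercentUsed.%s = %.2f%%%n",
                type, percentUsed(v[0], v[1])));
    }
}
```

The per-type values would then be exposed alongside the existing cluster-wide PercentUsed bean rather than replacing it.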
[jira] [Commented] (HDFS-17470) FsVolumeList#getNextVolume can be moved out of DataSetLock
[ https://issues.apache.org/jira/browse/HDFS-17470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17837412#comment-17837412 ]

ASF GitHub Bot commented on HDFS-17470:
---------------------------------------

hadoop-yetus commented on PR #6733:
URL: https://github.com/apache/hadoop/pull/6733#issuecomment-2057634063

   :broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 32s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 44m 39s | | trunk passed |
| +1 :green_heart: | compile | 1m 20s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | compile | 1m 16s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | checkstyle | 1m 6s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 23s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 6s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 1m 40s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 3m 12s | | trunk passed |
| +1 :green_heart: | shadedclient | 35m 16s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 11s | | the patch passed |
| +1 :green_heart: | compile | 1m 11s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javac | 1m 11s | | the patch passed |
| +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | javac | 1m 7s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 56s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 14s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 51s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 1m 35s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 3m 13s | | the patch passed |
| +1 :green_heart: | shadedclient | 35m 27s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 228m 44s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6733/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. |
| | | | 368m 51s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6733/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6733 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 065368168f63 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 382dfcc3c3e8c4fdd85102c8b1cd602d4fc35298 |
| Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6733/1/testReport/ |
| Max. process+thread count | 3780 (vs.
[jira] [Commented] (HDFS-17466) Move FsVolumeList#getVolumes() invocation out of DataSetLock
[ https://issues.apache.org/jira/browse/HDFS-17466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17837371#comment-17837371 ]

ASF GitHub Bot commented on HDFS-17466:
---------------------------------------

hadoop-yetus commented on PR #6728:
URL: https://github.com/apache/hadoop/pull/6728#issuecomment-2057414555

   :broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 6m 50s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 18s | | trunk passed |
| +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | compile | 0m 43s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | checkstyle | 0m 40s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 48s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 43s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 1m 7s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 1m 48s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 3s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 38s | | the patch passed |
| +1 :green_heart: | compile | 0m 39s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javac | 0m 39s | | the patch passed |
| +1 :green_heart: | compile | 0m 34s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | javac | 0m 34s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 29s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 37s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 30s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 1m 2s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 1m 45s | | the patch passed |
| +1 :green_heart: | shadedclient | 21m 54s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 204m 54s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6728/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 31s | | The patch does not generate ASF License warnings. |
| | | | 304m 15s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.tools.TestDFSAdmin |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6728/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6728 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux f0c1ad3790f1 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 814377ae9d9dde8bbe8786e0da491220cf156fbe |
| Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6728/2/testReport/ |
| Max.
[jira] [Commented] (HDFS-17465) RBF: Use ProportionRouterRpcFairnessPolicyController get "java.lang.Error: Maximum permit count exceeded"
[ https://issues.apache.org/jira/browse/HDFS-17465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17837348#comment-17837348 ]

ASF GitHub Bot commented on HDFS-17465:
---------------------------------------

goiri merged PR #6727:
URL: https://github.com/apache/hadoop/pull/6727

> RBF: Use ProportionRouterRpcFairnessPolicyController get "java.lang.Error:
> Maximum permit count exceeded"
> ----------------------------------------------------------
>
>                 Key: HDFS-17465
>                 URL: https://issues.apache.org/jira/browse/HDFS-17465
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: rbf
>    Affects Versions: 3.5.0
>            Reporter: Xiping Zhang
>            Assignee: Xiping Zhang
>            Priority: Blocker
>              Labels: pull-request-available
>         Attachments: image-2024-04-14-15-39-59-531.png, image-2024-04-14-16-07-32-362.png, image-2024-04-14-16-23-18-499.png
>
> !image-2024-04-14-15-39-59-531.png!
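For context on the error in the title: `java.lang.Error: Maximum permit count exceeded` is what `java.util.concurrent.Semaphore` throws when a `release()` would push its permit count past `Integer.MAX_VALUE` — one plausible way a fairness controller can hit it is by releasing more permits than it ever acquired. A minimal standalone reproduction (no Router code involved; the class and method names below are illustrative only):

```java
import java.util.concurrent.Semaphore;

public class PermitOverflowDemo {
    // Returns true if release() would overflow the permit counter.
    // The JDK signals this with java.lang.Error ("Maximum permit count
    // exceeded"), not an Exception, so we catch Error purely for the demo.
    static boolean overflowsOnRelease(Semaphore s) {
        try {
            s.release();
            return false;
        } catch (Error e) {
            return true;
        }
    }

    public static void main(String[] args) {
        // A semaphore already holding Integer.MAX_VALUE permits: one more
        // release() overflows the internal int counter and throws the Error.
        System.out.println(overflowsOnRelease(new Semaphore(Integer.MAX_VALUE)));
        // A normal semaphore: an extra release is fine, permits just grow.
        System.out.println(overflowsOnRelease(new Semaphore(1)));
    }
}
```

In practice the fix is to keep acquires and releases balanced per handler rather than to catch the Error.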
[jira] [Commented] (HDFS-17469) Audit log for reportBadBlocks RPC
[ https://issues.apache.org/jira/browse/HDFS-17469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17837298#comment-17837298 ]

ASF GitHub Bot commented on HDFS-17469:
---------------------------------------

hadoop-yetus commented on PR #6731:
URL: https://github.com/apache/hadoop/pull/6731#issuecomment-2056991455

   :broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 6m 44s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 32m 52s | | trunk passed |
| +1 :green_heart: | compile | 0m 46s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | checkstyle | 0m 40s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 45s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 42s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 1m 6s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 1m 47s | | trunk passed |
| +1 :green_heart: | shadedclient | 21m 20s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 36s | | the patch passed |
| +1 :green_heart: | compile | 0m 39s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javac | 0m 39s | | the patch passed |
| +1 :green_heart: | compile | 0m 34s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | javac | 0m 34s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 30s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 38s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 30s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 1m 6s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 1m 45s | | the patch passed |
| +1 :green_heart: | shadedclient | 20m 58s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 207m 1s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 31s | | The patch does not generate ASF License warnings. |
| | | | 303m 44s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6731/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6731 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 86a865fcb53f 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 88c4dbd803eadd34297817514c3760aa38819615 |
| Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6731/1/testReport/ |
| Max. process+thread count | 3918 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6731/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 |
[jira] [Commented] (HDFS-17467) IncrementalBlockReportManager#getPerStorageIBR may throw NPE when removing volumes
[ https://issues.apache.org/jira/browse/HDFS-17467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17837287#comment-17837287 ]

ASF GitHub Bot commented on HDFS-17467:
---------------------------------------

hadoop-yetus commented on PR #6730:
URL: https://github.com/apache/hadoop/pull/6730#issuecomment-2056953388

   :broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 21s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 31m 58s | | trunk passed |
| +1 :green_heart: | compile | 0m 43s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | checkstyle | 0m 37s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 45s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 41s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 1m 8s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 1m 47s | | trunk passed |
| +1 :green_heart: | shadedclient | 21m 4s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 39s | | the patch passed |
| +1 :green_heart: | compile | 0m 38s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javac | 0m 38s | | the patch passed |
| +1 :green_heart: | compile | 0m 34s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | javac | 0m 34s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 31s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 39s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 32s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 1m 3s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 1m 45s | | the patch passed |
| +1 :green_heart: | shadedclient | 20m 43s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 201m 36s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6730/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 31s | | The patch does not generate ASF License warnings. |
| | | | 290m 23s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.datanode.TestBPOfferService |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6730/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6730 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 2fab40cb64dc 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / a23b3b2f0304c55902d615953973ba72ab42ce89 |
| Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| Test Results |
[jira] [Updated] (HDFS-17466) Move FsVolumeList#getVolumes() invocation out of DataSetLock
[ https://issues.apache.org/jira/browse/HDFS-17466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

farmmamba updated HDFS-17466:
-----------------------------
    Summary: Move FsVolumeList#getVolumes() invocation out of DataSetLock  (was: Remove FsVolumeList#getVolumes() invocation out of DataSetLock)

> Move FsVolumeList#getVolumes() invocation out of DataSetLock
> ------------------------------------------------------------
>
>                 Key: HDFS-17466
>                 URL: https://issues.apache.org/jira/browse/HDFS-17466
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>    Affects Versions: 3.4.0
>            Reporter: farmmamba
>            Assignee: farmmamba
>            Priority: Major
>              Labels: pull-request-available
[jira] [Updated] (HDFS-17470) FsVolumeList#getNextVolume can be moved out of DataSetLock
[ https://issues.apache.org/jira/browse/HDFS-17470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HDFS-17470:
----------------------------------
    Labels: pull-request-available  (was: )

> FsVolumeList#getNextVolume can be moved out of DataSetLock
> ----------------------------------------------------------
>
>                 Key: HDFS-17470
>                 URL: https://issues.apache.org/jira/browse/HDFS-17470
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>            Reporter: farmmamba
>            Assignee: farmmamba
>            Priority: Major
>              Labels: pull-request-available
>
> FsVolumeList#getNextVolume can be out of BLOCK_POOl read lock.
[jira] [Commented] (HDFS-17470) FsVolumeList#getNextVolume can be moved out of DataSetLock
[ https://issues.apache.org/jira/browse/HDFS-17470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17837239#comment-17837239 ]

ASF GitHub Bot commented on HDFS-17470:
---------------------------------------

hfutatzhanghb opened a new pull request, #6733:
URL: https://github.com/apache/hadoop/pull/6733

   ### Description of PR
   FsVolumeList#getNextVolume can be out of BLOCK_POOl read lock.
[jira] [Updated] (HDFS-17470) FsVolumeList#getNextVolume can be moved out of DataSetLock
[ https://issues.apache.org/jira/browse/HDFS-17470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

farmmamba updated HDFS-17470:
-----------------------------
    Issue Type: Improvement  (was: Task)
[jira] [Updated] (HDFS-17470) FsVolumeList#getNextVolume can be moved out of DataSetLock
[ https://issues.apache.org/jira/browse/HDFS-17470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

farmmamba updated HDFS-17470:
-----------------------------
        Parent: HDFS-15382
    Issue Type: Sub-task  (was: Improvement)
[jira] [Created] (HDFS-17470) FsVolumeList#getNextVolume can be moved out of DataSetLock
farmmamba created HDFS-17470:
--------------------------------

             Summary: FsVolumeList#getNextVolume can be moved out of DataSetLock
                 Key: HDFS-17470
                 URL: https://issues.apache.org/jira/browse/HDFS-17470
             Project: Hadoop HDFS
          Issue Type: Task
          Components: datanode
            Reporter: farmmamba
            Assignee: farmmamba


FsVolumeList#getNextVolume can be out of BLOCK_POOl read lock.
[jira] [Commented] (HDFS-17466) Remove FsVolumeList#getVolumes() invocation out of DataSetLock
[ https://issues.apache.org/jira/browse/HDFS-17466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17837209#comment-17837209 ]

ASF GitHub Bot commented on HDFS-17466:
---------------------------------------

hfutatzhanghb commented on PR #6728:
URL: https://github.com/apache/hadoop/pull/6728#issuecomment-2056671003

   The failed UT passed in my local environment.
[jira] [Commented] (HDFS-17466) Remove FsVolumeList#getVolumes() invocation out of DataSetLock
[ https://issues.apache.org/jira/browse/HDFS-17466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17837208#comment-17837208 ]

ASF GitHub Bot commented on HDFS-17466:
---------------------------------------

hfutatzhanghb commented on PR #6728:
URL: https://github.com/apache/hadoop/pull/6728#issuecomment-205807

   @zhangshuyan0 Sir, could you please help me review this PR when you have free time? Thanks a lot.
[jira] [Commented] (HDFS-17458) Remove unnecessary BP lock in ReplicaMap
[ https://issues.apache.org/jira/browse/HDFS-17458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17837206#comment-17837206 ]

ASF GitHub Bot commented on HDFS-17458:
---------------------------------------

zhangshuyan0 commented on code in PR #6717:
URL: https://github.com/apache/hadoop/pull/6717#discussion_r1565660296

##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java:
##########

@@ -120,15 +118,13 @@ ReplicaInfo get(String bpid, long blockId) {
   ReplicaInfo add(String bpid, ReplicaInfo replicaInfo) {
     checkBlockPool(bpid);
     checkBlock(replicaInfo);
-    try (AutoCloseDataSetLock l = lockManager.readLock(LockLevel.BLOCK_POOl, bpid)) {
-      LightWeightResizableGSet<Block, ReplicaInfo> m = map.get(bpid);
-      if (m == null) {
-        // Add an entry for block pool if it does not exist already
-        map.putIfAbsent(bpid, new LightWeightResizableGSet<Block, ReplicaInfo>());
-        m = map.get(bpid);
-      }
-      return m.put(replicaInfo);
+    LightWeightResizableGSet<Block, ReplicaInfo> m = map.get(bpid);
+    if (m == null) {
+      // Add an entry for block pool if it does not exist already
+      map.putIfAbsent(bpid, new LightWeightResizableGSet<Block, ReplicaInfo>());
+      m = map.get(bpid);
     }
+    return m.put(replicaInfo);

Review Comment:
   It's not safe here. If there is somebody changing the `map` after line 125 but before line 127, the `replicaInfo` may not be added to `map`.

> Remove unnecessary BP lock in ReplicaMap
> ----------------------------------------
>
>                 Key: HDFS-17458
>                 URL: https://issues.apache.org/jira/browse/HDFS-17458
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>    Affects Versions: 3.4.0
>            Reporter: farmmamba
>            Assignee: farmmamba
>            Priority: Major
>              Labels: pull-request-available
>
> In HDFS-16429 we make LightWeightResizableGSet to be thread safe, and in
> HDFS-16511 we change some methods in ReplicaMap to acquire read lock instead
> of acquiring write lock.
> This PR try to remove unnecessary Block_Pool read lock further.
> Recently, I performed stress tests on datanodes to measure their read/write
> operations/second.
> Before we removing some lock, it can only achieve ~2K write ops. After
> optimizing, it can achieve more than 5K write ops.
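The race the reviewer flags above is the classic check-then-act gap: `get` / `putIfAbsent` / `get` is three separate atomic steps, so a concurrent removal of the block-pool entry between them can leave the second `get` returning null. A standalone sketch contrasting the two patterns, using plain `ConcurrentHashMap` stand-ins rather than the actual `ReplicaMap`/`LightWeightResizableGSet` types (the class and method names here are illustrative, not Hadoop APIs):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CheckThenActDemo {
    // Stand-in for ReplicaMap's bpid -> replica-set mapping; Long keys stand
    // in for block IDs and String values for ReplicaInfo objects.
    static final Map<String, Map<Long, String>> map = new ConcurrentHashMap<>();

    // Mirrors the reviewed pattern. Without an outer lock, a concurrent
    // removal of bpid between putIfAbsent and the second get can make m
    // null again (NPE on m.put), or the put can land in a set that is no
    // longer reachable from map.
    static String addUnsafe(String bpid, long blockId, String replica) {
        Map<Long, String> m = map.get(bpid);
        if (m == null) {
            map.putIfAbsent(bpid, new ConcurrentHashMap<>());
            m = map.get(bpid);  // may observe null after a concurrent remove
        }
        return m.put(blockId, replica);
    }

    // computeIfAbsent performs the check-then-insert as one atomic step,
    // so the returned set is the one registered under bpid at that instant.
    static String addAtomic(String bpid, long blockId, String replica) {
        return map.computeIfAbsent(bpid, k -> new ConcurrentHashMap<>())
                  .put(blockId, replica);
    }
}
```

Note that even `computeIfAbsent` only closes the null-dereference window; if entries can be removed concurrently, the insert can still land in a just-removed set, which is why the lock-removal needs the wider locking design of the parent work to be safe.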
[jira] [Commented] (HDFS-17466) Remove FsVolumeList#getVolumes() invocation out of DataSetLock
[ https://issues.apache.org/jira/browse/HDFS-17466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17837197#comment-17837197 ]

ASF GitHub Bot commented on HDFS-17466:
---------------------------------------

hadoop-yetus commented on PR #6728:
URL: https://github.com/apache/hadoop/pull/6728#issuecomment-2056636387

   :broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 50s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 50m 22s | | trunk passed |
| +1 :green_heart: | compile | 1m 40s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | compile | 1m 25s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | checkstyle | 1m 19s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 40s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 16s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 1m 55s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 3m 54s | | trunk passed |
| +1 :green_heart: | shadedclient | 44m 56s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 24s | | the patch passed |
| +1 :green_heart: | compile | 1m 25s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javac | 1m 25s | | the patch passed |
| +1 :green_heart: | compile | 1m 14s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | javac | 1m 14s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 6s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6728/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 52 unchanged - 0 fixed = 53 total (was 52) |
| +1 :green_heart: | mvnsite | 1m 26s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 12s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 |
| +1 :green_heart: | javadoc | 1m 41s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| +1 :green_heart: | spotbugs | 3m 48s | | the patch passed |
| +1 :green_heart: | shadedclient | 45m 36s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 263m 42s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6728/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 49s | | The patch does not generate ASF License warnings. |
| | | | 433m 49s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6728/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6728 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux a6d212bc0247 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / a7f45345d08ffbec02ac97dacc1b88caf9f3e63f |
| Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
| Multi-JDK versions |
[jira] [Commented] (HDFS-15413) DFSStripedInputStream throws exception when datanodes close idle connections
[ https://issues.apache.org/jira/browse/HDFS-15413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837170#comment-17837170 ] ASF GitHub Bot commented on HDFS-15413: --- zhangshuyan0 commented on PR #5829: URL: https://github.com/apache/hadoop/pull/5829#issuecomment-2056526550 @Neilxzn Hi, this patch is very useful, would you mind further fixing this PR? > DFSStripedInputStream throws exception when datanodes close idle connections > > > Key: HDFS-15413 > URL: https://issues.apache.org/jira/browse/HDFS-15413 > Project: Hadoop HDFS > Issue Type: Bug > Components: ec, erasure-coding, hdfs-client >Affects Versions: 3.1.3 > Environment: - Hadoop 3.1.3 > - erasure coding with ISA-L and RS-3-2-1024k scheme > - running in kubernetes > - dfs.client.socket-timeout = 1 > - dfs.datanode.socket.write.timeout = 1 >Reporter: Andrey Elenskiy >Priority: Critical > Labels: pull-request-available > Attachments: out.log > > > We've run into an issue with compactions failing in HBase when erasure coding > is enabled on a table directory. 
After digging further I was able to narrow > it down to a seek + read logic and able to reproduce the issue with hdfs > client only: > {code:java} > import org.apache.hadoop.conf.Configuration; > import org.apache.hadoop.fs.Path; > import org.apache.hadoop.fs.FileSystem; > import org.apache.hadoop.fs.FSDataInputStream; > public class ReaderRaw { > public static void main(final String[] args) throws Exception { > Path p = new Path(args[0]); > int bufLen = Integer.parseInt(args[1]); > int sleepDuration = Integer.parseInt(args[2]); > int countBeforeSleep = Integer.parseInt(args[3]); > int countAfterSleep = Integer.parseInt(args[4]); > Configuration conf = new Configuration(); > FSDataInputStream istream = FileSystem.get(conf).open(p); > byte[] buf = new byte[bufLen]; > int readTotal = 0; > int count = 0; > try { > while (true) { > istream.seek(readTotal); > int bytesRemaining = bufLen; > int bufOffset = 0; > while (bytesRemaining > 0) { > int nread = istream.read(buf, 0, bufLen); > if (nread < 0) { > throw new Exception("nread is less than zero"); > } > readTotal += nread; > bufOffset += nread; > bytesRemaining -= nread; > } > count++; > if (count == countBeforeSleep) { > System.out.println("sleeping for " + sleepDuration + " > milliseconds"); > Thread.sleep(sleepDuration); > System.out.println("resuming"); > } > if (count == countBeforeSleep + countAfterSleep) { > System.out.println("done"); > break; > } > } > } catch (Exception e) { > System.out.println("exception on read " + count + " read total " > + readTotal); > throw e; > } > } > } > {code} > The issue appears to be due to the fact that datanodes close the connection > of EC client if it doesn't fetch next packet for longer than > dfs.client.socket-timeout. The EC client doesn't retry and instead assumes > that those datanodes went away resulting in "missing blocks" exception. 
> I was able to consistently reproduce with the following arguments: > {noformat} > bufLen = 100 (just below 1MB which is the size of the stripe) > sleepDuration = (dfs.client.socket-timeout + 1) * 1000 (in our case 11000) > countBeforeSleep = 1 > countAfterSleep = 7 > {noformat} > I've attached the entire log output of running the snippet above against > erasure coded file with RS-3-2-1024k policy. And here are the logs from > datanodes of disconnecting the client: > datanode 1: > {noformat} > 2020-06-15 19:06:20,697 INFO datanode.DataNode: Likely the client has stopped > reading, disconnecting it (datanode-v11-0-hadoop.hadoop:9866:DataXceiver > error processing READ_BLOCK operation src: /10.128.23.40:53748 dst: > /10.128.14.46:9866); java.net.SocketTimeoutException: 1 millis timeout > while waiting for channel to be ready for write. ch : > java.nio.channels.SocketChannel[connected local=/10.128.14.46:9866 > remote=/10.128.23.40:53748] > {noformat} > datanode 2: > {noformat} > 2020-06-15 19:06:20,341 INFO datanode.DataNode: Likely the client has stopped > reading, disconnecting it (datanode-v11-1-hadoop.hadoop:9866:DataXceiver > error processing READ_BLOCK operation src: /10.128.23.40:48772 dst: > /10.128.9.42:9866); java.net.SocketTimeoutException: 1 millis timeout > while waiting for channel to be
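As an aside on the reproduction snippet above: its inner loop always reads into offset 0 (`istream.read(buf, 0, bufLen)`) while still decrementing `bytesRemaining`, so later passes overwrite earlier bytes. A conventional read-fully loop advances the offset each pass. A sketch, using plain `java.io.InputStream` as a stand-in for `FSDataInputStream`:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadFullySketch {
    // Read until the buffer is full or the stream ends; returns bytes read.
    static int readFully(InputStream in, byte[] buf) throws IOException {
        int off = 0;
        while (off < buf.length) {
            int n = in.read(buf, off, buf.length - off); // read into the unfilled tail
            if (n < 0) {
                break; // end of stream before the buffer filled
            }
            off += n;
        }
        return off;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "0123456789".getBytes();
        byte[] buf = new byte[10];
        System.out.println(readFully(new ByteArrayInputStream(data), buf));
    }
}
```

This does not change the reported timeout behavior; it only shows the usual shape of such a loop.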
[jira] [Resolved] (HDFS-17383) Datanode current block token should come from active NameNode in HA mode
[ https://issues.apache.org/jira/browse/HDFS-17383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shuyan Zhang resolved HDFS-17383. - Fix Version/s: 3.5.0 Hadoop Flags: Reviewed Target Version/s: 3.5.0 Assignee: lei w Resolution: Fixed > Datanode current block token should come from active NameNode in HA mode > > > Key: HDFS-17383 > URL: https://issues.apache.org/jira/browse/HDFS-17383 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: lei w >Assignee: lei w >Priority: Major > Labels: pull-request-available > Fix For: 3.5.0 > > Attachments: reproduce.diff > > > We found that block transfer failed during a NameNode upgrade. The specific error was that block token verification failed. > During the DataNode transfer-block process, the source DataNode uses a block token it generated itself, whose keyid comes from the ANN or SBN. Because the newly upgraded NN has only just started, the keyid held by the source DataNode may not yet be held by the target DataNode, so the write fails. > The attachment shows how to reproduce this situation. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17383) Datanode current block token should come from active NameNode in HA mode
[ https://issues.apache.org/jira/browse/HDFS-17383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837167#comment-17837167 ] ASF GitHub Bot commented on HDFS-17383: --- zhangshuyan0 merged PR #6562: URL: https://github.com/apache/hadoop/pull/6562 > Datanode current block token should come from active NameNode in HA mode > > > Key: HDFS-17383 > URL: https://issues.apache.org/jira/browse/HDFS-17383 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: lei w >Priority: Major > Labels: pull-request-available > Attachments: reproduce.diff > > > We found that transfer block failed during the namenode upgrade. The specific > error reported was that the block token verification failed. The reason is > that during the datanode transfer block process, the source datanode uses its > own generated block token, and the keyid comes from ANN or SBN. However, > because the newly upgraded NN has just been started, the keyid owned by the > source datanode may not be owned by the target datanode, so the write fails. > Here's how to reproduce this situation in the attachment -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17383) Datanode current block token should come from active NameNode in HA mode
[ https://issues.apache.org/jira/browse/HDFS-17383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837144#comment-17837144 ] ASF GitHub Bot commented on HDFS-17383: --- hadoop-yetus commented on PR #6562: URL: https://github.com/apache/hadoop/pull/6562#issuecomment-2056402030 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 50s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 6 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 49m 42s | | trunk passed | | +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | compile | 1m 15s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | checkstyle | 1m 15s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 25s | | trunk passed | | +1 :green_heart: | javadoc | 1m 9s | | trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 1m 45s | | trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | spotbugs | 3m 17s | | trunk passed | | +1 :green_heart: | shadedclient | 40m 56s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 13s | | the patch passed | | +1 :green_heart: | compile | 1m 18s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javac | 1m 18s | | the patch passed | | +1 :green_heart: | compile | 1m 8s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | javac | 1m 8s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 5s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 18s | | the patch passed | | +1 :green_heart: | javadoc | 1m 3s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 1m 28s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | spotbugs | 3m 31s | | the patch passed | | +1 :green_heart: | shadedclient | 41m 18s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 261m 38s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 1m 1s | | The patch does not generate ASF License warnings. 
| | | | 420m 4s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/9/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6562 | | JIRA Issue | HDFS-17383 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux d5ed772e6a61 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / c4a86cc4be7f275bc51fa4f471a2d7c1f0e79a54 | | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/9/testReport/ | | Max. process+thread count | 2878 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/9/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. > Datanode
[jira] [Updated] (HDFS-17469) Audit log for reportBadBlocks RPC
[ https://issues.apache.org/jira/browse/HDFS-17469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDFS-17469: -- Labels: pull-request-available (was: ) > Audit log for reportBadBlocks RPC > - > > Key: HDFS-17469 > URL: https://issues.apache.org/jira/browse/HDFS-17469 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: dzcxzl >Priority: Minor > Labels: pull-request-available > > After [HDFS-10347|https://issues.apache.org/jira/browse/HDFS-10347], we can > know the DN corresponding to the reported bad block, but we do not know the > reported Client IP. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17469) Audit log for reportBadBlocks RPC
[ https://issues.apache.org/jira/browse/HDFS-17469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837134#comment-17837134 ] ASF GitHub Bot commented on HDFS-17469: --- cxzl25 opened a new pull request, #6731: URL: https://github.com/apache/hadoop/pull/6731 ### Description of PR After [HDFS-10347|https://issues.apache.org/jira/browse/HDFS-10347], we can know the DN corresponding to the reported bad block, but we do not know the reported Client IP. ### How was this patch tested? ### For code changes: - [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? > Audit log for reportBadBlocks RPC > - > > Key: HDFS-17469 > URL: https://issues.apache.org/jira/browse/HDFS-17469 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: dzcxzl >Priority: Minor > > After [HDFS-10347|https://issues.apache.org/jira/browse/HDFS-10347], we can > know the DN corresponding to the reported bad block, but we do not know the > reported Client IP. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
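The change above proposes recording the reporting client's IP in the NameNode audit log for reportBadBlocks. As a rough illustration of the kind of audit line intended; the method name and format below are hypothetical stand-ins, not the NameNode's actual audit-logger API:

```java
// Hypothetical sketch: an audit line that carries the caller's IP
// alongside the reporting DataNode and the bad block.
public class ReportBadBlocksAudit {
    static String auditLine(String clientIp, String datanode, String blockId) {
        return String.format("cmd=reportBadBlocks ip=%s dn=%s blk=%s",
                clientIp, datanode, blockId);
    }

    public static void main(String[] args) {
        System.out.println(auditLine("10.0.0.5", "dn-1:9866", "blk_1073741825"));
    }
}
```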
[jira] [Updated] (HDFS-17467) IncrementalBlockReportManager#getPerStorageIBR may throw NPE when remove volumes
[ https://issues.apache.org/jira/browse/HDFS-17467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDFS-17467: -- Labels: pull-request-available (was: ) > IncrementalBlockReportManager#getPerStorageIBR may throw NPE when remove > volumes > > > Key: HDFS-17467 > URL: https://issues.apache.org/jira/browse/HDFS-17467 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 3.4.0 >Reporter: farmmamba >Assignee: farmmamba >Priority: Major > Labels: pull-request-available > > Removing volumes may cause IncrementalBlockReportManager#getPerStorageIBR to throw an NPE. > Consider the following situation: > 1. createRbw and finalizeBlock have completed, but datanode.closeBlock in `BlockReceiver.PacketResponder#finalizeBlock` has not run yet. > 2. The volume the replica was written to is removed, which executes `storageMap.remove(storageUuid);`. > 3. datanode.closeBlock then tries to send an IBR to the NameNode, but the DatanodeStorage lookup in storageMap by storageUuid returns null, because that storageUuid key has already been removed. > 4. getPerStorageIBR throws an NPE, because ConcurrentHashMap does not allow null keys. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
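The failure mode described in HDFS-17467 can be reproduced in isolation: ConcurrentHashMap rejects null keys, so a storage lookup that returns null after a volume removal turns into a NullPointerException. A minimal sketch with stand-in maps (not Hadoop's actual IncrementalBlockReportManager types):

```java
import java.util.concurrent.ConcurrentHashMap;

public class IbrNpeSketch {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> storageMap = new ConcurrentHashMap<>();
        storageMap.put("storage-1", "DISK");
        storageMap.remove("storage-1");               // volume removed concurrently

        String storage = storageMap.get("storage-1"); // null after removal
        ConcurrentHashMap<String, String> perStorageIBR = new ConcurrentHashMap<>();
        try {
            perStorageIBR.put(storage, "ibr");        // null key -> NPE
        } catch (NullPointerException e) {
            System.out.println("NPE: ConcurrentHashMap rejects null keys");
        }

        // A defensive fix is to skip (or log) reports whose storage is gone:
        if (storage != null) {
            perStorageIBR.put(storage, "ibr");
        }
        System.out.println("reports queued: " + perStorageIBR.size());
    }
}
```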
[jira] [Created] (HDFS-17469) Audit log for reportBadBlocks RPC
dzcxzl created HDFS-17469: - Summary: Audit log for reportBadBlocks RPC Key: HDFS-17469 URL: https://issues.apache.org/jira/browse/HDFS-17469 Project: Hadoop HDFS Issue Type: Improvement Components: namenode Reporter: dzcxzl After [HDFS-10347|https://issues.apache.org/jira/browse/HDFS-10347], we can know the DN corresponding to the reported bad block, but we do not know the reported Client IP. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17467) IncrementalBlockReportManager#getPerStorageIBR may throw NPE when remove volumes
[ https://issues.apache.org/jira/browse/HDFS-17467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837131#comment-17837131 ] ASF GitHub Bot commented on HDFS-17467: --- hfutatzhanghb opened a new pull request, #6730: URL: https://github.com/apache/hadoop/pull/6730 ### Description of PR Refer to HDFS-17467. > IncrementalBlockReportManager#getPerStorageIBR may throw NPE when remove > volumes > > > Key: HDFS-17467 > URL: https://issues.apache.org/jira/browse/HDFS-17467 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 3.4.0 >Reporter: farmmamba >Assignee: farmmamba >Priority: Major > > Removing volumes may cause IncrementalBlockReportManager#getPerStorageIBR to throw an NPE. > Consider the following situation: > 1. createRbw and finalizeBlock have completed, but datanode.closeBlock in `BlockReceiver.PacketResponder#finalizeBlock` has not run yet. > 2. The volume the replica was written to is removed, which executes `storageMap.remove(storageUuid);`. > 3. datanode.closeBlock then tries to send an IBR to the NameNode, but the DatanodeStorage lookup in storageMap by storageUuid returns null, because that storageUuid key has already been removed. > 4. getPerStorageIBR throws an NPE, because ConcurrentHashMap does not allow null keys. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17424) [FGL] DelegationTokenSecretManager supports fine-grained lock
[ https://issues.apache.org/jira/browse/HDFS-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837122#comment-17837122 ] ASF GitHub Bot commented on HDFS-17424: --- hadoop-yetus commented on PR #6696: URL: https://github.com/apache/hadoop/pull/6696#issuecomment-2056272750 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 7m 49s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ HDFS-17384 Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 40s | | HDFS-17384 passed | | +1 :green_heart: | compile | 0m 43s | | HDFS-17384 passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | compile | 0m 39s | | HDFS-17384 passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | checkstyle | 0m 41s | | HDFS-17384 passed | | +1 :green_heart: | mvnsite | 0m 47s | | HDFS-17384 passed | | +1 :green_heart: | javadoc | 0m 43s | | HDFS-17384 passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 1m 6s | | HDFS-17384 passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | spotbugs | 1m 47s | | HDFS-17384 passed | | +1 :green_heart: | shadedclient | 21m 8s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 38s | | the patch passed | | +1 :green_heart: | compile | 0m 38s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javac | 0m 38s | | the patch passed | | +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | javac | 0m 35s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 31s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 37s | | the patch passed | | +1 :green_heart: | javadoc | 0m 32s | | the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 | | +1 :green_heart: | javadoc | 1m 2s | | the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | +1 :green_heart: | spotbugs | 1m 41s | | the patch passed | | +1 :green_heart: | shadedclient | 21m 16s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 209m 1s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6696/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 28s | | The patch does not generate ASF License warnings. 
| | | | 306m 11s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.tools.TestDFSAdmin | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6696/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6696 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 46ffa4fbf2b0 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | HDFS-17384 / 14a76f9e796075759c8739dd2fafe54f9a457a6b | | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 | | Test Results |
[jira] [Created] (HDFS-17468) Update ISA-L to 2.31.0 in the build image
Takanobu Asanuma created HDFS-17468: --- Summary: Update ISA-L to 2.31.0 in the build image Key: HDFS-17468 URL: https://issues.apache.org/jira/browse/HDFS-17468 Project: Hadoop HDFS Issue Type: Task Reporter: Takanobu Asanuma Assignee: Takanobu Asanuma Intel ISA-L has several improvements in version 2.31.0. Let's update ISA-L in our build image to this version. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17467) IncrementalBlockReportManager#getPerStorageIBR may throw NPE when remove volumes
[ https://issues.apache.org/jira/browse/HDFS-17467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837114#comment-17837114 ] farmmamba commented on HDFS-17467: -- [~hexiaoqiao] [~zhangshuyan] [~ayushsaxena] [~tomscut] Could you please take a look at this problem when you have time? Thanks a lot. > IncrementalBlockReportManager#getPerStorageIBR may throw NPE when remove > volumes > > > Key: HDFS-17467 > URL: https://issues.apache.org/jira/browse/HDFS-17467 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 3.4.0 >Reporter: farmmamba >Assignee: farmmamba >Priority: Major > > Removing volumes may cause IncrementalBlockReportManager#getPerStorageIBR to throw an NPE. > Consider the following situation: > 1. createRbw and finalizeBlock have completed, but datanode.closeBlock in `BlockReceiver.PacketResponder#finalizeBlock` has not run yet. > 2. The volume the replica was written to is removed, which executes `storageMap.remove(storageUuid);`. > 3. datanode.closeBlock then tries to send an IBR to the NameNode, but the DatanodeStorage lookup in storageMap by storageUuid returns null, because that storageUuid key has already been removed. > 4. getPerStorageIBR throws an NPE, because ConcurrentHashMap does not allow null keys. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-17467) IncrementalBlockReportManager#getPerStorageIBR may throw NPE when remove volumes
[ https://issues.apache.org/jira/browse/HDFS-17467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] farmmamba updated HDFS-17467: - Description: Removing volumes may cause IncrementalBlockReportManager#getPerStorageIBR to throw an NPE. Consider the following situation: 1. createRbw and finalizeBlock have completed, but datanode.closeBlock in `BlockReceiver.PacketResponder#finalizeBlock` has not run yet. 2. The volume the replica was written to is removed, which executes `storageMap.remove(storageUuid);`. 3. datanode.closeBlock then tries to send an IBR to the NameNode, but the DatanodeStorage lookup in storageMap by storageUuid returns null, because that storageUuid key has already been removed. 4. getPerStorageIBR throws an NPE, because ConcurrentHashMap does not allow null keys. was: When we remove volumes, it may causeConsider below situation: > IncrementalBlockReportManager#getPerStorageIBR may throw NPE when remove > volumes > > > Key: HDFS-17467 > URL: https://issues.apache.org/jira/browse/HDFS-17467 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 3.4.0 >Reporter: farmmamba >Assignee: farmmamba >Priority: Major > > Removing volumes may cause IncrementalBlockReportManager#getPerStorageIBR to throw an NPE. > Consider the following situation: > 1. createRbw and finalizeBlock have completed, but datanode.closeBlock in `BlockReceiver.PacketResponder#finalizeBlock` has not run yet. > 2. The volume the replica was written to is removed, which executes `storageMap.remove(storageUuid);`. > 3. datanode.closeBlock then tries to send an IBR to the NameNode, but the DatanodeStorage lookup in storageMap by storageUuid returns null, because that storageUuid key has already been removed. > 4. getPerStorageIBR throws an NPE, because ConcurrentHashMap does not allow null keys. 
[jira] [Updated] (HDFS-17467) IncrementalBlockReportManager#getPerStorageIBR may throw NPE when remove volumes
[ https://issues.apache.org/jira/browse/HDFS-17467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] farmmamba updated HDFS-17467: - Description: When we remove volumes, it may causeConsider below situation: > IncrementalBlockReportManager#getPerStorageIBR may throw NPE when remove > volumes > > > Key: HDFS-17467 > URL: https://issues.apache.org/jira/browse/HDFS-17467 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 3.4.0 >Reporter: farmmamba >Assignee: farmmamba >Priority: Major > > When we remove volumes, it may causeConsider below situation: > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-17467) IncrementalBlockReportManager#getPerStorageIBR may throw NPE when remove volumes
farmmamba created HDFS-17467: Summary: IncrementalBlockReportManager#getPerStorageIBR may throw NPE when remove volumes Key: HDFS-17467 URL: https://issues.apache.org/jira/browse/HDFS-17467 Project: Hadoop HDFS Issue Type: Bug Components: datanode Affects Versions: 3.4.0 Reporter: farmmamba Assignee: farmmamba -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-17465) RBF: Use ProportionRouterRpcFairnessPolicyController get “java.Lang. Error: Maximum permit count exceeded”
[ https://issues.apache.org/jira/browse/HDFS-17465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiping Zhang resolved HDFS-17465. - Resolution: Fixed > RBF: Use ProportionRouterRpcFairnessPolicyController get “java.Lang. Error: > Maximum permit count exceeded” > --- > > Key: HDFS-17465 > URL: https://issues.apache.org/jira/browse/HDFS-17465 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf >Affects Versions: 3.5.0 >Reporter: Xiping Zhang >Assignee: Xiping Zhang >Priority: Blocker > Labels: pull-request-available > Attachments: image-2024-04-14-15-39-59-531.png, > image-2024-04-14-16-07-32-362.png, image-2024-04-14-16-23-18-499.png > > > !image-2024-04-14-15-39-59-531.png! -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
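The "Maximum permit count exceeded" error in the title is what `java.util.concurrent.Semaphore` throws when a `release()` would push its permit count past `Integer.MAX_VALUE`, which can happen if a controller releases permits it never acquired. A minimal standalone reproduction (a generic sketch, not the ProportionRouterRpcFairnessPolicyController itself):

```java
import java.util.concurrent.Semaphore;

public class PermitOverflowSketch {
    public static void main(String[] args) {
        // Start at the maximum permit count; one extra release overflows it.
        Semaphore sem = new Semaphore(Integer.MAX_VALUE);
        try {
            sem.release(); // throws java.lang.Error("Maximum permit count exceeded")
        } catch (Error e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This is why unbalanced acquire/release accounting in a fairness controller surfaces as a `java.lang.Error` rather than an ordinary exception.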