[jira] [Commented] (HDFS-17228) Improve documentation related to BlockManager
[ https://issues.apache.org/jira/browse/HDFS-17228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778899#comment-17778899 ]

ASF GitHub Bot commented on HDFS-17228:
---------------------------------------

haiyang1987 commented on PR #6195:
URL: https://github.com/apache/hadoop/pull/6195#issuecomment-1776392248

Got it, thanks @ayushtkn for your comment.

> Improve documentation related to BlockManager
> ---------------------------------------------
>
>                 Key: HDFS-17228
>                 URL: https://issues.apache.org/jira/browse/HDFS-17228
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: block placement, documentation
>    Affects Versions: 3.3.3, 3.3.6
>            Reporter: JiangHua Zhu
>            Assignee: JiangHua Zhu
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>         Attachments: image-2023-10-17-17-25-27-363.png
>
> In the BlockManager file, some important comments are missing.
> Happens here:
> !image-2023-10-17-17-25-27-363.png!
> If it is improved, the robustness of the distributed system can be increased.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17235) Fix javadoc errors in BlockManager
[ https://issues.apache.org/jira/browse/HDFS-17235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778898#comment-17778898 ]

ASF GitHub Bot commented on HDFS-17235:
---------------------------------------

haiyang1987 commented on PR #6214:
URL: https://github.com/apache/hadoop/pull/6214#issuecomment-1776391222

Thanks @ayushtkn @slfan1989 @steveloughran for helping me review and merge.

> Fix javadoc errors in BlockManager
> ----------------------------------
>
>                 Key: HDFS-17235
>                 URL: https://issues.apache.org/jira/browse/HDFS-17235
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: documentation
>            Reporter: Haiyang Hu
>            Assignee: Haiyang Hu
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
> There are 2 errors in BlockManager.java
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6194/4/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt
> {code:java}
> [ERROR] /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-6194/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:153: error: reference not found
> [ERROR]  * by {@link DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY}. This number has to <=
> [ERROR]              ^
> [ERROR] /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-6194/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:154: error: reference not found
> [ERROR]  * {@link DFS_NAMENODE_REPLICATION_MIN_KEY}.
> {code}
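For context on errors like the two above: an unresolved `{@link}` usually means the target is not visible to javadoc at that point (e.g. a statically imported constant), and a common remedy is to fully qualify the reference or to downgrade it to `{@code}` when no hyperlink is needed. The sketch below is illustrative only — it uses `org.apache.hadoop.hdfs.DFSConfigKeys` as the likely home of these constants, and is not the actual patch applied in PR #6214:

```java
// Hypothetical sketch: two ways to write a javadoc reference so that
// JDK 11's stricter javadoc does not report "reference not found".
public class BlockManagerDocExample {

  /**
   * Option 1: fully qualify the link target so javadoc can resolve it:
   * {@link org.apache.hadoop.hdfs.DFSConfigKeys#DFS_NAMENODE_REPLICATION_MIN_KEY}.
   *
   * Option 2: if no hyperlink is needed, use an inline code tag, which
   * javadoc never tries to resolve: {@code DFS_NAMENODE_REPLICATION_MIN_KEY}.
   */
  public int minReplication() {
    return 1; // placeholder value for the sketch
  }
}
```

Either form passes the JDK 11 javadoc check; the fully-qualified `{@link}` keeps the hyperlink in generated docs, while `{@code}` trades the link for simpler source.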
[jira] [Commented] (HDFS-17237) Remove IPCLoggerChannel Metrics when the logger is closed
[ https://issues.apache.org/jira/browse/HDFS-17237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778773#comment-17778773 ]

ASF GitHub Bot commented on HDFS-17237:
---------------------------------------

hadoop-yetus commented on PR #6217:
URL: https://github.com/apache/hadoop/pull/6217#issuecomment-1775727643

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 12m 8s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 44m 5s | | trunk passed |
| +1 :green_heart: | compile | 1m 16s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 1m 12s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 1m 8s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 22s | | trunk passed |
| -1 :x: | javadoc | 1m 5s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6217/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04. |
| +1 :green_heart: | javadoc | 1m 36s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 3m 12s | | trunk passed |
| +1 :green_heart: | shadedclient | 34m 19s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 10s | | the patch passed |
| +1 :green_heart: | compile | 1m 13s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 1m 13s | | the patch passed |
| +1 :green_heart: | compile | 1m 3s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | javac | 1m 3s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 57s | | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 9 unchanged - 3 fixed = 9 total (was 12) |
| +1 :green_heart: | mvnsite | 1m 12s | | the patch passed |
| -1 :x: | javadoc | 0m 51s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6217/1/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04. |
| +1 :green_heart: | javadoc | 1m 27s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 3m 10s | | the patch passed |
| +1 :green_heart: | shadedclient | 34m 14s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 209m 29s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 42s | | The patch does not generate ASF License warnings. |
| | | 356m 44s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6217/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6217 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 30c2cb3a0a2f 5.15.0-86-generic #96-Ubuntu SMP Wed Sep 20 08:23:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 2766261e79ac836adfcdd2c19c67d5bf79f45cd0 |
| Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/jav
[jira] [Commented] (HDFS-17231) HA: Safemode should exit when resources are from low to available
[ https://issues.apache.org/jira/browse/HDFS-17231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778739#comment-17778739 ]

ASF GitHub Bot commented on HDFS-17231:
---------------------------------------

hadoop-yetus commented on PR #6207:
URL: https://github.com/apache/hadoop/pull/6207#issuecomment-1775578665

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 25s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 36s | | trunk passed |
| +1 :green_heart: | compile | 0m 53s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 48s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 0m 45s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 52s | | trunk passed |
| -1 :x: | javadoc | 0m 51s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6207/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04. |
| +1 :green_heart: | javadoc | 1m 10s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 1m 55s | | trunk passed |
| +1 :green_heart: | shadedclient | 21m 36s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 43s | | the patch passed |
| +1 :green_heart: | compile | 0m 46s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 46s | | the patch passed |
| +1 :green_heart: | compile | 0m 40s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | javac | 0m 40s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 33s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 45s | | the patch passed |
| -1 :x: | javadoc | 0m 38s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6207/3/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04. |
| +1 :green_heart: | javadoc | 1m 3s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| -1 :x: | spotbugs | 1m 52s | [/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6207/3/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html) | hadoop-hdfs-project/hadoop-hdfs generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 :green_heart: | shadedclient | 21m 56s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 194m 35s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 37s | | The patch does not generate ASF License warnings. |
| | | 288m 18s | | |

| Reason | Tests |
|---:|:--|
| SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs |
| | Inconsistent synchronization of org.apache.hadoop.hdfs.server.namenode.FSNamesystem.manualSafeMode; locked 75% of time Unsynchronized access at FSNamesystem.java:[line 4538] |
| | Inconsistent synchronization of org.apache.hadoop.hdfs.server.namenode.FSNamesystem.resourceLowSafeMode; locked 75% of time Unsynchronized access at FSNamesystem.java:[line 4538] |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base:
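For readers unfamiliar with the SpotBugs warning above (IS2_INCONSISTENT_SYNC): it fires when a field is written under a lock in some places but read without it elsewhere, as reported here for `manualSafeMode` and `resourceLowSafeMode`. A typical remedy is to declare such flags `volatile` or to synchronize every access. The class below is a minimal illustrative sketch of that pattern, not the real `FSNamesystem` code or the fix eventually adopted in PR #6207:

```java
// Minimal sketch of the "inconsistent synchronization" pattern SpotBugs flags.
// Field names mirror the warning, but this class is purely illustrative.
public class SafeModeFlags {

  // Declaring the flags volatile makes the unsynchronized read in
  // isInSafeMode() safe, which silences the IS2_INCONSISTENT_SYNC warning.
  private volatile boolean manualSafeMode = false;
  private volatile boolean resourceLowSafeMode = false;

  // Writers hold the lock, matching the "locked 75% of time" access pattern.
  public synchronized void enterSafeMode(boolean resourceLow) {
    manualSafeMode = !resourceLow;
    resourceLowSafeMode = resourceLow;
  }

  public synchronized void leaveSafeMode() {
    manualSafeMode = false;
    resourceLowSafeMode = false;
  }

  // Read path: called from unsynchronized code — the access SpotBugs
  // complained about when the fields were plain (non-volatile) booleans.
  public boolean isInSafeMode() {
    return manualSafeMode || resourceLowSafeMode;
  }
}
```

`volatile` guarantees visibility of the latest write to unsynchronized readers; the `synchronized` writers additionally keep the two flags mutually consistent.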
[jira] [Commented] (HDFS-17223) Add journalnode maintenance node list
[ https://issues.apache.org/jira/browse/HDFS-17223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778714#comment-17778714 ]

ASF GitHub Bot commented on HDFS-17223:
---------------------------------------

hadoop-yetus commented on PR #6183:
URL: https://github.com/apache/hadoop/pull/6183#issuecomment-1775462405

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 51s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 49m 46s | | trunk passed |
| +1 :green_heart: | compile | 1m 29s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 1m 18s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 1m 12s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 24s | | trunk passed |
| -1 :x: | javadoc | 1m 10s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6183/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04. |
| +1 :green_heart: | javadoc | 1m 40s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 3m 32s | | trunk passed |
| +1 :green_heart: | shadedclient | 41m 43s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 14s | | the patch passed |
| +1 :green_heart: | compile | 1m 19s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 1m 19s | | the patch passed |
| +1 :green_heart: | compile | 1m 11s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | javac | 1m 11s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 3s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6183/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 222 unchanged - 0 fixed = 225 total (was 222) |
| +1 :green_heart: | mvnsite | 1m 18s | | the patch passed |
| -1 :x: | javadoc | 1m 0s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6183/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04. |
| -1 :x: | javadoc | 1m 35s | [/results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_382-8u382-ga-1~20.04.1-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6183/2/artifact/out/results-javadoc-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_382-8u382-ga-1~20.04.1-b05.txt) | hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_382-8u382-ga-1~20.04.1-b05 with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) |
| +1 :green_heart: | spotbugs | 3m 37s | | the patch passed |
| +1 :green_heart: | shadedclient | 42m 17s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 240m 54s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6183/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| -1 :x: | asflicense | 0m 42s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6183/2/artifact/out/results-asflicense.txt) | T
[jira] [Commented] (HDFS-17228) Improve documentation related to BlockManager
[ https://issues.apache.org/jira/browse/HDFS-17228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778699#comment-17778699 ]

ASF GitHub Bot commented on HDFS-17228:
---------------------------------------

ayushtkn commented on PR #6195:
URL: https://github.com/apache/hadoop/pull/6195#issuecomment-1775366643

@haiyang1987 I have merged your PR, thanx for taking care :-)
[jira] [Resolved] (HDFS-17235) Fix javadoc errors in BlockManager
[ https://issues.apache.org/jira/browse/HDFS-17235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ayush Saxena resolved HDFS-17235.
---------------------------------
    Fix Version/s: 3.4.0
     Hadoop Flags: Reviewed
       Resolution: Fixed
[jira] [Commented] (HDFS-17235) Fix javadoc errors in BlockManager
[ https://issues.apache.org/jira/browse/HDFS-17235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778696#comment-17778696 ]

ASF GitHub Bot commented on HDFS-17235:
---------------------------------------

ayushtkn merged PR #6214:
URL: https://github.com/apache/hadoop/pull/6214
[jira] [Commented] (HDFS-17235) Fix javadoc errors in BlockManager
[ https://issues.apache.org/jira/browse/HDFS-17235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778697#comment-17778697 ]

Ayush Saxena commented on HDFS-17235:
-------------------------------------

Committed to trunk. Thanx [~haiyang Hu] for the contribution!!!
[jira] [Commented] (HDFS-17228) Improve documentation related to BlockManager
[ https://issues.apache.org/jira/browse/HDFS-17228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778693#comment-17778693 ]

ASF GitHub Bot commented on HDFS-17228:
---------------------------------------

ayushtkn commented on PR #6195:
URL: https://github.com/apache/hadoop/pull/6195#issuecomment-1775349963

Lemme handle, I will commit this again & merge that PR as well
[jira] [Commented] (HDFS-17228) Improve documentation related to BlockManager
[ https://issues.apache.org/jira/browse/HDFS-17228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778689#comment-17778689 ]

ASF GitHub Bot commented on HDFS-17228:
---------------------------------------

haiyang1987 commented on PR #6195:
URL: https://github.com/apache/hadoop/pull/6195#issuecomment-1775316526

Hi @ayushtkn sir, I submitted https://github.com/apache/hadoop/pull/6214 before, to fix the javadoc errors.
[jira] [Commented] (HDFS-17228) Improve documentation related to BlockManager
[ https://issues.apache.org/jira/browse/HDFS-17228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778685#comment-17778685 ]

ASF GitHub Bot commented on HDFS-17228:
---------------------------------------

ayushtkn commented on PR #6195:
URL: https://github.com/apache/hadoop/pull/6195#issuecomment-1775284308

This creates javadoc issues with jdk-11, I am reverting this.
[jira] [Updated] (HDFS-17237) Remove IPCLoggerChannel Metrics when the logger is closed
[ https://issues.apache.org/jira/browse/HDFS-17237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HDFS-17237:
----------------------------------
    Labels: pull-request-available  (was: )

> Remove IPCLoggerChannel Metrics when the logger is closed
> ---------------------------------------------------------
>
>                 Key: HDFS-17237
>                 URL: https://issues.apache.org/jira/browse/HDFS-17237
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>            Reporter: Stephen O'Donnell
>            Assignee: Stephen O'Donnell
>            Priority: Major
>              Labels: pull-request-available
>
> When an IPCLoggerChannel is created (which is used to read from and write to the Journal nodes), it also creates a metrics object. When the namenodes fail over, the IPC loggers are all closed and reopened in read mode on the new SBNN, or the read mode is closed on the SBNN and re-opened in write mode. The closing frees the resources, discards the original IPCLoggerChannel object, and causes a new one to be created by the caller.
> If a Journal node was down and added back to the cluster with the same hostname but a different IP, then when the failover happens you end up with 4 metrics objects for the JNs:
> 1. One for each of the original 3 IPs
> 2. One for the new IP
> The old stale metric will remain forever and will no longer be updated, leading to confusing results in any tools that use the metrics for monitoring.
> This change ensures we un-register the metrics when the logger channel is closed; a new metrics object gets created when the new channel is created.
> I have added a small test to prove this, but also reproduced the original issue on a docker cluster and validated it is resolved with this change in place.
> For info, the logger metrics look like:
> {code}
> {
>   "name" : "Hadoop:service=NameNode,name=IPCLoggerChannel-192.168.32.8-8485",
>   "modelerType" : "IPCLoggerChannel-192.168.32.8-8485",
>   "tag.Context" : "dfs",
>   "tag.IsOutOfSync" : "false",
>   "tag.Hostname" : "957e3e66f10b",
>   "QueuedEditsSize" : 0,
>   "LagTimeMillis" : 0,
>   "CurrentLagTxns" : 0
> }
> {code}
> Note that the name includes the IP, rather than the hostname.
[jira] [Commented] (HDFS-17237) Remove IPCLoggerChannel Metrics when the logger is closed
[ https://issues.apache.org/jira/browse/HDFS-17237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778637#comment-17778637 ]

ASF GitHub Bot commented on HDFS-17237:
---------------------------------------

sodonnel opened a new pull request, #6217:
URL: https://github.com/apache/hadoop/pull/6217

### Description of PR

When an IPCLoggerChannel is created (which is used to read from and write to the Journal nodes), it also creates a metrics object. When the namenodes fail over, the IPC loggers are all closed and reopened in read mode on the new SBNN, or the read mode is closed on the SBNN and re-opened in write mode. The closing frees the resources, discards the original IPCLoggerChannel object, and causes a new one to be created by the caller.

If a Journal node was down and added back to the cluster with the same hostname but a different IP, then when the failover happens you end up with 4 metrics objects for the JNs:

1. One for each of the original 3 IPs
2. One for the new IP

The old stale metric will remain forever and will no longer be updated, leading to confusing results in any tools that use the metrics for monitoring.

This change ensures we un-register the metrics when the logger channel is closed; a new metrics object gets created when the new channel is created.

For info, the logger metrics look like:

```
{
  "name" : "Hadoop:service=NameNode,name=IPCLoggerChannel-192.168.32.8-8485",
  "modelerType" : "IPCLoggerChannel-192.168.32.8-8485",
  "tag.Context" : "dfs",
  "tag.IsOutOfSync" : "false",
  "tag.Hostname" : "957e3e66f10b",
  "QueuedEditsSize" : 0,
  "LagTimeMillis" : 0,
  "CurrentLagTxns" : 0
}
```

Note the name includes the IP, rather than the hostname.

### How was this patch tested?

I have added a small test to prove this, but also reproduced the original issue on a docker cluster and validated it is resolved with this change in place.
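The lifecycle bug described in the PR can be sketched in a few lines: a channel registers a metrics source keyed by its address on creation, and must remove that entry when it is closed, otherwise a channel re-created after failover with a new IP leaves a stale entry behind. The toy registry below stands in for Hadoop's metrics system; the class and key names are hypothetical and this is not the actual IPCLoggerChannel code from PR #6217:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the register-on-create / unregister-on-close
// lifecycle the change enforces. METRICS stands in for the real metrics
// system; without the remove() in close(), each old IP would leave a
// stale "IPCLoggerChannel-<ip>-<port>" entry after a failover.
public class LoggerChannelLifecycle {

  static final Map<String, Object> METRICS = new ConcurrentHashMap<>();

  static class Channel implements AutoCloseable {
    private final String metricsName;

    Channel(String ip, int port) {
      // Metric name is keyed by IP, matching the JMX names shown above.
      this.metricsName = "IPCLoggerChannel-" + ip + "-" + port;
      METRICS.put(metricsName, new Object()); // register on creation
    }

    @Override
    public void close() {
      // The fix: unregister when the channel closes, so a re-created
      // channel with a new IP does not accumulate stale entries.
      METRICS.remove(metricsName);
    }
  }
}
```

With this pattern, closing all channels during failover empties the registry, and only the channels re-created afterwards (with their current IPs) appear in the metrics output.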
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
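The register-on-create / un-register-on-close lifecycle described in the PR above can be sketched generically. This is a minimal illustration, not Hadoop's actual metrics2 API: the registry, channel, and all names here are hypothetical, but the fix has the same shape, so closing a channel removes its metrics entry instead of leaving a stale one behind after the address is re-resolved.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical registry standing in for the real metrics system: each
// logger channel registers a metrics object keyed by its resolved address.
class MetricsRegistry {
    private final Map<String, Object> sources = new ConcurrentHashMap<>();

    void register(String name, Object source) { sources.put(name, source); }
    void unregister(String name) { sources.remove(name); }
    int size() { return sources.size(); }
}

// Hypothetical channel: the metrics name embeds the IP, so a JN that comes
// back with a new IP produces a *new* metrics entry on re-creation.
class LoggerChannel implements AutoCloseable {
    private final MetricsRegistry registry;
    private final String metricsName;

    LoggerChannel(MetricsRegistry registry, String ip, int port) {
        this.registry = registry;
        this.metricsName = "IPCLoggerChannel-" + ip + "-" + port;
        registry.register(metricsName, new Object());
    }

    @Override
    public void close() {
        // The fix: free the metrics entry along with the other resources,
        // so the old-IP entry does not linger after a failover.
        registry.unregister(metricsName);
    }
}

public class Main {
    public static void main(String[] args) {
        MetricsRegistry registry = new MetricsRegistry();
        LoggerChannel ch = new LoggerChannel(registry, "192.168.32.8", 8485);
        ch.close();
        // A failover re-creates the channel against the JN's new IP; only
        // the new entry exists because close() removed the old one.
        new LoggerChannel(registry, "192.168.32.9", 8485);
        System.out.println(registry.size()); // prints 1
    }
}
```

Without the `unregister` call in `close()`, the final registry size would be 2, which is exactly the 4-metrics-for-3-JNs symptom the issue describes.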
[jira] [Updated] (HDFS-17237) Remove IPCLoggerChannel Metrics when the logger is closed
[ https://issues.apache.org/jira/browse/HDFS-17237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen O'Donnell updated HDFS-17237: - Summary: Remove IPCLoggerChannel Metrics when the logger is closed (was: Remove IPCLogger Metrics when the logger is closed) > Remove IPCLoggerChannel Metrics when the logger is closed > - > > Key: HDFS-17237 > URL: https://issues.apache.org/jira/browse/HDFS-17237 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Stephen O'Donnell >Assignee: Stephen O'Donnell >Priority: Major > > When an IPCLoggerChannel is created (which is used to read from and write to > the Journal nodes) it also creates a metrics object. When the namenodes > failover, the IPC loggers are all closed and reopened in read mode on the new > SBNN, or the read mode is closed on the SBNN and re-opened in write mode. The > closing frees the resources and discards the original IPCLoggerChannel object > and causes a new one to be created by the caller. > If a Journal node was down and added back to the cluster with the same > hostname, but a different IP, when the failover happens, you end up with 4 > metrics objects for the JNs: > 1. One for each of the original 3 IPs > 2. One for the new IP > The old stale metric will remain forever and will no longer be updated, > leading to confusing results in any tools that use the metrics for monitoring. > This change ensures we un-register the metrics when the logger channel is > closed, and a new metrics object gets created when the new channel is created. > I have added a small test to prove this, but also reproduced the original > issue on a docker cluster and validated it is resolved with this change in > place. 
> For info, the logger metrics look like: > {code} > { >"name" : "Hadoop:service=NameNode,name=IPCLoggerChannel-192.168.32.8-8485", > "modelerType" : "IPCLoggerChannel-192.168.32.8-8485", > "tag.Context" : "dfs", > "tag.IsOutOfSync" : "false", > "tag.Hostname" : "957e3e66f10b", > "QueuedEditsSize" : 0, > "LagTimeMillis" : 0, > "CurrentLagTxns" : 0 > } > {code} > Note the name includes the IP, rather than the hostname. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17218) NameNode should remove its excess blocks from the ExcessRedundancyMap When a DN registers
[ https://issues.apache.org/jira/browse/HDFS-17218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778636#comment-17778636 ] ASF GitHub Bot commented on HDFS-17218: --- haiyang1987 commented on PR #6176: URL: https://github.com/apache/hadoop/pull/6176#issuecomment-1775034700 > I think we can determine whether the replica in ExcessRedundancyMap has timed out based on the configured timeout parameter. As for the scenario you mentioned, I think this can be done directly: NN determines that DN1 has timed out and sends it another delete command. Will this have any adverse effects? Thanks @zhangshuyan0 for your detailed suggestions. I think this should work; I will submit the MR as soon as possible, thanks again. > NameNode should remove its excess blocks from the ExcessRedundancyMap When a > DN registers > - > > Key: HDFS-17218 > URL: https://issues.apache.org/jira/browse/HDFS-17218 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Haiyang Hu >Assignee: Haiyang Hu >Priority: Major > Labels: pull-request-available > Attachments: image-2023-10-12-15-52-52-336.png > > > We found that a DN will lose all of its pending DNA_INVALIDATE blocks if it > restarts. > *Root cause* > The DN uses asynchronous deletion, so it may have many pending deletion > blocks in memory. > When the DN restarts, these cached blocks may be lost. This causes some blocks in > the NameNode's excess map to be leaked, and results in many > blocks having more replicas than expected. > *Solution* > The NameNode should remove the DN's excess blocks from the > ExcessRedundancyMap when the DN registers; > this approach will ensure that when processing the DN's full block report, > 'processExtraRedundancy' can be performed according to the actual state of the > blocks. 
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
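The timeout idea discussed in the comment above, where the NN re-sends a delete command if a DN still reports an excess replica past a configured deadline, can be sketched as follows. All class and method names here are hypothetical illustrations, not the actual BlockManager code:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical tracker for excess replicas handed to a DN for deletion.
// The NN records when each delete command was first sent; if the entry
// outlives the configured timeout (e.g. because the DN restarted and lost
// its pending DNA_INVALIDATE queue), the delete should be re-issued.
public class ExcessRedundancyTracker {
    private final long timeoutMs;
    // blockId -> time the delete command was first sent
    private final Map<Long, Long> pendingDeletes = new HashMap<>();

    public ExcessRedundancyTracker(long timeoutMs) {
        this.timeoutMs = timeoutMs;
    }

    /** Record that a delete command for this excess replica was sent. */
    public void markExcess(long blockId, long nowMs) {
        pendingDeletes.putIfAbsent(blockId, nowMs);
    }

    /** True if the delete has been outstanding too long and should be re-sent. */
    public boolean isTimedOut(long blockId, long nowMs) {
        Long sentAt = pendingDeletes.get(blockId);
        return sentAt != null && nowMs - sentAt > timeoutMs;
    }

    /** Called when the DN's block report confirms the replica is gone. */
    public void clear(long blockId) {
        pendingDeletes.remove(blockId);
    }
}
```

The re-send path the commenters discuss would then be: on each full block report, check `isTimedOut` for every still-excess replica and queue another delete command for the timed-out ones.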
[jira] [Created] (HDFS-17237) Remove IPCLogger Metrics when the logger is closed
Stephen O'Donnell created HDFS-17237: Summary: Remove IPCLogger Metrics when the logger is closed Key: HDFS-17237 URL: https://issues.apache.org/jira/browse/HDFS-17237 Project: Hadoop HDFS Issue Type: Bug Components: namenode Reporter: Stephen O'Donnell Assignee: Stephen O'Donnell When an IPCLoggerChannel is created (which is used to read from and write to the Journal nodes) it also creates a metrics object. When the namenodes failover, the IPC loggers are all closed and reopened in read mode on the new SBNN, or the read mode is closed on the SBNN and re-opened in write mode. The closing frees the resources and discards the original IPCLoggerChannel object and causes a new one to be created by the caller. If a Journal node was down and added back to the cluster with the same hostname, but a different IP, when the failover happens, you end up with 4 metrics objects for the JNs: 1. One for each of the original 3 IPs 2. One for the new IP The old stale metric will remain forever and will no longer be updated, leading to confusing results in any tools that use the metrics for monitoring. This change ensures we un-register the metrics when the logger channel is closed, and a new metrics object gets created when the new channel is created. I have added a small test to prove this, but also reproduced the original issue on a docker cluster and validated it is resolved with this change in place. For info, the logger metrics look like: {code} { "name" : "Hadoop:service=NameNode,name=IPCLoggerChannel-192.168.32.8-8485", "modelerType" : "IPCLoggerChannel-192.168.32.8-8485", "tag.Context" : "dfs", "tag.IsOutOfSync" : "false", "tag.Hostname" : "957e3e66f10b", "QueuedEditsSize" : 0, "LagTimeMillis" : 0, "CurrentLagTxns" : 0 } {code} Note the name includes the IP, rather than the hostname. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-17235) Fix javadoc errors in BlockManager
[ https://issues.apache.org/jira/browse/HDFS-17235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17778634#comment-17778634 ] ASF GitHub Bot commented on HDFS-17235: --- haiyang1987 commented on PR #6214: URL: https://github.com/apache/hadoop/pull/6214#issuecomment-1775030003 Thanks @slfan1989 for your review. > Fix javadoc errors in BlockManager > -- > > Key: HDFS-17235 > URL: https://issues.apache.org/jira/browse/HDFS-17235 > Project: Hadoop HDFS > Issue Type: Improvement > Components: documentation >Reporter: Haiyang Hu >Assignee: Haiyang Hu >Priority: Major > Labels: pull-request-available > > There are 2 errors in BlockManager.java > https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6194/4/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt > {code:java} > [ERROR] > /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-6194/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:153: > error: reference not found > [ERROR] * by {@link DFS_NAMENODE_MAINTENANCE_REPLICATION_MIN_KEY}. This > number has to <= > [ERROR] ^ > [ERROR] > /home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-6194/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:154: > error: reference not found > [ERROR] * {@link DFS_NAMENODE_REPLICATION_MIN_KEY}. > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
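The "reference not found" javadoc errors quoted in HDFS-17235 above arise because `{@link CONSTANT_NAME}` only resolves if the constant is in scope where the comment lives. One common fix (the actual patch may differ) is to qualify the reference with its declaring class. The `ConfigKeys` class below is a hypothetical stand-in for illustration, not Hadoop's real `DFSConfigKeys`:

```java
// Hypothetical config-key holder, standing in for the class that actually
// declares the constant in Hadoop.
class ConfigKeys {
    static final String DFS_NAMENODE_REPLICATION_MIN_KEY =
        "dfs.namenode.replication.min";
}

public class Main {
    /**
     * The minimum live replica count is configured
     * by {@link ConfigKeys#DFS_NAMENODE_REPLICATION_MIN_KEY}.
     * An unqualified {@code DFS_NAMENODE_REPLICATION_MIN_KEY} inside
     * a bare at-link tag would fail javadoc's reference resolution here,
     * which is the error the CI log reports.
     */
    static final int DEFAULT_REPLICATION_MIN = 1;

    public static void main(String[] args) {
        System.out.println(ConfigKeys.DFS_NAMENODE_REPLICATION_MIN_KEY);
    }
}
```

Alternatively, non-linking text can simply use `{@code ...}`, which javadoc never tries to resolve.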