[jira] [Commented] (HDFS-16098) ERROR tools.DiskBalancerCLI: java.lang.IllegalArgumentException
[ https://issues.apache.org/jira/browse/HDFS-16098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371810#comment-17371810 ]

tomscut commented on HDFS-16098:
--------------------------------

Maybe you should post the stack trace so that others can analyze the problem. I tested it on branch-3.1 and it worked fine.

!on-branch-3.1.jpg|width=723,height=91!

> ERROR tools.DiskBalancerCLI: java.lang.IllegalArgumentException
> ---------------------------------------------------------------
>
>                 Key: HDFS-16098
>                 URL: https://issues.apache.org/jira/browse/HDFS-16098
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: diskbalancer
>    Affects Versions: 2.6.0
>         Environment: VERSION info: Hadoop 2.6.0-cdh5.14.4
>            Reporter: wangyanfu
>            Priority: Blocker
>              Labels: diskbalancer
>             Fix For: 2.6.0
>
>         Attachments: on-branch-3.1.jpg
>
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> When I tried to run:
>
>   hdfs diskbalancer -plan $(hostname -f)
>
> I got this notice:
>
>   21/06/30 11:30:41 ERROR tools.DiskBalancerCLI: java.lang.IllegalArgumentException
>
> Then I tried writing the real hostname into the command; it did not work and gave the same error.
> I also tried using --plan instead of -plan; it did not work and gave the same error.
> I found this [link|https://community.cloudera.com/t5/Support-Questions/Error-trying-to-balance-disks-on-node/m-p/59989#M54850], but there is no resolution there. Can somebody help me?

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
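For readers hitting the same IllegalArgumentException: the DiskBalancer tool was introduced in Apache Hadoop 3.0 (HDFS-1312), and on releases that ship it the DataNodes must have it enabled before a plan can be computed. A hedged sketch of the relevant hdfs-site.xml fragment follows (property name from the upstream DiskBalancer docs; whether the reporter's 2.6.0-cdh build supports the tool at all is not confirmed in this thread):

```xml
<!-- hdfs-site.xml on the DataNodes: allow the DiskBalancer to run.
     Early 3.x releases reject diskbalancer commands when this is false. -->
<property>
  <name>dfs.disk.balancer.enabled</name>
  <value>true</value>
</property>
```

After changing the setting, the DataNodes need a restart before `hdfs diskbalancer -plan <hostname>` can succeed.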
[jira] [Updated] (HDFS-16098) ERROR tools.DiskBalancerCLI: java.lang.IllegalArgumentException
[ https://issues.apache.org/jira/browse/HDFS-16098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

tomscut updated HDFS-16098:
---------------------------
    Attachment: on-branch-3.1.jpg

(Issue details as in the preceding notification for HDFS-16098.)
[jira] [Created] (HDFS-16098) ERROR tools.DiskBalancerCLI: java.lang.IllegalArgumentException
wangyanfu created HDFS-16098:
-----------------------------

             Summary: ERROR tools.DiskBalancerCLI: java.lang.IllegalArgumentException
                 Key: HDFS-16098
                 URL: https://issues.apache.org/jira/browse/HDFS-16098
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: diskbalancer
    Affects Versions: 2.6.0
         Environment: VERSION info: Hadoop 2.6.0-cdh5.14.4
            Reporter: wangyanfu
             Fix For: 2.6.0


When I tried to run:

  hdfs diskbalancer -plan $(hostname -f)

I got this notice:

  21/06/30 11:30:41 ERROR tools.DiskBalancerCLI: java.lang.IllegalArgumentException

Then I tried writing the real hostname into the command; it did not work and gave the same error.
I also tried using --plan instead of -plan; it did not work and gave the same error.
I found this [link|https://community.cloudera.com/t5/Support-Questions/Error-trying-to-balance-disks-on-node/m-p/59989#M54850], but there is no resolution there. Can somebody help me?
[jira] [Work logged] (HDFS-16096) Delete useless method DirectoryWithQuotaFeature#setQuota
[ https://issues.apache.org/jira/browse/HDFS-16096?focusedWorklogId=616853=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616853 ]

ASF GitHub Bot logged work on HDFS-16096:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 30/Jun/21 01:30
            Start Date: 30/Jun/21 01:30
    Worklog Time Spent: 10m
      Work Description: hadoop-yetus commented on pull request #3156:
URL: https://github.com/apache/hadoop/pull/3156#issuecomment-871029590

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------|:-------:|
| +0 :ok: | reexec | 1m 4s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 37m 41s | | trunk passed |
| +1 :green_heart: | compile | 1m 49s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 9s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 37s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 5s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 37s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 52s | | trunk passed |
| +1 :green_heart: | shadedclient | 22m 14s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 1m 15s | | the patch passed |
| +1 :green_heart: | compile | 1m 19s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 1m 19s | | the patch passed |
| +1 :green_heart: | compile | 1m 11s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 1m 11s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 54s | | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 12 unchanged - 1 fixed = 12 total (was 13) |
| +1 :green_heart: | mvnsite | 1m 17s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 49s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 22s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 21s | | the patch passed |
| +1 :green_heart: | shadedclient | 18m 43s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| -1 :x: | unit | 339m 47s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3156/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 37s | | The patch does not generate ASF License warnings. |
| | | 441m 16s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestDFSShell |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
| | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3156/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3156 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 83f46d7a0015 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / e87bb77f8d7ec648bb186bb7322ed1df2c18358a |
[jira] [Work logged] (HDFS-15790) Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist
[ https://issues.apache.org/jira/browse/HDFS-15790?focusedWorklogId=616842=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616842 ]

ASF GitHub Bot logged work on HDFS-15790:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 30/Jun/21 01:12
            Start Date: 30/Jun/21 01:12
    Worklog Time Spent: 10m
      Work Description: belugabehr closed pull request #2650:
URL: https://github.com/apache/hadoop/pull/2650

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
-------------------

    Worklog Id:     (was: 616842)
    Time Spent: 4h 20m  (was: 4h 10m)

> Make ProtobufRpcEngineProtos and ProtobufRpcEngineProtos2 Co-Exist
> ------------------------------------------------------------------
>
>                 Key: HDFS-15790
>                 URL: https://issues.apache.org/jira/browse/HDFS-15790
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: David Mollitor
>            Assignee: Vinayakumar B
>            Priority: Critical
>              Labels: pull-request-available, release-blocker
>             Fix For: 3.3.1, 3.4.0
>
>          Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Changing from Protobuf 2 to Protobuf 3 broke some things in the Apache Hive
> project. This was not a good change to make between minor versions with
> regard to backwards compatibility for downstream projects.
> Additionally, these two frameworks are not drop-in replacements; they have
> some differences. Also, Protobuf 2 is not deprecated, so let us have both
> protocols available at the same time. In Hadoop 4.x, Protobuf 2 support can
> be dropped.
[jira] [Work logged] (HDFS-16096) Delete useless method DirectoryWithQuotaFeature#setQuota
[ https://issues.apache.org/jira/browse/HDFS-16096?focusedWorklogId=616655=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616655 ]

ASF GitHub Bot logged work on HDFS-16096:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 29/Jun/21 18:09
            Start Date: 29/Jun/21 18:09
    Worklog Time Spent: 10m
      Work Description: virajjasani commented on pull request #3156:
URL: https://github.com/apache/hadoop/pull/3156#issuecomment-870807883

+1 (non-binding). Re-triggered the build.

Issue Time Tracking
-------------------

    Worklog Id:     (was: 616655)
    Time Spent: 1h  (was: 50m)

> Delete useless method DirectoryWithQuotaFeature#setQuota
> ---------------------------------------------------------
>
>                 Key: HDFS-16096
>                 URL: https://issues.apache.org/jira/browse/HDFS-16096
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs
>            Reporter: Xiangyi Zhu
>            Assignee: Xiangyi Zhu
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> Delete the unused method DirectoryWithQuotaFeature#setQuota.
[jira] [Work logged] (HDFS-16095) Add lsQuotaList command and getQuotaListing api for hdfs quota
[ https://issues.apache.org/jira/browse/HDFS-16095?focusedWorklogId=616550=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616550 ]

ASF GitHub Bot logged work on HDFS-16095:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 29/Jun/21 15:29
            Start Date: 29/Jun/21 15:29
    Worklog Time Spent: 10m
      Work Description: hadoop-yetus commented on pull request #3155:
URL: https://github.com/apache/hadoop/pull/3155#issuecomment-870699723

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------|:-------:|
| +0 :ok: | reexec | 0m 48s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | buf | 0m 0s | | buf was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | _ trunk Compile Tests _ | | | |
| +0 :ok: | mvndep | 12m 5s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 22m 58s | | trunk passed |
| +1 :green_heart: | compile | 23m 12s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 19m 12s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 4m 6s | | trunk passed |
| +1 :green_heart: | mvnsite | 4m 57s | | trunk passed |
| +1 :green_heart: | javadoc | 3m 40s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 4m 58s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 9m 50s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 59s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +0 :ok: | mvndep | 0m 24s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 31s | | the patch passed |
| +1 :green_heart: | compile | 21m 54s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| -1 :x: | cc | 21m 54s | [/results-compile-cc-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3155/1/artifact/out/results-compile-cc-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 22 new + 301 unchanged - 22 fixed = 323 total (was 323) |
| -1 :x: | javac | 21m 54s | [/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3155/1/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 1 new + 1981 unchanged - 0 fixed = 1982 total (was 1981) |
| +1 :green_heart: | compile | 19m 23s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| -1 :x: | cc | 19m 23s | [/results-compile-cc-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3155/1/artifact/out/results-compile-cc-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 15 new + 308 unchanged - 15 fixed = 323 total (was 323) |
| -1 :x: | javac | 19m 23s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3155/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 3 new + 1855 unchanged - 2 fixed = 1858 total (was 1857) |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3155/1/artifact/out/blanks-eol.txt) | The patch has 3 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -0 :warning: | checkstyle | 4m 1s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3155/1/artifact/out/results-checkstyle-root.txt) |
[jira] [Work logged] (HDFS-16096) Delete useless method DirectoryWithQuotaFeature#setQuota
[ https://issues.apache.org/jira/browse/HDFS-16096?focusedWorklogId=616532=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616532 ]

ASF GitHub Bot logged work on HDFS-16096:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 29/Jun/21 15:01
            Start Date: 29/Jun/21 15:01
    Worklog Time Spent: 10m
      Work Description: hadoop-yetus commented on pull request #3156:
URL: https://github.com/apache/hadoop/pull/3156#issuecomment-870677148

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------|:-------:|
| +0 :ok: | reexec | 1m 7s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 47m 8s | | trunk passed |
| +1 :green_heart: | compile | 2m 7s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 58s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 19s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 57s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 17s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 54s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 4m 40s | | trunk passed |
| -1 :x: | shadedclient | 23m 5s | | branch has errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| -1 :x: | mvninstall | 0m 24s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3156/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. |
| -1 :x: | compile | 0m 23s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3156/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. |
| -1 :x: | javac | 0m 23s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3156/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. |
| +1 :green_heart: | compile | 1m 53s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 1m 53s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 16s | | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 12 unchanged - 1 fixed = 12 total (was 13) |
| +1 :green_heart: | mvnsite | 1m 48s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 17s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 46s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 4m 49s | | the patch passed |
| -1 :x: | shadedclient | 5m 52s | | patch has errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| -1 :x: | unit | 344m 14s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3156/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. |
| | | 446m 52s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestDFSShell |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
| | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=616489=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616489 ]

ASF GitHub Bot logged work on HDFS-16086:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 29/Jun/21 14:06
            Start Date: 29/Jun/21 14:06
    Worklog Time Spent: 10m
      Work Description: hadoop-yetus commented on pull request #3136:
URL: https://github.com/apache/hadoop/pull/3136#issuecomment-870507562

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------|:-------:|
| +0 :ok: | reexec | 0m 48s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 33m 31s | | trunk passed |
| +1 :green_heart: | compile | 1m 25s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 16s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 2s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 25s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 26s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 15s | | trunk passed |
| +1 :green_heart: | shadedclient | 18m 58s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 1m 16s | | the patch passed |
| +1 :green_heart: | compile | 1m 18s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| -1 :x: | javac | 1m 18s | [/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/3/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 36 new + 467 unchanged - 36 fixed = 503 total (was 503) |
| +1 :green_heart: | compile | 1m 9s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 1m 9s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 54s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/3/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 406 unchanged - 0 fixed = 407 total (was 406) |
| +1 :green_heart: | mvnsite | 1m 16s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 49s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 21s | | the patch passed |
| +1 :green_heart: | shadedclient | 18m 51s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| -1 :x: | unit | 347m 56s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. |
| | | 440m 13s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
| | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
| | hadoop.hdfs.TestDFSShell |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatus |

| Subsystem | Report/Notes |
[jira] [Commented] (HDFS-16083) Forbid Observer NameNode trigger active namenode log roll
[ https://issues.apache.org/jira/browse/HDFS-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371403#comment-17371403 ]

lei w commented on HDFS-16083:
------------------------------

Added a test in HDFS-16083.003.patch.

> Forbid Observer NameNode trigger active namenode log roll
> ----------------------------------------------------------
>
>                 Key: HDFS-16083
>                 URL: https://issues.apache.org/jira/browse/HDFS-16083
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namanode
>            Reporter: lei w
>            Assignee: lei w
>            Priority: Minor
>         Attachments: HDFS-16083.001.patch, HDFS-16083.002.patch, HDFS-16083.003.patch, activeRollEdits.png
>
> When Observer NameNodes are enabled in the cluster, the Active NameNode
> receives rollEditLog RPC requests from both the Standby NameNode and the
> Observer NameNode within a short period. The Observer NameNode's rollEditLog
> request is a repeated operation, so should we forbid the Observer NameNode
> from triggering an Active NameNode log roll? We have 'dfs.ha.log-roll.period'
> configured as 300 (5 minutes), and the Active NameNode receives rollEditLog
> RPCs as shown in activeRollEdits.png.
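For context on the number cited above: `dfs.ha.log-roll.period` is the interval, in seconds, at which a standby-role NameNode asks the Active to roll its edit log (and, per this report, Observers issue the same request). The reporter's setting would look roughly like this hdfs-site.xml fragment (a sketch for illustration, not taken from the attached patches):

```xml
<!-- How often (seconds) a Standby/Observer NameNode triggers an
     edit-log roll on the Active. 300s matches the report's 5 minutes. -->
<property>
  <name>dfs.ha.log-roll.period</name>
  <value>300</value>
</property>
```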
[jira] [Updated] (HDFS-16083) Forbid Observer NameNode trigger active namenode log roll
[ https://issues.apache.org/jira/browse/HDFS-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

lei w updated HDFS-16083:
-------------------------
    Attachment: HDFS-16083.003.patch

(Issue details as in the preceding notification for HDFS-16083.)
[jira] [Work logged] (HDFS-16089) EC: Add metric EcReconstructionValidateTimeMillis for StripedBlockReconstructor
[ https://issues.apache.org/jira/browse/HDFS-16089?focusedWorklogId=616439=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616439 ]

ASF GitHub Bot logged work on HDFS-16089:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 29/Jun/21 13:59
            Start Date: 29/Jun/21 13:59
    Worklog Time Spent: 10m
      Work Description: tomscut commented on pull request #3146:
URL: https://github.com/apache/hadoop/pull/3146#issuecomment-870478949

   > Merged. Thanks @tomscut

   Thanks @jojochuang again.

Issue Time Tracking
-------------------

    Worklog Id:     (was: 616439)
    Time Spent: 1h 40m  (was: 1.5h)

> EC: Add metric EcReconstructionValidateTimeMillis for StripedBlockReconstructor
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-16089
>                 URL: https://issues.apache.org/jira/browse/HDFS-16089
>             Project: Hadoop HDFS
>          Issue Type: Wish
>            Reporter: tomscut
>            Assignee: tomscut
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.4.0, 3.3.2
>
>          Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Add the metric EcReconstructionValidateTimeMillis for StripedBlockReconstructor,
> so that we can count the elapsed time for striped block reconstruction.
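The metric described above is an elapsed-time counter around the validation step of striped block reconstruction. A minimal, hedged sketch of that timing pattern (illustrative names only; the actual patch wires this into Hadoop's DataNode metrics classes rather than a standalone class like this):

```java
/**
 * Sketch of an accumulated elapsed-time metric, in the spirit of
 * EcReconstructionValidateTimeMillis: time one step of work and add
 * the elapsed milliseconds to a running counter.
 */
final class ReconstructionTimerSketch {
  private long validateTimeMillis; // accumulated, like a *TimeMillis metric

  /** Run the validation step and record how long it took. */
  void timedValidate(Runnable validateStep) {
    long start = System.nanoTime(); // monotonic clock, so wall-clock jumps don't skew the metric
    try {
      validateStep.run();
    } finally {
      validateTimeMillis += (System.nanoTime() - start) / 1_000_000L;
    }
  }

  long getValidateTimeMillis() {
    return validateTimeMillis;
  }
}
```

Paired with a count of validation calls, an accumulated-milliseconds counter like this lets an operator derive average validation latency per reconstruction.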
[jira] [Work logged] (HDFS-16092) Avoid creating LayoutFlags redundant objects
[ https://issues.apache.org/jira/browse/HDFS-16092?focusedWorklogId=616432=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616432 ]

ASF GitHub Bot logged work on HDFS-16092:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 29/Jun/21 13:58
            Start Date: 29/Jun/21 13:58
    Worklog Time Spent: 10m
      Work Description: jojochuang merged pull request #3150:
URL: https://github.com/apache/hadoop/pull/3150

Issue Time Tracking
-------------------

    Worklog Id:     (was: 616432)
    Time Spent: 1h 40m  (was: 1.5h)

> Avoid creating LayoutFlags redundant objects
> --------------------------------------------
>
>                 Key: HDFS-16092
>                 URL: https://issues.apache.org/jira/browse/HDFS-16092
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Viraj Jasani
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0, 3.2.3, 3.3.2
>
>          Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> We use LayoutFlags to represent features that the EditLog/FSImage can support.
> The utility writes an int (0) to a given OutputStream, and if the EditLog/FSImage
> supports layout flags, it reads the value back from the InputStream to confirm
> that there are no unsupported feature flags (a non-zero int). However, we also
> create and return a new LayoutFlags object, which is never used, because the
> class is just a utility for reading from and writing to the given stream. We
> should stop creating these redundant objects when reading from an InputStream
> via the LayoutFlags#read utility.
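To make the description concrete, here is a hedged, self-contained sketch of the LayoutFlags idea (an approximation of the behavior described above, not the actual Hadoop source): the writer emits a single int 0, and the reader only validates it, so having `read` return a freshly allocated LayoutFlags object would be pure garbage per call — which is what the change removes.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

/**
 * Simplified sketch of the LayoutFlags utility described in HDFS-16092
 * (names and shape are an approximation, not the Hadoop source).
 */
final class LayoutFlagsSketch {
  private LayoutFlagsSketch() { } // pure utility: nothing to instantiate

  /** Write the "no feature flags" marker: a single int 0. */
  static void write(OutputStream out) throws IOException {
    new DataOutputStream(out).writeInt(0);
  }

  /**
   * Validate the marker. Returning void (instead of a new LayoutFlags
   * object) avoids one throwaway allocation per call, which is the
   * point of the change.
   */
  static void read(InputStream in) throws IOException {
    int flags = new DataInputStream(in).readInt();
    if (flags != 0) {
      throw new IOException("Unsupported layout flags: " + flags);
    }
  }
}
```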
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=616429=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616429 ] ASF GitHub Bot logged work on HDFS-16086: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:57 Start Date: 29/Jun/21 13:57 Worklog Time Spent: 10m Work Description: tomscut commented on pull request #3136: URL: https://github.com/apache/hadoop/pull/3136#issuecomment-870513731 These failed UTs work fine locally. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616429) Time Spent: 2h 20m (was: 2h 10m) > Add volume information to datanode log for tracing > -- > > Key: HDFS-16086 > URL: https://issues.apache.org/jira/browse/HDFS-16086 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Attachments: CreatingRbw.jpg, Received.jpg > > Time Spent: 2h 20m > Remaining Estimate: 0h > > To keep track of the block in volume, we can add the volume information to > the datanode log. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16092) Avoid creating LayoutFlags redundant objects
[ https://issues.apache.org/jira/browse/HDFS-16092?focusedWorklogId=616424=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616424 ] ASF GitHub Bot logged work on HDFS-16092: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:57 Start Date: 29/Jun/21 13:57 Worklog Time Spent: 10m Work Description: virajjasani commented on pull request #3150: URL: https://github.com/apache/hadoop/pull/3150#issuecomment-870269640 Thanks for the review @jojochuang. The failed tests don't seem related, they are mostly timeout and OOM related flakies. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616424) Time Spent: 1.5h (was: 1h 20m) > Avoid creating LayoutFlags redundant objects > > > Key: HDFS-16092 > URL: https://issues.apache.org/jira/browse/HDFS-16092 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.2.3, 3.3.2 > > Time Spent: 1.5h > Remaining Estimate: 0h > > We use LayoutFlags to represent features that EditLog/FSImage can support. > The utility helps write int (0) to given OutputStream and if EditLog/FSImage > supports Layout flags, they read the value from InputStream to confirm > whether there are unsupported feature flags (non zero int). However, we also > create and return new object of LayoutFlags, which is not used anywhere > because it's just a utility to read/write to/from given stream. We should > remove such redundant objects from getting created while reading from > InputStream using LayoutFlags#read utility. 
[jira] [Work logged] (HDFS-16089) EC: Add metric EcReconstructionValidateTimeMillis for StripedBlockReconstructor
[ https://issues.apache.org/jira/browse/HDFS-16089?focusedWorklogId=616409=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616409 ] ASF GitHub Bot logged work on HDFS-16089: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:55 Start Date: 29/Jun/21 13:55 Worklog Time Spent: 10m Work Description: jojochuang commented on pull request #3146: URL: https://github.com/apache/hadoop/pull/3146#issuecomment-870466195 Merged. Thanks @tomscut -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616409) Time Spent: 1h 20m (was: 1h 10m) > EC: Add metric EcReconstructionValidateTimeMillis for > StripedBlockReconstructor > --- > > Key: HDFS-16089 > URL: https://issues.apache.org/jira/browse/HDFS-16089 > Project: Hadoop HDFS > Issue Type: Wish >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0, 3.3.2 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > Add metric EcReconstructionValidateTimeMillis for StripedBlockReconstructor, > so that we can count the elapsed time for striped block reconstructing. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16089) EC: Add metric EcReconstructionValidateTimeMillis for StripedBlockReconstructor
[ https://issues.apache.org/jira/browse/HDFS-16089?focusedWorklogId=616413=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616413 ] ASF GitHub Bot logged work on HDFS-16089: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:55 Start Date: 29/Jun/21 13:55 Worklog Time Spent: 10m Work Description: jojochuang merged pull request #3146: URL: https://github.com/apache/hadoop/pull/3146 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616413) Time Spent: 1.5h (was: 1h 20m) > EC: Add metric EcReconstructionValidateTimeMillis for > StripedBlockReconstructor > --- > > Key: HDFS-16089 > URL: https://issues.apache.org/jira/browse/HDFS-16089 > Project: Hadoop HDFS > Issue Type: Wish >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0, 3.3.2 > > Time Spent: 1.5h > Remaining Estimate: 0h > > Add metric EcReconstructionValidateTimeMillis for StripedBlockReconstructor, > so that we can count the elapsed time for striped block reconstructing. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=616403=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616403 ] ASF GitHub Bot logged work on HDFS-16086: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:54 Start Date: 29/Jun/21 13:54 Worklog Time Spent: 10m Work Description: jojochuang commented on a change in pull request #3136: URL: https://github.com/apache/hadoop/pull/3136#discussion_r660246410 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/Replica.java ## @@ -19,49 +19,56 @@ import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState; +import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; /** * This represents block replicas which are stored in DataNode. */ @InterfaceAudience.Private public interface Replica { /** Get the block ID */ - public long getBlockId(); + long getBlockId(); /** Get the generation stamp */ - public long getGenerationStamp(); + long getGenerationStamp(); /** * Get the replica state * @return the replica state */ - public ReplicaState getState(); + ReplicaState getState(); /** * Get the number of bytes received * @return the number of bytes that have been received */ - public long getNumBytes(); + long getNumBytes(); /** * Get the number of bytes that have written to disk * @return the number of bytes that have written to disk */ - public long getBytesOnDisk(); + long getBytesOnDisk(); /** * Get the number of bytes that are visible to readers * @return the number of bytes that are visible to readers */ - public long getVisibleLength(); + long getVisibleLength(); Review comment: please do not change these interface methods. These changes are not required and makes backport harder. 
## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java ## @@ -587,7 +587,7 @@ public void readBlock(final ExtendedBlock block, final String clientTraceFmt = clientName.length() > 0 && ClientTraceLog.isInfoEnabled() ? String.format(DN_CLIENTTRACE_FORMAT, localAddress, remoteAddress, -"%d", "HDFS_READ", clientName, "%d", +"", "%d", "HDFS_READ", clientName, "%d", Review comment: looks like redundant change? ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java ## @@ -1631,6 +1633,7 @@ public ReplicaHandler createRbw( if (ref == null) { ref = volumes.getNextVolume(storageType, storageId, b.getNumBytes()); } + LOG.info("Creating Rbw, block: {} on volume: {}", b, ref.getVolume()); Review comment: is this really necessary? IMO logging one message for every rbw is just too much. ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java ## @@ -929,7 +929,7 @@ public void writeBlock(final ExtendedBlock block, if (isDatanode || stage == BlockConstructionStage.PIPELINE_CLOSE_RECOVERY) { datanode.closeBlock(block, null, storageUuid, isOnTransientStorage); -LOG.info("Received {} src: {} dest: {} of size {}", +LOG.info("Received {} src: {} dest: {} volume: {} of size {}", Review comment: missing the parameter for volume. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616403) Time Spent: 2h 10m (was: 2h) > Add volume information to datanode log for tracing > -- > > Key: HDFS-16086 > URL: https://issues.apache.org/jira/browse/HDFS-16086 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Attachments: CreatingRbw.jpg, Received.jpg > > Time Spent: 2h 10m > Remaining Estimate: 0h > > To keep track of the block in volume, we can add the volume information to > the datanode log. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail:
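The review exchange above is about adding a volume field to `DN_CLIENTTRACE_FORMAT`: once a format string gains a specifier, every `String.format` call site must supply the extra argument or fail at runtime. The sketch below uses a simplified stand-in format (field names are illustrative, not the real Hadoop constant) to show why each call site needed touching.

```java
// Simplified stand-in for a client-trace format after a volume field
// (the last %s) was added. Every String.format call site must now pass
// one more argument; a stale call site fails at format time.
public class TraceFormatSketch {
    static final String FMT = "src: %s, dest: %s, op: %s, volume: %s";

    static String render(String src, String dest, String op, String volume) {
        return String.format(FMT, src, dest, op, volume);
    }

    public static void main(String[] args) {
        // Updated call site: supplies the new volume argument.
        System.out.println(render("127.0.0.1:50010", "127.0.0.1:9866",
                                  "HDFS_READ", "/data/1/dfs/dn"));
        try {
            // A call site not updated for the new field throws.
            String.format(FMT, "src", "dest", "HDFS_READ");
        } catch (java.util.MissingFormatArgumentException e) {
            System.out.println("stale call site: " + e.getMessage());
        }
    }
}
```

Note the contrast with the SLF4J `{}` placeholders in the other review comment ("missing the parameter for volume"): SLF4J does not throw on an arity mismatch, it simply leaves the extra `{}` unfilled in the logged message, which is why such mismatches are easy to miss in review.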
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=616395=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616395 ] ASF GitHub Bot logged work on HDFS-16086: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:53 Start Date: 29/Jun/21 13:53 Worklog Time Spent: 10m Work Description: tomscut commented on a change in pull request #3136: URL: https://github.com/apache/hadoop/pull/3136#discussion_r660249250 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/Replica.java ## @@ -19,49 +19,56 @@ import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.hdfs.server.common.HdfsServerConstants.ReplicaState; +import org.apache.hadoop.hdfs.server.datanode.fsdataset.FsVolumeSpi; /** * This represents block replicas which are stored in DataNode. */ @InterfaceAudience.Private public interface Replica { /** Get the block ID */ - public long getBlockId(); + long getBlockId(); /** Get the generation stamp */ - public long getGenerationStamp(); + long getGenerationStamp(); /** * Get the replica state * @return the replica state */ - public ReplicaState getState(); + ReplicaState getState(); /** * Get the number of bytes received * @return the number of bytes that have been received */ - public long getNumBytes(); + long getNumBytes(); /** * Get the number of bytes that have written to disk * @return the number of bytes that have written to disk */ - public long getBytesOnDisk(); + long getBytesOnDisk(); /** * Get the number of bytes that are visible to readers * @return the number of bytes that are visible to readers */ - public long getVisibleLength(); + long getVisibleLength(); Review comment: Thanks @jojochuang for your review. This change is to fix checkstyle. I will restore it. 
## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java ## @@ -587,7 +587,7 @@ public void readBlock(final ExtendedBlock block, final String clientTraceFmt = clientName.length() > 0 && ClientTraceLog.isInfoEnabled() ? String.format(DN_CLIENTTRACE_FORMAT, localAddress, remoteAddress, -"%d", "HDFS_READ", clientName, "%d", +"", "%d", "HDFS_READ", clientName, "%d", Review comment: Because volume has been added to DN_CLIENTTRACE_FORMAT, some adaptations have been made. ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java ## @@ -1631,6 +1633,7 @@ public ReplicaHandler createRbw( if (ref == null) { ref = volumes.getNextVolume(storageType, storageId, b.getNumBytes()); } + LOG.info("Creating Rbw, block: {} on volume: {}", b, ref.getVolume()); Review comment: > is this really necessary? IMO logging one message for every rbw is just too much. I will change this to DEBUG level, do you think it is OK? ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataXceiver.java ## @@ -929,7 +929,7 @@ public void writeBlock(final ExtendedBlock block, if (isDatanode || stage == BlockConstructionStage.PIPELINE_CLOSE_RECOVERY) { datanode.closeBlock(block, null, storageUuid, isOnTransientStorage); -LOG.info("Received {} src: {} dest: {} of size {}", +LOG.info("Received {} src: {} dest: {} volume: {} of size {}", Review comment: Thanks for pointing this, I fixed it. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616395) Time Spent: 2h (was: 1h 50m) > Add volume information to datanode log for tracing > -- > > Key: HDFS-16086 > URL: https://issues.apache.org/jira/browse/HDFS-16086 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Attachments: CreatingRbw.jpg, Received.jpg > > Time Spent: 2h > Remaining Estimate: 0h > > To keep track of the block in volume, we can add the volume information to > the datanode log. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To
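The thread above proposes moving the per-RBW "Creating Rbw" message from INFO to DEBUG so it costs nothing on the hot path unless verbose logging is enabled. Hadoop uses SLF4J; this sketch shows the same level-gating idea with stdlib `java.util.logging`, where FINE plays the role of DEBUG and the `Supplier` defers message construction (the block IDs and paths are made up for illustration).

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch of level-gated per-replica logging: the FINE (≈ DEBUG) message
// is only built when that level is enabled, so a chatty per-block log
// line is essentially free at the default INFO level.
public class RbwLogSketch {
    private static final Logger LOG = Logger.getLogger("datanode");

    static String expensiveDescription(long blockId, String volume) {
        // Stands in for formatting block + volume details.
        return "Creating Rbw, block: " + blockId + " on volume: " + volume;
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO); // default: FINE is suppressed
        // The Supplier is never invoked while FINE is disabled, so the
        // per-replica message costs almost nothing on the write path.
        LOG.log(Level.FINE, () -> expensiveDescription(1073741825L, "/data/1"));
        LOG.info("datanode started"); // operational logging stays at INFO
    }
}
```

SLF4J's parameterized `LOG.debug("... {}", b, ref.getVolume())` achieves the same deferral: the arguments are only formatted into the message when DEBUG is enabled.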
[jira] [Work logged] (HDFS-16096) Delete useless method DirectoryWithQuotaFeature#setQuota
[ https://issues.apache.org/jira/browse/HDFS-16096?focusedWorklogId=616392=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616392 ] ASF GitHub Bot logged work on HDFS-16096: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:53 Start Date: 29/Jun/21 13:53 Worklog Time Spent: 10m Work Description: zhuxiangyi commented on pull request #3156: URL: https://github.com/apache/hadoop/pull/3156#issuecomment-870362633 Thanks @jojochuang for your review. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616392) Time Spent: 40m (was: 0.5h) > Delete useless method DirectoryWithQuotaFeature#setQuota > > > Key: HDFS-16096 > URL: https://issues.apache.org/jira/browse/HDFS-16096 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiangyi Zhu >Assignee: Xiangyi Zhu >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 40m > Remaining Estimate: 0h > > Delete useless method DirectoryWithQuotaFeature#setQuota. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16090) Fine grained locking for datanodeNetworkCounts
[ https://issues.apache.org/jira/browse/HDFS-16090?focusedWorklogId=616387=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616387 ] ASF GitHub Bot logged work on HDFS-16090: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:52 Start Date: 29/Jun/21 13:52 Worklog Time Spent: 10m Work Description: aajisaka commented on a change in pull request #3148: URL: https://github.com/apache/hadoop/pull/3148#discussion_r660477088 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java ## @@ -340,8 +343,7 @@ public static InetSocketAddress createSocketAddr(String target) { private DataNodePeerMetrics peerMetrics; private DataNodeDiskMetrics diskMetrics; private InetSocketAddress streamingAddr; - - // See the note below in incrDatanodeNetworkErrors re: concurrency. + private LoadingCache> datanodeNetworkCounts; Review comment: I think `HashMap` is more efficient than `ConcurrentHashMap` because the LongAdder instance in the Map is never replaced in this case. ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java ## @@ -340,8 +343,7 @@ public static InetSocketAddress createSocketAddr(String target) { private DataNodePeerMetrics peerMetrics; private DataNodeDiskMetrics diskMetrics; private InetSocketAddress streamingAddr; - - // See the note below in incrDatanodeNetworkErrors re: concurrency. + private LoadingCache> datanodeNetworkCounts; Review comment: Oh I found the interface ``` @Override // DataNodeMXBean public Map> getDatanodeNetworkCounts() { return datanodeNetworkCounts.asMap(); } ``` Since the interface cannot be changed, it's okay to use ConcurrentHashMap. +1. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616387) Time Spent: 2h 10m (was: 2h) > Fine grained locking for datanodeNetworkCounts > -- > > Key: HDFS-16090 > URL: https://issues.apache.org/jira/browse/HDFS-16090 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 2h 10m > Remaining Estimate: 0h > > While incrementing DataNode network error count, we lock entire LoadingCache > in order to increment network count of specific host. We should provide fine > grained concurrency for this update because locking entire cache is redundant > and could impact performance while incrementing network count for multiple > hosts. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
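The fine-grained approach HDFS-16090 discusses above can be sketched without the Guava cache: keep per-host `LongAdder` counters inside concurrent maps so an increment for one host never takes a lock covering the whole structure. Names here are illustrative, not the actual DataNode fields, and the real code keeps a `LoadingCache` so stale hosts expire; this shows only the contention-free increment pattern.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Sketch: host -> (error kind -> LongAdder). computeIfAbsent creates
// each counter exactly once, and LongAdder.increment() is lock-free,
// so bumping one host's count never blocks updates for other hosts.
public class NetworkErrorCounters {
    private final ConcurrentHashMap<String, Map<String, LongAdder>> counts =
        new ConcurrentHashMap<>();

    public void incr(String host, String kind) {
        counts.computeIfAbsent(host, h -> new ConcurrentHashMap<>())
              .computeIfAbsent(kind, k -> new LongAdder())
              .increment();
    }

    public long get(String host, String kind) {
        Map<String, LongAdder> perHost = counts.get(host);
        if (perHost == null) {
            return 0L;
        }
        LongAdder adder = perHost.get(kind);
        return adder == null ? 0L : adder.sum();
    }

    public static void main(String[] args) {
        NetworkErrorCounters c = new NetworkErrorCounters();
        c.incr("dn1.example.com", "networkErrors");
        c.incr("dn1.example.com", "networkErrors");
        c.incr("dn2.example.com", "networkErrors");
        System.out.println(c.get("dn1.example.com", "networkErrors")); // 2
    }
}
```

This also reflects the reviewer's point: because the `LongAdder` for a given key is created once and never replaced, readers can call `sum()` without any synchronization, while the JMX-facing view (which the review notes must keep its existing `Map` return type) can be built by converting adders to plain longs on demand.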
[jira] [Work logged] (HDFS-16095) Add lsQuotaList command and getQuotaListing api for hdfs quota
[ https://issues.apache.org/jira/browse/HDFS-16095?focusedWorklogId=616386=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616386 ] ASF GitHub Bot logged work on HDFS-16095: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:52 Start Date: 29/Jun/21 13:52 Worklog Time Spent: 10m Work Description: zhuxiangyi opened a new pull request #3155: URL: https://github.com/apache/hadoop/pull/3155 …quota. ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616386) Time Spent: 20m (was: 10m) > Add lsQuotaList command and getQuotaListing api for hdfs quota > -- > > Key: HDFS-16095 > URL: https://issues.apache.org/jira/browse/HDFS-16095 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.4.0 >Reporter: Xiangyi Zhu >Assignee: Xiangyi Zhu >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > Currently hdfs does not support obtaining all quota information. The > administrator may need to check which quotas have been added to a certain > directory, or the quotas of the entire cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16096) Delete useless method DirectoryWithQuotaFeature#setQuota
[ https://issues.apache.org/jira/browse/HDFS-16096?focusedWorklogId=616370=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616370 ] ASF GitHub Bot logged work on HDFS-16096: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:49 Start Date: 29/Jun/21 13:49 Worklog Time Spent: 10m Work Description: zhuxiangyi opened a new pull request #3156: URL: https://github.com/apache/hadoop/pull/3156 ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616370) Time Spent: 0.5h (was: 20m) > Delete useless method DirectoryWithQuotaFeature#setQuota > > > Key: HDFS-16096 > URL: https://issues.apache.org/jira/browse/HDFS-16096 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiangyi Zhu >Assignee: Xiangyi Zhu >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Delete useless method DirectoryWithQuotaFeature#setQuota. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15329) Provide FileContext based ViewFSOverloadScheme implementation
[ https://issues.apache.org/jira/browse/HDFS-15329?focusedWorklogId=616320=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616320 ] ASF GitHub Bot logged work on HDFS-15329: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:43 Start Date: 29/Jun/21 13:43 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2225: URL: https://github.com/apache/hadoop/pull/2225#issuecomment-869944467 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 35s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 13m 7s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 10s | | trunk passed | | +1 :green_heart: | compile | 20m 56s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 18m 15s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 3m 53s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 12s | | trunk passed | | +1 :green_heart: | javadoc | 2m 18s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 3m 26s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 5m 45s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 27s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 32s | [/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/1/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. | | -1 :x: | mvninstall | 1m 7s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. | | -1 :x: | compile | 0m 59s | [/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/1/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | javac | 0m 59s | [/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/1/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | compile | 0m 54s | [/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. | | -1 :x: | javac | 0m 54s | [/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. 
| | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 3m 24s | | root: The patch generated 0 new + 51 unchanged - 1 fixed = 51 total (was 52) | | -1 :x: | mvnsite | 0m 38s | [/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/1/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch failed. | | -1 :x: | mvnsite | 1m 9s | [/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2225/1/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the
[jira] [Work logged] (HDFS-16092) Avoid creating LayoutFlags redundant objects
[ https://issues.apache.org/jira/browse/HDFS-16092?focusedWorklogId=616291=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616291 ] ASF GitHub Bot logged work on HDFS-16092: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:40 Start Date: 29/Jun/21 13:40 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3150: URL: https://github.com/apache/hadoop/pull/3150#issuecomment-869691737 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616291) Time Spent: 1h 20m (was: 1h 10m) > Avoid creating LayoutFlags redundant objects > > > Key: HDFS-16092 > URL: https://issues.apache.org/jira/browse/HDFS-16092 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.2.3, 3.3.2 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > We use LayoutFlags to represent features that EditLog/FSImage can support. > The utility helps write int (0) to given OutputStream and if EditLog/FSImage > supports Layout flags, they read the value from InputStream to confirm > whether there are unsupported feature flags (non zero int). However, we also > create and return new object of LayoutFlags, which is not used anywhere > because it's just a utility to read/write to/from given stream. We should > remove such redundant objects from getting created while reading from > InputStream using LayoutFlags#read utility. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-15936) Solve BlockSender#sendPacket() does not record SocketTimeout exception
[ https://issues.apache.org/jira/browse/HDFS-15936?focusedWorklogId=616263=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616263 ] ASF GitHub Bot logged work on HDFS-15936: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:37 Start Date: 29/Jun/21 13:37 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2836: URL: https://github.com/apache/hadoop/pull/2836#issuecomment-870082325 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 33s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 15s | | trunk passed | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 17s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 3s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 26s | | trunk passed | | +1 :green_heart: | javadoc | 0m 58s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 31s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 6s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 22s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 10s | | the patch passed | | +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 15s | | the patch passed | | +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 7s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 54s | | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 36 unchanged - 1 fixed = 36 total (was 37) | | +1 :green_heart: | mvnsite | 1m 16s | | the patch passed | | +1 :green_heart: | javadoc | 0m 48s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 9s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 12s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 235m 47s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2836/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. 
| | | | 320m 24s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2836/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2836 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 1741da014125 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / ee238561d9cb3c1a5c14ab69610fd420ab376319 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private
[jira] [Work logged] (HDFS-14839) Use Java Concurrent BlockingQueue instead of Internal BlockQueue
[ https://issues.apache.org/jira/browse/HDFS-14839?focusedWorklogId=616244=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616244 ] ASF GitHub Bot logged work on HDFS-14839: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:35 Start Date: 29/Jun/21 13:35 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #1422: URL: https://github.com/apache/hadoop/pull/1422#issuecomment-869850514 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 0s | | Docker mode activated. | | -1 :x: | patch | 0m 26s | | https://github.com/apache/hadoop/pull/1422 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hadoop/pull/1422 | | JIRA Issue | HDFS-14839 | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1422/1/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616244) Time Spent: 20m (was: 10m) > Use Java Concurrent BlockingQueue instead of Internal BlockQueue > > > Key: HDFS-14839 > URL: https://issues.apache.org/jira/browse/HDFS-14839 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.2.0 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Minor > Labels: pull-request-available > Attachments: HDFS-14839.1.patch > > Time Spent: 20m > Remaining Estimate: 0h > > Replace... > https://github.com/apache/hadoop/blob/d8bac50e12d243ef8fd2c7e0ce5c9997131dee74/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L86 > With... > https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/BlockingQueue.html -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
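The proposal above replaces the hand-rolled BlockQueue in DatanodeDescriptor with java.util.concurrent.BlockingQueue. A minimal sketch of how such a swap typically looks, assuming the queue is used for offer/drain of pending commands (the class and method names below are illustrative, not the actual patch):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the proposed swap: back the pending queue with
// java.util.concurrent.BlockingQueue instead of an internal BlockQueue.
// PendingCommandQueue is an illustrative stand-in, not DatanodeDescriptor.
public class PendingCommandQueue {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Non-blocking enqueue; LinkedBlockingQueue is already thread-safe,
    // so no external synchronization is needed.
    public boolean offer(String cmd) {
        return queue.offer(cmd);
    }

    // Atomically drain up to max queued items, mirroring the usual
    // poll(numItems) pattern on the internal queue.
    public List<String> poll(int max) {
        List<String> results = new ArrayList<>(max);
        queue.drainTo(results, max);
        return results;
    }

    public int size() {
        return queue.size();
    }
}
```

The main design gain is that thread safety and bulk draining come from the JDK class rather than custom synchronization.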
[jira] [Work logged] (HDFS-15650) Make the socket timeout for computing checksum of striped blocks configurable
[ https://issues.apache.org/jira/browse/HDFS-15650?focusedWorklogId=616206=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616206 ] ASF GitHub Bot logged work on HDFS-15650: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:31 Start Date: 29/Jun/21 13:31 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2414: URL: https://github.com/apache/hadoop/pull/2414#issuecomment-869917943 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 53s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 46m 38s | | trunk passed | | +1 :green_heart: | compile | 1m 57s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 58s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 40s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 10s | | trunk passed | | +1 :green_heart: | javadoc | 1m 26s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 5s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 4m 34s | | trunk passed | | -1 :x: | shadedclient | 23m 1s | | branch has errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 21s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2414/1/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. | | -1 :x: | compile | 0m 23s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2414/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | javac | 0m 23s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2414/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | compile | 0m 23s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2414/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-hdfs in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. | | -1 :x: | javac | 0m 23s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2414/1/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-hdfs in the patch failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 21s | [/buildtool-patch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2414/1/artifact/out/buildtool-patch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | The patch fails to run checkstyle in hadoop-hdfs | | -1 :x: | mvnsite | 0m 23s | [/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2414/1/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. | | -1 :x: | javadoc | 0m 23s |
[jira] [Work logged] (HDFS-16090) Fine grained locking for datanodeNetworkCounts
[ https://issues.apache.org/jira/browse/HDFS-16090?focusedWorklogId=616179=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616179 ] ASF GitHub Bot logged work on HDFS-16090: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:24 Start Date: 29/Jun/21 13:24 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3148: URL: https://github.com/apache/hadoop/pull/3148#issuecomment-869728986 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 34s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 30m 58s | | trunk passed | | +1 :green_heart: | compile | 1m 22s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 18s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 3s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 26s | | trunk passed | | +1 :green_heart: | javadoc | 0m 56s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 8s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 13s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 13s | | the patch passed | | +1 :green_heart: | compile | 1m 13s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 13s | | the patch passed | | +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 7s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 55s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 15s | | the patch passed | | +1 :green_heart: | javadoc | 0m 47s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 22s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 6s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 58s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 240m 44s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 45s | | The patch does not generate ASF License warnings. 
| | | | 324m 56s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3148/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3148 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 6ce88f807485 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 3cfc29998e305c5ce60bf11cad8fb42a04cf03ea | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3148/3/testReport/ | | Max. process+thread count | 3369 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3148/3/console | | versions | git=2.25.1
[jira] [Work logged] (HDFS-16028) Add a configuration item for special trash dir
[ https://issues.apache.org/jira/browse/HDFS-16028?focusedWorklogId=616164=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616164 ] ASF GitHub Bot logged work on HDFS-16028: - Author: ASF GitHub Bot Created on: 29/Jun/21 13:19 Start Date: 29/Jun/21 13:19 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3023: URL: https://github.com/apache/hadoop/pull/3023#issuecomment-869984905 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 41s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 8s | | trunk passed | | +1 :green_heart: | compile | 20m 58s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 18m 12s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 9s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 34s | | trunk passed | | +1 :green_heart: | javadoc | 1m 8s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 40s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 2m 27s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 43s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 55s | | the patch passed | | +1 :green_heart: | compile | 20m 3s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 20m 3s | | the patch passed | | +1 :green_heart: | compile | 18m 5s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 18m 5s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 8s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3023/1/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 1 new + 225 unchanged - 0 fixed = 226 total (was 225) | | +1 :green_heart: | mvnsite | 1m 33s | | the patch passed | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 1m 5s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 39s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 2m 33s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 0s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 17m 1s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 59s | | The patch does not generate ASF License warnings. 
| | | | 176m 3s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3023/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3023 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml | | uname | Linux c797495d90fe 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 48a439494ba7ca181237e0271f41b28ef477683b | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3023/1/testReport/ | | Max.
[jira] [Updated] (HDFS-16097) Datanode receives ipc requests will throw NPE when datanode quickly restart
[ https://issues.apache.org/jira/browse/HDFS-16097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] lei w updated HDFS-16097: - Attachment: HDFS-16097.001.patch > Datanode receives ipc requests will throw NPE when datanode quickly restart > > > Key: HDFS-16097 > URL: https://issues.apache.org/jira/browse/HDFS-16097 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode > Environment: >Reporter: lei w >Priority: Major > Attachments: HDFS-16097.001.patch > > > A DataNode that is restarted quickly can throw an NPE on incoming IPC requests. > This is because on restart the BlockPool is first registered with the > blockPoolManager and only then is the fsdataset initialized. If an IPC request > arrives after the BlockPool has been registered but before the fsdataset is > initialized, the DataNode throws an NPE, because the request handler calls > methods provided by the fsdataset. The exception stack is as follows: > {code:java} > java.lang.NullPointerException > at > org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:3468) > at > org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) > at > org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:916) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:862) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > {code}
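The race described above suggests a simple guard: IPC entry points should fail with a clean, retriable exception while the fsdataset field is still null during startup, instead of surfacing an NPE. A hedged sketch of that idea (the class and field names are stand-ins, not the actual patch or DataNode code):

```java
import java.io.IOException;

// Sketch of a startup guard: reject IPC calls cleanly while the DataNode's
// fsdataset is still null. DatasetGuard and its members are illustrative.
public class DatasetGuard {
    // In the real DataNode this would be the fsdataset field, set late in startup.
    private volatile Object data;

    public void setDataset(Object dataset) {
        this.data = dataset;
    }

    // Called at the top of each IPC handler (e.g. initReplicaRecovery).
    public Object checkDatasetReady() throws IOException {
        Object ds = data;
        if (ds == null) {
            // A plain IOException lets the client back off and retry once
            // initialization completes, rather than seeing an NPE.
            throw new IOException("DataNode is still initializing; dataset not ready");
        }
        return ds;
    }
}
```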
[jira] [Comment Edited] (HDFS-16093) DataNodes under decommission will still be returned to the client via getLocatedBlocks, so the client may request decommissioning datanodes to read which will caus
[ https://issues.apache.org/jira/browse/HDFS-16093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371306#comment-17371306 ] tomscut edited comment on HDFS-16093 at 6/29/21, 11:47 AM: --- IMO, abnormal nodes cannot simply be removed. Sorting may be a better choice, because it makes it easier to fall back to a degraded read when there are not enough normal nodes, just like the example [~sodonnell] gave. We can see org.apache.hadoop.hdfs.DFSInputStream#getBestNodeDNAddrPair(). was (Author: tomscut): IMO, abnormal nodes cannot be removed directly. Maybe sorting is a better choice, because it makes it easier to degrade read when there are not enough normal nodes. Just like the example [~sodonnell] gave. See org.apache.hadoop.hdfs.DFSInputStream#getBestNodeDNAddrPair(). > DataNodes under decommission will still be returned to the client via > getLocatedBlocks, so the client may request decommissioning datanodes to read > which will cause bad contention on disk IO. > -- > > Key: HDFS-16093 > URL: https://issues.apache.org/jira/browse/HDFS-16093 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.3.1 >Reporter: Daniel Ma >Priority: Critical > > DataNodes under decommission will still be returned to the client via > getLocatedBlocks, so the client may request decommissioning datanodes to read > which will cause bad contention on disk IO. > Therefore, datanodes under decommission should be removed from the return > list of the getLocatedBlocks API. > !image-2021-06-29-10-50-44-739.png!
[jira] [Comment Edited] (HDFS-16093) DataNodes under decommission will still be returned to the client via getLocatedBlocks, so the client may request decommissioning datanodes to read which will caus
[ https://issues.apache.org/jira/browse/HDFS-16093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371306#comment-17371306 ] tomscut edited comment on HDFS-16093 at 6/29/21, 11:46 AM: --- IMO, abnormal nodes cannot simply be removed. Sorting may be a better choice, because it makes it easier to fall back to a degraded read when there are not enough normal nodes, just like the example [~sodonnell] gave. See org.apache.hadoop.hdfs.DFSInputStream#getBestNodeDNAddrPair(). was (Author: tomscut): IMO, abnormal nodes cannot be removed directly. Maybe sorting is a better choice, because it makes it easier to degrade read when there are not enough normal nodes. Just like the example [~sodonnell] gave. > DataNodes under decommission will still be returned to the client via > getLocatedBlocks, so the client may request decommissioning datanodes to read > which will cause bad contention on disk IO. > -- > > Key: HDFS-16093 > URL: https://issues.apache.org/jira/browse/HDFS-16093 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.3.1 >Reporter: Daniel Ma >Priority: Critical > > DataNodes under decommission will still be returned to the client via > getLocatedBlocks, so the client may request decommissioning datanodes to read > which will cause bad contention on disk IO. > Therefore, datanodes under decommission should be removed from the return > list of the getLocatedBlocks API. > !image-2021-06-29-10-50-44-739.png!
[jira] [Commented] (HDFS-16093) DataNodes under decommission will still be returned to the client via getLocatedBlocks, so the client may request decommissioning datanodes to read which will cause bad
[ https://issues.apache.org/jira/browse/HDFS-16093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371306#comment-17371306 ] tomscut commented on HDFS-16093: IMO, abnormal nodes cannot simply be removed. Sorting may be a better choice, because it makes it easier to fall back to a degraded read when there are not enough normal nodes, just like the example [~sodonnell] gave. > DataNodes under decommission will still be returned to the client via > getLocatedBlocks, so the client may request decommissioning datanodes to read > which will cause bad contention on disk IO. > -- > > Key: HDFS-16093 > URL: https://issues.apache.org/jira/browse/HDFS-16093 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.3.1 >Reporter: Daniel Ma >Priority: Critical > > DataNodes under decommission will still be returned to the client via > getLocatedBlocks, so the client may request decommissioning datanodes to read > which will cause bad contention on disk IO. > Therefore, datanodes under decommission should be removed from the return > list of the getLocatedBlocks API. > !image-2021-06-29-10-50-44-739.png!
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=616099=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616099 ] ASF GitHub Bot logged work on HDFS-16086: - Author: ASF GitHub Bot Created on: 29/Jun/21 11:29 Start Date: 29/Jun/21 11:29 Worklog Time Spent: 10m Work Description: tomscut commented on pull request #3136: URL: https://github.com/apache/hadoop/pull/3136#issuecomment-870513731 These failed UTs pass when run locally. Issue Time Tracking --- Worklog Id: (was: 616099) Time Spent: 1h 50m (was: 1h 40m) > Add volume information to datanode log for tracing > -- > > Key: HDFS-16086 > URL: https://issues.apache.org/jira/browse/HDFS-16086 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Attachments: CreatingRbw.jpg, Received.jpg > > Time Spent: 1h 50m > Remaining Estimate: 0h > > To keep track of which volume a block is on, we can add the volume information to > the datanode log.
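The change above amounts to enriching existing DataNode log lines with the target volume. A tiny sketch of the kind of log line this implies, assuming the block id and volume path are in scope where the block is received (formatReceivedLog is a made-up helper, not an actual DataNode method):

```java
// Illustrative sketch: include the target volume next to the block id so a
// block can be traced to a specific disk from the datanode log alone.
public class VolumeLogSketch {
    public static String formatReceivedLog(String blockId, String volumePath) {
        // In the real change this string would go through the DataNode's
        // SLF4J logger; here we just build the message.
        return "Received " + blockId + " on volume " + volumePath;
    }
}
```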
[jira] [Commented] (HDFS-16093) DataNodes under decommission will still be returned to the client via getLocatedBlocks, so the client may request decommissioning datanodes to read which will cause bad
[ https://issues.apache.org/jira/browse/HDFS-16093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371298#comment-17371298 ] Stephen O'Donnell commented on HDFS-16093: -- I'm not sure if we can simply remove them. There is also a distinction between DECOMMISSIONING and DECOMMISSIONED. It is possible for all 3 replicas of a file to be on DECOMMISSIONING hosts, in which case the file can only be read if those hosts are returned. For DECOMMISSIONED hosts that are alive and not stale, I think they can be used for reads in some circumstances. I recall seeing comments in the code suggesting DECOMMISSIONED replicas can be used as a "last resort". > DataNodes under decommission will still be returned to the client via > getLocatedBlocks, so the client may request decommissioning datanodes to read > which will cause bad contention on disk IO. > -- > > Key: HDFS-16093 > URL: https://issues.apache.org/jira/browse/HDFS-16093 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.3.1 >Reporter: Daniel Ma >Priority: Critical > > DataNodes under decommission will still be returned to the client via > getLocatedBlocks, so the client may request decommissioning datanodes to read > which will cause bad contention on disk IO. > Therefore, datanodes under decommission should be removed from the return > list of the getLocatedBlocks API. > !image-2021-06-29-10-50-44-739.png!
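The "sort rather than remove" idea discussed in the comments above can be sketched as a stable sort over the located-block replica list: decommissioning and decommissioned replicas stay in the list but are ordered after healthy ones, so clients only fall back to them as a last resort. The enum and method below are stand-ins for DatanodeInfo's admin state, not the actual HDFS sorting code:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch: keep decommissioning/decommissioned replicas but sort
// them behind healthy ones. Names are hypothetical, not the HDFS API.
public class ReplicaSort {
    public enum AdminState { NORMAL, DECOMMISSIONING, DECOMMISSIONED }

    // Stable sort by admin state ordinal: NORMAL first, DECOMMISSIONED last,
    // so ties between healthy nodes keep their original (e.g. network
    // distance) order.
    public static List<String> sortByState(List<String> nodes, List<AdminState> states) {
        Integer[] idx = new Integer[nodes.size()];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        Arrays.sort(idx, Comparator.comparingInt(i -> states.get(i).ordinal()));
        return Arrays.stream(idx).map(nodes::get).collect(Collectors.toList());
    }
}
```

This keeps the degraded-read path working when every replica is on a decommissioning host, which outright removal would break.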
[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing
[ https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=616097=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616097 ] ASF GitHub Bot logged work on HDFS-16086: - Author: ASF GitHub Bot Created on: 29/Jun/21 11:19 Start Date: 29/Jun/21 11:19 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3136: URL: https://github.com/apache/hadoop/pull/3136#issuecomment-870507562 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 48s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 31s | | trunk passed | | +1 :green_heart: | compile | 1m 25s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 16s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 2s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 25s | | trunk passed | | +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 26s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 15s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 58s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 16s | | the patch passed | | +1 :green_heart: | compile | 1m 18s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | javac | 1m 18s | [/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/3/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 36 new + 467 unchanged - 36 fixed = 503 total (was 503) | | +1 :green_heart: | compile | 1m 9s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 9s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 54s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/3/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 406 unchanged - 0 fixed = 407 total (was 406) | | +1 :green_heart: | mvnsite | 1m 16s | | the patch passed | | +1 :green_heart: | javadoc | 0m 49s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 21s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 51s | | patch has no errors when building and testing our client artifacts. 
| _ Other Tests _ | | -1 :x: | unit | 347m 56s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. | | | | 440m 13s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | Subsystem | Report/Notes
[jira] [Work logged] (HDFS-16089) EC: Add metric EcReconstructionValidateTimeMillis for StripedBlockReconstructor
[ https://issues.apache.org/jira/browse/HDFS-16089?focusedWorklogId=616085=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616085 ] ASF GitHub Bot logged work on HDFS-16089: - Author: ASF GitHub Bot Created on: 29/Jun/21 10:34 Start Date: 29/Jun/21 10:34 Worklog Time Spent: 10m Work Description: tomscut commented on pull request #3146: URL: https://github.com/apache/hadoop/pull/3146#issuecomment-870478949 > Merged. Thanks @tomscut Thanks @jojochuang again. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616085) Time Spent: 1h 10m (was: 1h) > EC: Add metric EcReconstructionValidateTimeMillis for > StripedBlockReconstructor > --- > > Key: HDFS-16089 > URL: https://issues.apache.org/jira/browse/HDFS-16089 > Project: Hadoop HDFS > Issue Type: Wish >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0, 3.3.2 > > Time Spent: 1h 10m > Remaining Estimate: 0h > > Add metric EcReconstructionValidateTimeMillis for StripedBlockReconstructor, > so that we can count the elapsed time for striped block reconstructing. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
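The metric discussed above counts the elapsed time of the validation step during striped block reconstruction. A minimal, self-contained sketch of how such a timer metric is typically accumulated, using a monotonic clock around the timed step (the class and field names below are illustrative, not the actual StripedBlockReconstructor code):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

public class ValidateTimer {
    // Accumulates total validation time, analogous to a counter metric
    // such as EcReconstructionValidateTimeMillis (illustrative name reuse).
    private final LongAdder validateTimeMillis = new LongAdder();

    // Runs the (hypothetical) validation step and records its elapsed time.
    void timeValidation(Runnable validate) {
        long start = System.nanoTime();  // monotonic clock, safe for intervals
        validate.run();
        validateTimeMillis.add(
            TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start));
    }

    long getValidateTimeMillis() {
        return validateTimeMillis.sum();
    }

    public static void main(String[] args) {
        ValidateTimer timer = new ValidateTimer();
        // Stand-in for the real validation work: just sleep ~20 ms.
        timer.timeValidation(() -> {
            try { Thread.sleep(20); } catch (InterruptedException ignored) { }
        });
        System.out.println(timer.getValidateTimeMillis() >= 10);
    }
}
```

`System.nanoTime()` is preferred over `System.currentTimeMillis()` here because wall-clock adjustments would corrupt an interval measurement.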
[jira] [Work logged] (HDFS-16090) Fine grained locking for datanodeNetworkCounts
[ https://issues.apache.org/jira/browse/HDFS-16090?focusedWorklogId=616072=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616072 ] ASF GitHub Bot logged work on HDFS-16090: - Author: ASF GitHub Bot Created on: 29/Jun/21 10:16 Start Date: 29/Jun/21 10:16 Worklog Time Spent: 10m Work Description: aajisaka commented on a change in pull request #3148: URL: https://github.com/apache/hadoop/pull/3148#discussion_r660478422 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java ## @@ -340,8 +343,7 @@ public static InetSocketAddress createSocketAddr(String target) { private DataNodePeerMetrics peerMetrics; private DataNodeDiskMetrics diskMetrics; private InetSocketAddress streamingAddr; - - // See the note below in incrDatanodeNetworkErrors re: concurrency. + private LoadingCache<String, Map<String, Long>> datanodeNetworkCounts; Review comment: Oh I found the interface ``` @Override // DataNodeMXBean public Map<String, Map<String, Long>> getDatanodeNetworkCounts() { return datanodeNetworkCounts.asMap(); } ``` Since the interface cannot be changed, it's okay to use ConcurrentHashMap. +1. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616072) Time Spent: 1h 50m (was: 1h 40m) > Fine grained locking for datanodeNetworkCounts > -- > > Key: HDFS-16090 > URL: https://issues.apache.org/jira/browse/HDFS-16090 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 1h 50m > Remaining Estimate: 0h > > While incrementing the DataNode network error count, we lock the entire LoadingCache > in order to increment the network count of a specific host. 
We should provide > fine-grained concurrency for this update, because locking the entire cache is > redundant and could impact performance when incrementing the network count for > multiple hosts. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
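The fine-grained alternative discussed in this thread, per-host lock-free counters instead of synchronizing on the whole cache, can be sketched with JDK types alone. `ConcurrentHashMap` and `LongAdder` stand in for the Guava `LoadingCache` actually used in DataNode; the class and method names below are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class NetworkErrorCounts {
    // host -> (errorKind -> count). Both levels allow lock-free increments,
    // unlike taking a lock over an entire cache for every update.
    private final ConcurrentHashMap<String, ConcurrentHashMap<String, LongAdder>> counts =
        new ConcurrentHashMap<>();

    void incrNetworkErrors(String host) {
        counts.computeIfAbsent(host, h -> new ConcurrentHashMap<>())
              .computeIfAbsent("networkErrors", k -> new LongAdder())
              .increment();                      // contention-friendly counter
    }

    long get(String host) {
        Map<String, LongAdder> m = counts.get(host);
        LongAdder a = (m == null) ? null : m.get("networkErrors");
        return (a == null) ? 0 : a.sum();
    }

    public static void main(String[] args) {
        NetworkErrorCounts c = new NetworkErrorCounts();
        for (int i = 0; i < 3; i++) c.incrNetworkErrors("dn1");
        c.incrNetworkErrors("dn2");
        System.out.println(c.get("dn1") + "," + c.get("dn2"));
    }
}
```

`computeIfAbsent` only locks the touched bin, so concurrent increments for different hosts do not serialize against each other; `LongAdder` further reduces contention when many threads hit the same host.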
[jira] [Resolved] (HDFS-16089) EC: Add metric EcReconstructionValidateTimeMillis for StripedBlockReconstructor
[ https://issues.apache.org/jira/browse/HDFS-16089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang resolved HDFS-16089. Fix Version/s: 3.3.2 3.4.0 Resolution: Fixed > EC: Add metric EcReconstructionValidateTimeMillis for > StripedBlockReconstructor > --- > > Key: HDFS-16089 > URL: https://issues.apache.org/jira/browse/HDFS-16089 > Project: Hadoop HDFS > Issue Type: Wish >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0, 3.3.2 > > Time Spent: 1h > Remaining Estimate: 0h > > Add metric EcReconstructionValidateTimeMillis for StripedBlockReconstructor, > so that we can count the elapsed time for striped block reconstructing. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16089) EC: Add metric EcReconstructionValidateTimeMillis for StripedBlockReconstructor
[ https://issues.apache.org/jira/browse/HDFS-16089?focusedWorklogId=616070=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616070 ] ASF GitHub Bot logged work on HDFS-16089: - Author: ASF GitHub Bot Created on: 29/Jun/21 10:15 Start Date: 29/Jun/21 10:15 Worklog Time Spent: 10m Work Description: jojochuang merged pull request #3146: URL: https://github.com/apache/hadoop/pull/3146 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616070) Time Spent: 50m (was: 40m) > EC: Add metric EcReconstructionValidateTimeMillis for > StripedBlockReconstructor > --- > > Key: HDFS-16089 > URL: https://issues.apache.org/jira/browse/HDFS-16089 > Project: Hadoop HDFS > Issue Type: Wish >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > Add metric EcReconstructionValidateTimeMillis for StripedBlockReconstructor, > so that we can count the elapsed time for striped block reconstructing. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16089) EC: Add metric EcReconstructionValidateTimeMillis for StripedBlockReconstructor
[ https://issues.apache.org/jira/browse/HDFS-16089?focusedWorklogId=616071=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616071 ] ASF GitHub Bot logged work on HDFS-16089: - Author: ASF GitHub Bot Created on: 29/Jun/21 10:15 Start Date: 29/Jun/21 10:15 Worklog Time Spent: 10m Work Description: jojochuang commented on pull request #3146: URL: https://github.com/apache/hadoop/pull/3146#issuecomment-870466195 Merged. Thanks @tomscut -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616071) Time Spent: 1h (was: 50m) > EC: Add metric EcReconstructionValidateTimeMillis for > StripedBlockReconstructor > --- > > Key: HDFS-16089 > URL: https://issues.apache.org/jira/browse/HDFS-16089 > Project: Hadoop HDFS > Issue Type: Wish >Reporter: tomscut >Assignee: tomscut >Priority: Minor > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > Add metric EcReconstructionValidateTimeMillis for StripedBlockReconstructor, > so that we can count the elapsed time for striped block reconstructing. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16090) Fine grained locking for datanodeNetworkCounts
[ https://issues.apache.org/jira/browse/HDFS-16090?focusedWorklogId=616069=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616069 ] ASF GitHub Bot logged work on HDFS-16090: - Author: ASF GitHub Bot Created on: 29/Jun/21 10:14 Start Date: 29/Jun/21 10:14 Worklog Time Spent: 10m Work Description: aajisaka commented on a change in pull request #3148: URL: https://github.com/apache/hadoop/pull/3148#discussion_r660477088 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java ## @@ -340,8 +343,7 @@ public static InetSocketAddress createSocketAddr(String target) { private DataNodePeerMetrics peerMetrics; private DataNodeDiskMetrics diskMetrics; private InetSocketAddress streamingAddr; - - // See the note below in incrDatanodeNetworkErrors re: concurrency. + private LoadingCache<String, Map<String, Long>> datanodeNetworkCounts; Review comment: I think `HashMap` is more efficient than `ConcurrentHashMap` because the LongAdder instance in the Map is never replaced in this case. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616069) Time Spent: 1h 40m (was: 1.5h) > Fine grained locking for datanodeNetworkCounts > -- > > Key: HDFS-16090 > URL: https://issues.apache.org/jira/browse/HDFS-16090 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 1h 40m > Remaining Estimate: 0h > > While incrementing the DataNode network error count, we lock the entire LoadingCache > in order to increment the network count of a specific host. 
We should provide > fine-grained concurrency for this update, because locking the entire cache is > redundant and could impact performance when incrementing the network count for > multiple hosts. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-16092) Avoid creating LayoutFlags redundant objects
[ https://issues.apache.org/jira/browse/HDFS-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-16092: --- Fix Version/s: 3.2.3 > Avoid creating LayoutFlags redundant objects > > > Key: HDFS-16092 > URL: https://issues.apache.org/jira/browse/HDFS-16092 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.2.3, 3.3.2 > > Time Spent: 1h 10m > Remaining Estimate: 0h > > We use LayoutFlags to represent features that EditLog/FSImage can support. > The utility helps write int (0) to given OutputStream and if EditLog/FSImage > supports Layout flags, they read the value from InputStream to confirm > whether there are unsupported feature flags (non zero int). However, we also > create and return new object of LayoutFlags, which is not used anywhere > because it's just a utility to read/write to/from given stream. We should > remove such redundant objects from getting created while reading from > InputStream using LayoutFlags#read utility. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-16092) Avoid creating LayoutFlags redundant objects
[ https://issues.apache.org/jira/browse/HDFS-16092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-16092: --- Fix Version/s: 3.3.2 3.4.0 Resolution: Fixed Status: Resolved (was: Patch Available) Thanks [~vjasani] > Avoid creating LayoutFlags redundant objects > > > Key: HDFS-16092 > URL: https://issues.apache.org/jira/browse/HDFS-16092 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.2 > > Time Spent: 1h 10m > Remaining Estimate: 0h > > We use LayoutFlags to represent features that EditLog/FSImage can support. > The utility helps write int (0) to given OutputStream and if EditLog/FSImage > supports Layout flags, they read the value from InputStream to confirm > whether there are unsupported feature flags (non zero int). However, we also > create and return new object of LayoutFlags, which is not used anywhere > because it's just a utility to read/write to/from given stream. We should > remove such redundant objects from getting created while reading from > InputStream using LayoutFlags#read utility. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
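The description above boils down to: the writer always emits an int 0, and the reader only needs to verify that the stored int is zero, so constructing and returning a fresh LayoutFlags object on every read is wasted allocation. A rough sketch of the utility's contract with a void-returning read (the class shape and the error message are illustrative, not the exact HDFS code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public final class LayoutFlagsSketch {
    private LayoutFlagsSketch() { }  // pure utility; no instances are needed

    // Writes the feature-flag count; it is always zero today.
    public static void write(DataOutputStream out) throws IOException {
        out.writeInt(0);
    }

    // Returns void: the only job is to reject non-zero (unsupported) flags,
    // so there is nothing useful to allocate and hand back to the caller.
    public static void read(DataInputStream in) throws IOException {
        int length = in.readInt();
        if (length != 0) {
            throw new IOException("Unsupported feature flags found: " + length);
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        write(new DataOutputStream(buf));
        read(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println("ok");
    }
}
```

This mirrors the change described in the issue: the read path becomes a pure validation step with no object churn on the EditLog/FSImage loading paths.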
[jira] [Work logged] (HDFS-16092) Avoid creating LayoutFlags redundant objects
[ https://issues.apache.org/jira/browse/HDFS-16092?focusedWorklogId=616059=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616059 ] ASF GitHub Bot logged work on HDFS-16092: - Author: ASF GitHub Bot Created on: 29/Jun/21 09:31 Start Date: 29/Jun/21 09:31 Worklog Time Spent: 10m Work Description: jojochuang merged pull request #3150: URL: https://github.com/apache/hadoop/pull/3150 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616059) Time Spent: 1h 10m (was: 1h) > Avoid creating LayoutFlags redundant objects > > > Key: HDFS-16092 > URL: https://issues.apache.org/jira/browse/HDFS-16092 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > We use LayoutFlags to represent features that EditLog/FSImage can support. > The utility helps write int (0) to given OutputStream and if EditLog/FSImage > supports Layout flags, they read the value from InputStream to confirm > whether there are unsupported feature flags (non zero int). However, we also > create and return new object of LayoutFlags, which is not used anywhere > because it's just a utility to read/write to/from given stream. We should > remove such redundant objects from getting created while reading from > InputStream using LayoutFlags#read utility. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-16097) Datanode receives ipc requests will throw NPE when datanode quickly restart
lei w created HDFS-16097: Summary: Datanode receives ipc requests will throw NPE when datanode quickly restart Key: HDFS-16097 URL: https://issues.apache.org/jira/browse/HDFS-16097 Project: Hadoop HDFS Issue Type: Bug Components: datanode Environment: Reporter: lei w The DataNode throws an NPE when it receives IPC requests during a quick restart. This is because when the DN is restarted, the BlockPool is first registered with blockPoolManager and only then is fsdataset initialized. While the BlockPool is registered with blockPoolManager but fsdataset is not yet initialized, a DataNode that receives an IPC request throws an NPE, because it calls methods provided by fsdataset. The stack trace is as follows: {code:java} java.lang.NullPointerException at org.apache.hadoop.hdfs.server.datanode.DataNode.initReplicaRecovery(DataNode.java:3468) at org.apache.hadoop.hdfs.protocolPB.InterDatanodeProtocolServerSideTranslatorPB.initReplicaRecovery(InterDatanodeProtocolServerSideTranslatorPB.java:55) at org.apache.hadoop.hdfs.protocol.proto.InterDatanodeProtocolProtos$InterDatanodeProtocolService$2.callBlockingMethod(InterDatanodeProtocolProtos.java:3105) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523) at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:916) at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:862) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
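One hedged way to model the window described above, where IPC handlers consult a dataset field that is assigned only after block-pool registration, is a fail-fast guard that rejects the call instead of dereferencing null. This is a sketch of the pattern, not the actual DataNode fix; `dataset` stands in for the real FsDatasetSpi field:

```java
import java.io.IOException;

public class DataNodeIpcSketch {
    // Assigned only after block pools register; an early IPC call can see null.
    // 'volatile' models safe publication of the late-initialized field.
    private volatile Object dataset;  // stand-in for FsDatasetSpi

    void initDataset() { dataset = new Object(); }

    // Hypothetical guard: surface a retriable IOException rather than an NPE.
    void initReplicaRecovery() throws IOException {
        if (dataset == null) {
            throw new IOException("Datanode is not yet fully initialized; retry");
        }
        // ...would delegate to dataset's recovery method here
    }

    public static void main(String[] args) {
        DataNodeIpcSketch dn = new DataNodeIpcSketch();
        try {
            dn.initReplicaRecovery();  // before fsdataset init: rejected cleanly
            System.out.println("unexpected");
        } catch (IOException e) {
            System.out.println("rejected");
        }
        dn.initDataset();
        try {
            dn.initReplicaRecovery();  // after init: proceeds
            System.out.println("ok");
        } catch (IOException e) {
            System.out.println("unexpected");
        }
    }
}
```

An IOException is something the inter-datanode RPC client can retry, whereas the NPE in the reported stack surfaces as an opaque server-side failure.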
[jira] [Work logged] (HDFS-16096) Delete useless method DirectoryWithQuotaFeature#setQuota
[ https://issues.apache.org/jira/browse/HDFS-16096?focusedWorklogId=616037=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616037 ] ASF GitHub Bot logged work on HDFS-16096: - Author: ASF GitHub Bot Created on: 29/Jun/21 07:52 Start Date: 29/Jun/21 07:52 Worklog Time Spent: 10m Work Description: zhuxiangyi commented on pull request #3156: URL: https://github.com/apache/hadoop/pull/3156#issuecomment-870362633 Thanks @jojochuang for your review. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616037) Time Spent: 20m (was: 10m) > Delete useless method DirectoryWithQuotaFeature#setQuota > > > Key: HDFS-16096 > URL: https://issues.apache.org/jira/browse/HDFS-16096 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiangyi Zhu >Assignee: Xiangyi Zhu >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Delete useless method DirectoryWithQuotaFeature#setQuota. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-16095) Add lsQuotaList command and getQuotaListing api for hdfs quota
[ https://issues.apache.org/jira/browse/HDFS-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371193#comment-17371193 ] Wei-Chiu Chuang commented on HDFS-16095: Thanks for opening the jira. Added you as the jira assignee. > Add lsQuotaList command and getQuotaListing api for hdfs quota > -- > > Key: HDFS-16095 > URL: https://issues.apache.org/jira/browse/HDFS-16095 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.4.0 >Reporter: Xiangyi Zhu >Assignee: Xiangyi Zhu >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Currently hdfs does not support obtaining all quota information. The > administrator may need to check which quotas have been added to a certain > directory, or the quotas of the entire cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-16096) Delete useless method DirectoryWithQuotaFeature#setQuota
[ https://issues.apache.org/jira/browse/HDFS-16096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HDFS-16096: -- Assignee: Xiangyi Zhu > Delete useless method DirectoryWithQuotaFeature#setQuota > > > Key: HDFS-16096 > URL: https://issues.apache.org/jira/browse/HDFS-16096 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiangyi Zhu >Assignee: Xiangyi Zhu >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 10m > Remaining Estimate: 0h > > Delete useless method DirectoryWithQuotaFeature#setQuota. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-16095) Add lsQuotaList command and getQuotaListing api for hdfs quota
[ https://issues.apache.org/jira/browse/HDFS-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang reassigned HDFS-16095: -- Assignee: Xiangyi Zhu > Add lsQuotaList command and getQuotaListing api for hdfs quota > -- > > Key: HDFS-16095 > URL: https://issues.apache.org/jira/browse/HDFS-16095 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.4.0 >Reporter: Xiangyi Zhu >Assignee: Xiangyi Zhu >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Currently hdfs does not support obtaining all quota information. The > administrator may need to check which quotas have been added to a certain > directory, or the quotas of the entire cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-16096) Delete useless method DirectoryWithQuotaFeature#setQuota
[ https://issues.apache.org/jira/browse/HDFS-16096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HDFS-16096: -- Labels: pull-request-available (was: ) > Delete useless method DirectoryWithQuotaFeature#setQuota > > > Key: HDFS-16096 > URL: https://issues.apache.org/jira/browse/HDFS-16096 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiangyi Zhu >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 10m > Remaining Estimate: 0h > > Delete useless method DirectoryWithQuotaFeature#setQuota. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work logged] (HDFS-16096) Delete useless method DirectoryWithQuotaFeature#setQuota
[ https://issues.apache.org/jira/browse/HDFS-16096?focusedWorklogId=616033=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616033 ] ASF GitHub Bot logged work on HDFS-16096: - Author: ASF GitHub Bot Created on: 29/Jun/21 07:32 Start Date: 29/Jun/21 07:32 Worklog Time Spent: 10m Work Description: zhuxiangyi opened a new pull request #3156: URL: https://github.com/apache/hadoop/pull/3156 ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616033) Remaining Estimate: 0h Time Spent: 10m > Delete useless method DirectoryWithQuotaFeature#setQuota > > > Key: HDFS-16096 > URL: https://issues.apache.org/jira/browse/HDFS-16096 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Reporter: Xiangyi Zhu >Priority: Major > Fix For: 3.4.0 > > Time Spent: 10m > Remaining Estimate: 0h > > Delete useless method DirectoryWithQuotaFeature#setQuota. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-16096) Delete useless method DirectoryWithQuotaFeature#setQuota
Xiangyi Zhu created HDFS-16096: -- Summary: Delete useless method DirectoryWithQuotaFeature#setQuota Key: HDFS-16096 URL: https://issues.apache.org/jira/browse/HDFS-16096 Project: Hadoop HDFS Issue Type: Improvement Components: hdfs Reporter: Xiangyi Zhu Fix For: 3.4.0 Delete useless method DirectoryWithQuotaFeature#setQuota. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-16067) Support Append API in NNThroughputBenchmark
[ https://issues.apache.org/jira/browse/HDFS-16067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371175#comment-17371175 ] Renukaprasad C commented on HDFS-16067: --- Thanks [~ayushtkn] for review & feedback. {code:java} HdfsFileStatus status = blkWithStatus.getFileStatus(); {code} I added this as a read API after the APPEND operation; apart from that, it is not related to it. I will address the other comments and update the patch soon. Thank you. > Support Append API in NNThroughputBenchmark > --- > > Key: HDFS-16067 > URL: https://issues.apache.org/jira/browse/HDFS-16067 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Renukaprasad C >Assignee: Renukaprasad C >Priority: Minor > Attachments: HDFS-16067.001.patch, HDFS-16067.002.patch, > HDFS-16067.003.patch, HDFS-16067.004.patch > > > Append API needs to be added into NNThroughputBenchmark tool. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15650) Make the socket timeout for computing checksum of striped blocks configurable
[ https://issues.apache.org/jira/browse/HDFS-15650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17371155#comment-17371155 ] Hongbing Wang commented on HDFS-15650: -- [~yhaya] [~weichiu] Hi! In our practice, when there are a large number of EC checksum operations (such as distcp with checksum), we see many socket timeouts, and a retry generally succeeds. (Note: -HDFS-15709- has been merged.) I think it makes sense to fix the hard-coded value. The new config `dfs.checksum.ec.socket-timeout` looks good. Do you have any plan to fix this issue? Thanks! > Make the socket timeout for computing checksum of striped blocks configurable > - > > Key: HDFS-15650 > URL: https://issues.apache.org/jira/browse/HDFS-15650 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode, ec, erasure-coding >Reporter: Yushi Hayasaka >Priority: Minor > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > When the DataNode gets the checksum of EC internal blocks from > another DataNode to compute the checksum of striped blocks, the timeout is > hard-coded now, but it should be configurable. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
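If the key proposed in this thread is adopted, it would presumably be set like any other hdfs-site.xml property. The key name is taken from the comment above; the value and description shown here are assumptions for illustration, not text from the source:

```xml
<!-- hdfs-site.xml: proposed knob from the discussion above. The 3000 ms
     value is an assumption standing in for the previously hard-coded timeout. -->
<property>
  <name>dfs.checksum.ec.socket-timeout</name>
  <value>3000</value>
  <description>Socket timeout, in milliseconds, used when a DataNode fetches
  checksums of EC internal blocks from other DataNodes while computing a
  striped block checksum.</description>
</property>
```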
[jira] [Work logged] (HDFS-16092) Avoid creating LayoutFlags redundant objects
[ https://issues.apache.org/jira/browse/HDFS-16092?focusedWorklogId=616001=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-616001 ] ASF GitHub Bot logged work on HDFS-16092: - Author: ASF GitHub Bot Created on: 29/Jun/21 06:15 Start Date: 29/Jun/21 06:15 Worklog Time Spent: 10m Work Description: virajjasani commented on pull request #3150: URL: https://github.com/apache/hadoop/pull/3150#issuecomment-870269640 Thanks for the review @jojochuang. The failed tests don't seem related, they are mostly timeout and OOM related flakies. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 616001) Time Spent: 1h (was: 50m) > Avoid creating LayoutFlags redundant objects > > > Key: HDFS-16092 > URL: https://issues.apache.org/jira/browse/HDFS-16092 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > We use LayoutFlags to represent features that EditLog/FSImage can support. > The utility helps write int (0) to given OutputStream and if EditLog/FSImage > supports Layout flags, they read the value from InputStream to confirm > whether there are unsupported feature flags (non zero int). However, we also > create and return new object of LayoutFlags, which is not used anywhere > because it's just a utility to read/write to/from given stream. We should > remove such redundant objects from getting created while reading from > InputStream using LayoutFlags#read utility. 
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org