Re: [PR] YARN-11548. [Federation] Router Supports Format FederationStateStore. [hadoop]
hadoop-yetus commented on PR #6116: URL: https://github.com/apache/hadoop/pull/6116#issuecomment-1793352176

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 40s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 54s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 32m 16s | | trunk passed |
| +1 :green_heart: | compile | 2m 32s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 2m 17s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 1m 22s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 20s | | trunk passed |
| +1 :green_heart: | javadoc | 2m 14s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 2m 28s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 4m 17s | | trunk passed |
| +1 :green_heart: | shadedclient | 34m 42s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 32s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 51s | | the patch passed |
| +1 :green_heart: | compile | 2m 36s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 2m 36s | | the patch passed |
| +1 :green_heart: | compile | 2m 22s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | javac | 2m 22s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 15s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 57s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 51s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 45s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 4m 49s | | the patch passed |
| +1 :green_heart: | shadedclient | 35m 17s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 3m 40s | | hadoop-yarn-server-common in the patch passed. |
| +1 :green_heart: | unit | 101m 12s | | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 :green_heart: | unit | 0m 29s | | hadoop-yarn-server-router in the patch passed. |
| +1 :green_heart: | asflicense | 0m 40s | | The patch does not generate ASF License warnings. |
| | | | 266m 8s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6116/7/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6116 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 89a6d66a71c9 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 36c8730a97643f9047a0b4cfb57b617d1f581d4d |
| Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6116/7/testReport/ |
| Max. process+thread count | 967 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U:
Re: [PR] YARN-11595. [BackPort] Fix hadoop-yarn-client#java.lang.NoClassDefFoundError. [hadoop]
hadoop-yetus commented on PR #6253: URL: https://github.com/apache/hadoop/pull/6253#issuecomment-1793346230

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 11m 12s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ branch-3.3 Compile Tests _ |
| +1 :green_heart: | mvninstall | 52m 19s | | branch-3.3 passed |
| +1 :green_heart: | compile | 0m 30s | | branch-3.3 passed |
| +1 :green_heart: | mvnsite | 0m 39s | | branch-3.3 passed |
| +1 :green_heart: | javadoc | 0m 30s | | branch-3.3 passed |
| +1 :green_heart: | shadedclient | 94m 0s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 29s | | the patch passed |
| +1 :green_heart: | compile | 0m 24s | | the patch passed |
| +1 :green_heart: | javac | 0m 24s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 0m 27s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 19s | | the patch passed |
| +1 :green_heart: | shadedclient | 40m 15s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 27m 9s | | hadoop-yarn-client in the patch passed. |
| +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. |
| | | | 179m 9s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6253/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6253 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint |
| uname | Linux 439ae5c050d7 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.3 / 57a07ab4c3bc2022f8dd65549966247ff98822cd |
| Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~18.04-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6253/1/testReport/ |
| Max. process+thread count | 545 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6253/1/console |
| versions | git=2.17.1 maven=3.6.0 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] Update plugin for SBOM generation to 2.7.10 [hadoop]
hadoop-yetus commented on PR #6235: URL: https://github.com/apache/hadoop/pull/6235#issuecomment-1793332711

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 12m 38s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 43m 53s | | trunk passed |
| +1 :green_heart: | compile | 17m 11s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 16m 11s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | mvnsite | 20m 19s | | trunk passed |
| +1 :green_heart: | javadoc | 8m 38s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 7m 30s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | shadedclient | 149m 4s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 30m 26s | | the patch passed |
| +1 :green_heart: | compile | 17m 34s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 17m 34s | | the patch passed |
| +1 :green_heart: | compile | 15m 48s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | javac | 15m 48s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 14m 9s | | the patch passed |
| +1 :green_heart: | javadoc | 8m 57s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 7m 33s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | shadedclient | 67m 0s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 752m 3s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6235/3/artifact/out/patch-unit-root.txt) | root in the patch passed. |
| +1 :green_heart: | asflicense | 1m 47s | | The patch does not generate ASF License warnings. |
| | | | 1051m 41s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestDFSUtil |
| | hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6235/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6235 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint |
| uname | Linux 3e0f0be35d16 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / bb592db46450246d7b63d3954f7e41d95816284a |
| Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6235/3/testReport/ |
| Max. process+thread count | 3078 (vs. ulimit of 5500) |
| modules | C: . U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6235/3/console |
| versions | git=2.25.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
Re: [PR] HDFS-16791 Add getEnclosingRoot API to filesystem interface and all implementations [hadoop]
hadoop-yetus commented on PR #6198: URL: https://github.com/apache/hadoop/pull/6198#issuecomment-1793326233

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 35s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. |
| +0 :ok: | buf | 0m 1s | | buf was not available. |
| +0 :ok: | buf | 0m 1s | | buf was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 11 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 44s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 31m 56s | | trunk passed |
| +1 :green_heart: | compile | 17m 16s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 15m 34s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 4m 35s | | trunk passed |
| +1 :green_heart: | mvnsite | 6m 1s | | trunk passed |
| +1 :green_heart: | javadoc | 4m 46s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 4m 57s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 10m 27s | | trunk passed |
| +1 :green_heart: | shadedclient | 34m 37s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 34s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 35s | | the patch passed |
| +1 :green_heart: | compile | 16m 1s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | cc | 16m 1s | | the patch passed |
| +1 :green_heart: | javac | 16m 1s | | the patch passed |
| +1 :green_heart: | compile | 15m 57s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | cc | 15m 57s | | the patch passed |
| +1 :green_heart: | javac | 15m 57s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 4m 30s | | the patch passed |
| +1 :green_heart: | mvnsite | 5m 51s | | the patch passed |
| +1 :green_heart: | javadoc | 4m 45s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 4m 55s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 11m 11s | | the patch passed |
| +1 :green_heart: | shadedclient | 34m 53s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 19m 22s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 2m 53s | | hadoop-hdfs-client in the patch passed. |
| -1 :x: | unit | 223m 37s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6198/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| -1 :x: | unit | 1m 30s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6198/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch failed. |
| +1 :green_heart: | asflicense | 1m 20s | | The patch does not generate ASF License warnings. |
| | | | 504m 40s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.TestDFSUtil |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6198/7/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6198 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint cc buflint bufcompat |
| uname | Linux 7f7759f19d26 4.15.0-213-generic #224-Ubuntu SMP
Re: [PR] HDFS-17247. Improve AvailableSpaceRackFaultTolerantBlockPlacementPolicy logic [hadoop]
haiyang1987 commented on PR #6245: URL: https://github.com/apache/hadoop/pull/6245#issuecomment-1793323594

The implementation logic in AvailableSpaceBlockPlacementPolicy and AvailableSpaceRackFaultTolerantBlockPlacementPolicy is essentially identical, so a public class called AvailableSpaceBlockPlacementPolicyUtils has been created. This allows any future optimization to be made once in AvailableSpaceBlockPlacementPolicyUtils and picked up by both policies.

Hi @ayushtkn @Hexiaoqiao @ZanderXu @tomscut @zhangshuyan0 Would you mind taking a look at this PR? Thank you very much~
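The refactor described above can be sketched as follows. This is an illustrative, self-contained stand-in, not the PR's actual code: the class names mirror the PR, but the method `preferNodeWithMoreSpace` and its simplified "compare remaining space" body are hypothetical, standing in for the shared chooseTarget logic both policies delegate to.

```java
// Hypothetical sketch: both placement policies delegate their shared
// "prefer the node with more available space" logic to one utility class,
// so a future fix or optimization lands in one place.
final class AvailableSpaceBlockPlacementPolicyUtils {
    private AvailableSpaceBlockPlacementPolicyUtils() { }

    /** Return 0 if candidate A has at least as much remaining space, else 1. */
    static int preferNodeWithMoreSpace(long remainingA, long remainingB) {
        return remainingA >= remainingB ? 0 : 1;
    }
}

class AvailableSpaceBlockPlacementPolicy {
    int choose(long remainingA, long remainingB) {
        return AvailableSpaceBlockPlacementPolicyUtils.preferNodeWithMoreSpace(remainingA, remainingB);
    }
}

class AvailableSpaceRackFaultTolerantBlockPlacementPolicy {
    int choose(long remainingA, long remainingB) {
        // Same shared logic; the rack-fault-tolerance behavior differs elsewhere.
        return AvailableSpaceBlockPlacementPolicyUtils.preferNodeWithMoreSpace(remainingA, remainingB);
    }
}

public class PlacementSketch {
    public static void main(String[] args) {
        // Both policies resolve to the same shared decision.
        System.out.println(new AvailableSpaceBlockPlacementPolicy().choose(100, 50));
        System.out.println(new AvailableSpaceRackFaultTolerantBlockPlacementPolicy().choose(10, 50));
    }
}
```

The point of the design is that a behavioral change made in the utility class is automatically shared by both policies, which is exactly the maintainability argument the comment makes.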
Re: [PR] MAPREDUCE-7461: Fixed assertionComparision failure by resolving xml path for 'name' [hadoop]
hadoop-yetus commented on PR #6252: URL: https://github.com/apache/hadoop/pull/6252#issuecomment-1793322663

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 17m 22s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 56s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 33m 49s | | trunk passed |
| +1 :green_heart: | compile | 1m 46s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 1m 36s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 1m 14s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 16s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 16s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 8s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 1m 56s | | trunk passed |
| +1 :green_heart: | shadedclient | 33m 9s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 32s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 0m 52s | | the patch passed |
| +1 :green_heart: | compile | 1m 36s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 1m 36s | | the patch passed |
| +1 :green_heart: | compile | 1m 27s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | javac | 1m 27s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 1s | [/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6252/1/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client: The patch generated 4 new + 3 unchanged - 2 fixed = 7 total (was 5) |
| +1 :green_heart: | mvnsite | 0m 57s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 52s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 50s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 1m 56s | | the patch passed |
| +1 :green_heart: | shadedclient | 33m 46s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 8m 40s | | hadoop-mapreduce-client-app in the patch passed. |
| +1 :green_heart: | unit | 4m 44s | | hadoop-mapreduce-client-hs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. |
| | | | 172m 58s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6252/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6252 |
| JIRA Issue | MAPREDUCE-7461 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 594f3c272237 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / c132a535e0246f1998726333cc18a373b8256b0f |
| Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6252/1/testReport/ |
| Max. process+thread count | 739 (vs. ulimit of 5500) |
| modules | C:
Re: [PR] YARN-11483. [Federation] Router AdminCLI Supports Clean Finish Apps. [hadoop]
hadoop-yetus commented on PR #6251: URL: https://github.com/apache/hadoop/pull/6251#issuecomment-1793315021

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 55s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 48m 34s | | trunk passed |
| +1 :green_heart: | compile | 0m 31s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 29s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | checkstyle | 0m 28s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 33s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 0m 55s | | trunk passed |
| +1 :green_heart: | shadedclient | 37m 55s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 24s | | the patch passed |
| +1 :green_heart: | compile | 0m 24s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 24s | | the patch passed |
| +1 :green_heart: | compile | 0m 22s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | javac | 0m 22s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 17s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6251/1/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 2 new + 1 unchanged - 0 fixed = 3 total (was 1) |
| +1 :green_heart: | mvnsite | 0m 25s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| +1 :green_heart: | spotbugs | 0m 55s | | the patch passed |
| +1 :green_heart: | shadedclient | 38m 10s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 28m 3s | [/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6251/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt) | hadoop-yarn-client in the patch passed. |
| +1 :green_heart: | asflicense | 0m 38s | | The patch does not generate ASF License warnings. |
| | | | 166m 34s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.yarn.client.cli.TestRouterCLI |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6251/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6251 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 8804801febfd 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / c9e5a2b9b4e2853b75d63581d93f306e4b1caf77 |
| Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
| Test Results |
[jira] [Commented] (HADOOP-18359) Update commons-cli from 1.2 to 1.5.
[ https://issues.apache.org/jira/browse/HADOOP-18359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17782812#comment-17782812 ]

ASF GitHub Bot commented on HADOOP-18359:
-----------------------------------------

slfan1989 commented on PR #6248: URL: https://github.com/apache/hadoop/pull/6248#issuecomment-1793308490

I've read the error in 'patch-unit-root.txt', which we previously encountered on the trunk branch. I will backport YARN-11595 to resolve this issue. We can check the following link: [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6248/1/artifact/out/patch-unit-root.txt)

```
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M1:test (default-test) on project hadoop-yarn-client: Execution default-test of goal org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M1:test failed: java.lang.NoClassDefFoundError: org/junit/jupiter/api/TestInfo: org.junit.jupiter.api.TestInfo -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn -rf :hadoop-yarn-client
```

> Update commons-cli from 1.2 to 1.5.
>
> Key: HADOOP-18359
> URL: https://issues.apache.org/jira/browse/HADOOP-18359
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common
> Affects Versions: 3.4.0
> Reporter: Shilun Fan
> Assignee: Shilun Fan
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.4.0

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
Re: [PR] HADOOP-18359. Update commons-cli from 1.2 to 1.5. (#5095). [hadoop]
slfan1989 commented on PR #6248: URL: https://github.com/apache/hadoop/pull/6248#issuecomment-1793308490

I've read the error in 'patch-unit-root.txt', which we previously encountered on the trunk branch. I will backport YARN-11595 to resolve this issue. We can check the following link: [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6248/1/artifact/out/patch-unit-root.txt)

```
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M1:test (default-test) on project hadoop-yarn-client: Execution default-test of goal org.apache.maven.plugins:maven-surefire-plugin:3.0.0-M1:test failed: java.lang.NoClassDefFoundError: org/junit/jupiter/api/TestInfo: org.junit.jupiter.api.TestInfo -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn -rf :hadoop-yarn-client
```
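For context, the class that Surefire cannot load, `org.junit.jupiter.api.TestInfo`, ships in the `junit-jupiter-api` artifact, so a fix of this kind generally boils down to putting that artifact on the module's test classpath. The snippet below is only an illustrative sketch of such a `pom.xml` change, not the actual YARN-11595 patch; the version is assumed to be managed by the parent POM's `dependencyManagement`.

```xml
<!-- Illustrative sketch: declare the JUnit 5 API artifact that provides
     org.junit.jupiter.api.TestInfo on the test classpath. -->
<dependency>
  <groupId>org.junit.jupiter</groupId>
  <artifactId>junit-jupiter-api</artifactId>
  <scope>test</scope>
</dependency>
```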
[PR] YARN-11595. [BackPort] Fix hadoop-yarn-client#java.lang.NoClassDefFoundError. [hadoop]
slfan1989 opened a new pull request, #6253: URL: https://github.com/apache/hadoop/pull/6253

### Description of PR

### How was this patch tested?

### For code changes:

- [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
Re: [PR] HADOOP-18947. Fixed test flakiness during string comparision [hadoop]
ayushtkn merged PR #6215: URL: https://github.com/apache/hadoop/pull/6215 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] YARN-11484. [Federation] Router Supports Yarn Client CLI Cmds. [hadoop]
hadoop-yetus commented on PR #6132: URL: https://github.com/apache/hadoop/pull/6132#issuecomment-1793275656 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 50s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | buf | 0m 1s | | buf was not available. | | +0 :ok: | buf | 0m 1s | | buf was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 17s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 35m 59s | | trunk passed | | +1 :green_heart: | compile | 18m 19s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 16m 50s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | checkstyle | 4m 41s | | trunk passed | | +1 :green_heart: | mvnsite | 5m 11s | | trunk passed | | +1 :green_heart: | javadoc | 4m 56s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 4m 35s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | spotbugs | 9m 11s | | trunk passed | | +1 :green_heart: | shadedclient | 39m 11s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 16s | | the patch passed | | -1 :x: | compile | 4m 28s | [/patch-compile-root-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6132/9/artifact/out/patch-compile-root-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt) | root in the patch failed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04. | | -1 :x: | cc | 4m 28s | [/patch-compile-root-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6132/9/artifact/out/patch-compile-root-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt) | root in the patch failed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04. | | -1 :x: | javac | 4m 28s | [/patch-compile-root-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6132/9/artifact/out/patch-compile-root-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt) | root in the patch failed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04. | | +1 :green_heart: | compile | 17m 48s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | cc | 17m 48s | | the patch passed | | +1 :green_heart: | javac | 17m 48s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 4m 30s | | the patch passed | | +1 :green_heart: | mvnsite | 5m 10s | | the patch passed | | +1 :green_heart: | javadoc | 4m 47s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 47s | | hadoop-yarn-api in the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05. | | +1 :green_heart: | javadoc | 0m 58s | | hadoop-yarn-common in the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05. 
| | +1 :green_heart: | javadoc | 0m 58s | | hadoop-yarn-server-resourcemanager in the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05. | | +1 :green_heart: | javadoc | 0m 38s | | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkPrivateBuild-1.8.0_382-8u382-ga-1~20.04.1-b05 with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 generated 0 new + 163 unchanged - 1 fixed = 163 total (was 164) | | +1 :green_heart: | javadoc | 0m 37s | | hadoop-mapreduce-client-jobclient in the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05. | | +1 :green_heart: | javadoc | 0m 39s | | hadoop-yarn-server-router in the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05. | | +1 :green_heart: | spotbugs | 10m 17s | | the patch passed | | +1 :green_heart: |
[PR] MAPREDUCE-7461: Fixed assertionComparision failure by resolving xml path for 'name' [hadoop]
kavvya97 opened a new pull request, #6252: URL: https://github.com/apache/hadoop/pull/6252 **Setup:** Java version: openjdk 11.0.20.1 Maven version: Apache Maven 3.6.3 ### **Issue**: https://issues.apache.org/jira/browse/MAPREDUCE-7461 ### Description of PR The following tests can fail due to flakiness while comparing the contents of the generated XML response. **Module**: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app `org.apache.hadoop.mapreduce.v2.app.webapp.TestAMWebServicesJobs#testJobIdXML` `org.apache.hadoop.mapreduce.v2.app.webapp.TestAMWebServicesJobs#testJobsXML` **Module**: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs `org.apache.hadoop.mapreduce.v2.hs.webapp.TestHsWebServicesJobs#testJobIdXML` ### Steps to reproduce 1. `git clone https://github.com/apache/hadoop` 2. `mvn install -pl hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app -am -DskipTests` 3. Run the tests `mvn -pl hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app test -Dtest=org.apache.hadoop.mapreduce.v2.app.webapp.TestAMWebServicesJobs#testJobIdXML` 4. Run the test with the NonDex tool and observe the test results `mvn -pl hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app edu.illinois:nondex-maven-plugin:2.1.1:nondex -Dtest=org.apache.hadoop.mapreduce.v2.app.webapp.TestAMWebServicesJobs#testJobIdXML` - The test fails when running NonDex in ONE mode (`-DnondexMode=ONE`, which assumes a deterministic implementation and shuffles once) and in FULL mode (`-DnondexMode=FULL`, which shuffles differently for each call) ### Root Cause The test sends an HTTP GET request to a specific URL and expects a response in XML format. However, the order of elements in the XML response is not guaranteed. 
The contents of the XML and the tags are compared with the Job contents from `appContext` based on [job Id](https://github.com/kavvya97/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesJobs.java#L493-L494) in [verifyAMJobXML](https://github.com/kavvya97/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesJobs.java#L486) / [verifyHsJobXML](https://github.com/kavvya97/hadoop/blob/9c621fcea72a988c930ef614a7c22de00d0c7d21/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesJobs.java#L232). When comparing the XML contents, the `name` tag occurs in multiple places, nested inside other fields, and because of the non-deterministic order the root-level element is not always the one compared. When the `name` tag is being compared, the test utilizes the [WebServicesTestUtils.java getXmlString](https://github.com/kavvya97/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/WebServicesTestUtils.java#L78) method to retrieve the name from the XML content. However, it always takes the first matching tag, irrespective of whether it is nested or at the root, which causes the test to fail and become flaky. Since the XML contents are not ordered, the following errors occur ``` [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.377 s <<< FAILURE! - in org.apache.hadoop.mapreduce.v2.app.webapp.TestAMWebServicesJobs [ERROR] testJobIdXML(org.apache.hadoop.mapreduce.v2.app.webapp.TestAMWebServicesJobs) Time elapsed: 8.361 s <<< FAILURE! 
java.lang.AssertionError: [name] Expecting: "mapreduce.job.acl-view-job" to match pattern: "RandomWriter" ``` ### Fix Since [WebServicesTestUtils.java getXmlString](https://github.com/kavvya97/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/webapp/WebServicesTestUtils.java#L78) retrieves the first matching tag only, which is not necessarily the root tag, the fix uses XPath to resolve the ambiguity by identifying the root-level tag. The test then passes, since the root tag is always retrieved irrespective of XML order. ### How was this patch tested? The fix was verified by running the NonDex plugin again and ensuring that all the tests pass in both FULL mode and ONE mode. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
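The root cause and fix described above can be sketched with a minimal, self-contained example. The XML layout below is a simplified stand-in for the real job response, and the method names are hypothetical, not the actual test utilities: a document-order lookup picks up a nested `name` element when shuffling moves it ahead of the root-level one, while an XPath anchored at the root is order-independent:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class XmlNameLookup {
    // Fragile: returns the first <name> in document order, which may be nested.
    static String firstTag(Document doc, String tag) {
        return doc.getElementsByTagName(tag).item(0).getTextContent();
    }

    // Robust: an XPath anchored at the root always hits the top-level <name>.
    static String rootTag(Document doc, String path) throws Exception {
        return XPathFactory.newInstance().newXPath().evaluate(path, doc);
    }

    public static void main(String[] args) throws Exception {
        // A nested <property><name> happens to precede the job-level <name>.
        String xml = "<job><conf><property><name>mapreduce.job.acl-view-job</name>"
            + "</property></conf><name>RandomWriter</name></job>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        System.out.println(firstTag(doc, "name"));     // the nested property name
        System.out.println(rootTag(doc, "/job/name")); // the job-level name
    }
}
```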
[PR] YARN-11483. [Federation] Router AdminCLI Supports Clean Finish Apps. [hadoop]
slfan1989 opened a new pull request, #6251: URL: https://github.com/apache/hadoop/pull/6251 ### Description of PR JIRA:YARN-11483. [Federation] Router AdminCLI Supports Clean Finish Apps. ### How was this patch tested? ### For code changes: - [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18958) UserGroupInformation debug log improve
[ https://issues.apache.org/jira/browse/HADOOP-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HADOOP-18958: -- Fix Version/s: (was: 3.3.4) > UserGroupInformation debug log improve > -- > > Key: HADOOP-18958 > URL: https://issues.apache.org/jira/browse/HADOOP-18958 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.3.5, 3.3.3, 3.3.4 >Reporter: wangzhihui >Priority: Minor > Labels: pull-request-available > Attachments: 20231029-122825-1.jpeg, 20231029-122825.jpeg, > 20231030-143525.jpeg, image-2023-10-29-09-47-56-489.png, > image-2023-10-30-14-35-11-161.png > > Original Estimate: 1h > Remaining Estimate: 1h > > Using “new Exception()” to print the call stack of the doAs method in > the UserGroupInformation class prints a meaningless > Exception header and too many stack frames, which is not conducive to > troubleshooting > *example:* > !20231029-122825.jpeg|width=991,height=548! > > *improved result* : > > !image-2023-10-29-09-47-56-489.png|width=1099,height=156! > !20231030-143525.jpeg|width=572,height=674! -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
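The complaint in the issue can be reproduced with a small, self-contained sketch (the message text is illustrative and this is not the actual UserGroupInformation code): a throwaway `new Exception()` prints a misleading `java.lang.Exception` header plus the full stack, whereas formatting a bounded number of frames from `Thread.currentThread().getStackTrace()` keeps the debug log focused:

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class StackLogDemo {
    // Noisy pattern: a throwaway Exception carries a misleading header line
    // plus the entire call stack.
    static String viaException() {
        StringWriter sw = new StringWriter();
        new Exception("PrivilegedAction [as: hdfs]").printStackTrace(new PrintWriter(sw, true));
        return sw.toString();
    }

    // Quieter pattern: format only the top few frames, with no exception header.
    static String viaStackTraceFrames(int depth) {
        StringBuilder sb = new StringBuilder();
        StackTraceElement[] frames = Thread.currentThread().getStackTrace();
        // frames[0] is getStackTrace itself; skip it.
        for (int i = 1; i < frames.length && i <= depth; i++) {
            sb.append("  at ").append(frames[i]).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(viaException().split("\\R")[0]);        // misleading header line
        System.out.println(viaStackTraceFrames(2).split("\\R").length); // bounded frame count
    }
}
```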
[jira] [Commented] (HADOOP-18958) UserGroupInformation debug log improve
[ https://issues.apache.org/jira/browse/HADOOP-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782794#comment-17782794 ] ASF GitHub Bot commented on HADOOP-18958: - ayushtkn closed pull request #6234: HADOOP-18958. UserGroupInformation debug log improve. URL: https://github.com/apache/hadoop/pull/6234 > UserGroupInformation debug log improve > -- > > Key: HADOOP-18958 > URL: https://issues.apache.org/jira/browse/HADOOP-18958 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.3.5, 3.3.3, 3.3.4 >Reporter: wangzhihui >Priority: Minor > Labels: pull-request-available > Fix For: 3.3.4 > > Attachments: 20231029-122825-1.jpeg, 20231029-122825.jpeg, > 20231030-143525.jpeg, image-2023-10-29-09-47-56-489.png, > image-2023-10-30-14-35-11-161.png > > Original Estimate: 1h > Remaining Estimate: 1h > > Using “new Exception()” to print the call stack of the doAs method in > the UserGroupInformation class prints a meaningless > Exception header and too many stack frames, which is not conducive to > troubleshooting > *example:* > !20231029-122825.jpeg|width=991,height=548! > > *improved result* : > > !image-2023-10-29-09-47-56-489.png|width=1099,height=156! > !20231030-143525.jpeg|width=572,height=674! -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18958) UserGroupInformation debug log improve
[ https://issues.apache.org/jira/browse/HADOOP-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782795#comment-17782795 ] ASF GitHub Bot commented on HADOOP-18958: - ayushtkn commented on PR #6234: URL: https://github.com/apache/hadoop/pull/6234#issuecomment-1793254799 The PR should be raised against trunk; 3.3.4 is already released and can't be modified > UserGroupInformation debug log improve > -- > > Key: HADOOP-18958 > URL: https://issues.apache.org/jira/browse/HADOOP-18958 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.3.5, 3.3.3, 3.3.4 >Reporter: wangzhihui >Priority: Minor > Labels: pull-request-available > Fix For: 3.3.4 > > Attachments: 20231029-122825-1.jpeg, 20231029-122825.jpeg, > 20231030-143525.jpeg, image-2023-10-29-09-47-56-489.png, > image-2023-10-30-14-35-11-161.png > > Original Estimate: 1h > Remaining Estimate: 1h > > Using “new Exception()” to print the call stack of the doAs method in > the UserGroupInformation class prints a meaningless > Exception header and too many stack frames, which is not conducive to > troubleshooting > *example:* > !20231029-122825.jpeg|width=991,height=548! > > *improved result* : > > !image-2023-10-29-09-47-56-489.png|width=1099,height=156! > !20231030-143525.jpeg|width=572,height=674! -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-18958. UserGroupInformation debug log improve. [hadoop]
ayushtkn commented on PR #6234: URL: https://github.com/apache/hadoop/pull/6234#issuecomment-1793254799 The PR should be raised against trunk; 3.3.4 is already released and can't be modified -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-18958. UserGroupInformation debug log improve. [hadoop]
ayushtkn closed pull request #6234: HADOOP-18958. UserGroupInformation debug log improve. URL: https://github.com/apache/hadoop/pull/6234 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-18963) Fix typos in .gitignore
[ https://issues.apache.org/jira/browse/HADOOP-18963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena resolved HADOOP-18963. --- Fix Version/s: 3.4.0 (was: 3.3.6) Hadoop Flags: Reviewed Resolution: Fixed > Fix typos in .gitignore > --- > > Key: HADOOP-18963 > URL: https://issues.apache.org/jira/browse/HADOOP-18963 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.3.6 >Reporter: 袁焊忠 >Assignee: 袁焊忠 >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > > .DS_Store files are auto-generated by macOS in every opened folder; they are useless > but annoying. Not only the .DS_Store file in the repository root directory should > be ignored, but also the .DS_Store files in its subfolders. > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
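For reference, gitignore pattern semantics already cover both cases with one line: a pattern containing no slash matches at any depth, while a leading slash anchors it to the repository root. This is a general illustration of the syntax, not necessarily the exact change made in PR #6243:

```gitignore
# Matches .DS_Store in the root and in every subfolder (no slash = any depth)
.DS_Store

# Equivalent explicit form
**/.DS_Store

# Root-only variant, for contrast (leading slash anchors to the repo root)
/.DS_Store
```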
[jira] [Updated] (HADOOP-18963) Fix typos in .gitignore
[ https://issues.apache.org/jira/browse/HADOOP-18963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HADOOP-18963: -- Summary: Fix typos in .gitignore (was: Fix typos in .gitignore #6243) > Fix typos in .gitignore > --- > > Key: HADOOP-18963 > URL: https://issues.apache.org/jira/browse/HADOOP-18963 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.3.6 >Reporter: 袁焊忠 >Assignee: 袁焊忠 >Priority: Major > Labels: pull-request-available > Fix For: 3.3.6 > > > .DS_Store files are auto-generated by macOS in every opened folder; they are useless > but annoying. Not only the .DS_Store file in the repository root directory should > be ignored, but also the .DS_Store files in its subfolders. > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-18963) Fix typos in .gitignore #6243
[ https://issues.apache.org/jira/browse/HADOOP-18963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena reassigned HADOOP-18963: - Assignee: 袁焊忠 > Fix typos in .gitignore #6243 > - > > Key: HADOOP-18963 > URL: https://issues.apache.org/jira/browse/HADOOP-18963 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.3.6 >Reporter: 袁焊忠 >Assignee: 袁焊忠 >Priority: Major > Labels: pull-request-available > Fix For: 3.3.6 > > > .DS_Store files are auto-generated by macOS in every opened folder; they are useless > but annoying. Not only the .DS_Store file in the repository root directory should > be ignored, but also the .DS_Store files in its subfolders. > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18963) Fix typos in .gitignore
[ https://issues.apache.org/jira/browse/HADOOP-18963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782791#comment-17782791 ] Ayush Saxena commented on HADOOP-18963: --- Committed to trunk. Thanx [~yuanhanzhong666] for the contribution!!! > Fix typos in .gitignore > --- > > Key: HADOOP-18963 > URL: https://issues.apache.org/jira/browse/HADOOP-18963 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.3.6 >Reporter: 袁焊忠 >Assignee: 袁焊忠 >Priority: Major > Labels: pull-request-available > Fix For: 3.3.6 > > > .DS_Store files are auto-generated by macOS in every opened folder; they are useless > but annoying. Not only the .DS_Store file in the repository root directory should > be ignored, but also the .DS_Store files in its subfolders. > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18963) Fix typos in .gitignore
[ https://issues.apache.org/jira/browse/HADOOP-18963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782790#comment-17782790 ] ASF GitHub Bot commented on HADOOP-18963: - ayushtkn merged PR #6243: URL: https://github.com/apache/hadoop/pull/6243 > Fix typos in .gitignore > --- > > Key: HADOOP-18963 > URL: https://issues.apache.org/jira/browse/HADOOP-18963 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.3.6 >Reporter: 袁焊忠 >Assignee: 袁焊忠 >Priority: Major > Labels: pull-request-available > Fix For: 3.3.6 > > > .DS_Store files are auto-generated by macOS in every opened folder; they are useless > but annoying. Not only the .DS_Store file in the repository root directory should > be ignored, but also the .DS_Store files in its subfolders. > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-18963. Fix typos in .gitignore [hadoop]
ayushtkn merged PR #6243: URL: https://github.com/apache/hadoop/pull/6243 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] YARN-11608. Fix QueueCapacityVectorInfo NPE when accessible labels config is used. [hadoop]
hadoop-yetus commented on PR #6250: URL: https://github.com/apache/hadoop/pull/6250#issuecomment-1793157663 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 28s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | jsonlint | 0m 1s | | jsonlint was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 6 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 17s | | trunk passed | | +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | checkstyle | 0m 38s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 41s | | trunk passed | | +1 :green_heart: | javadoc | 0m 45s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 38s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | spotbugs | 1m 18s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 6s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 31s | | the patch passed | | +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 35s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 27s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 31s | | the patch passed | | +1 :green_heart: | javadoc | 0m 32s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 29s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | spotbugs | 1m 15s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 47s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 85m 46s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | asflicense | 0m 29s | | The patch does not generate ASF License warnings. 
| | | | 173m 19s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6250/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6250 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets jsonlint xmllint | | uname | Linux 058e3b88fff7 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / ffa239be8d34b40d647dd6a6c01cf6319a496dec | | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6250/1/testReport/ | | Max. process+thread count | 949 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6250/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is
Re: [PR] YARN-11548. [Federation] Router Supports Format FederationStateStore. [hadoop]
hadoop-yetus commented on PR #6116: URL: https://github.com/apache/hadoop/pull/6116#issuecomment-1793064538 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 41s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 35s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 35m 45s | | trunk passed | | +1 :green_heart: | compile | 2m 50s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 2m 34s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | checkstyle | 1m 27s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 24s | | trunk passed | | +1 :green_heart: | javadoc | 2m 18s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 4s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | spotbugs | 4m 41s | | trunk passed | | +1 :green_heart: | shadedclient | 36m 54s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 31s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 48s | | the patch passed | | +1 :green_heart: | compile | 2m 25s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 2m 25s | | the patch passed | | +1 :green_heart: | compile | 2m 19s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | javac | 2m 19s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 12s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 54s | | the patch passed | | +1 :green_heart: | javadoc | 1m 46s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 40s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | spotbugs | 4m 39s | | the patch passed | | +1 :green_heart: | shadedclient | 36m 27s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 3m 34s | [/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6116/6/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt) | hadoop-yarn-server-common in the patch passed. | | +1 :green_heart: | unit | 101m 41s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | unit | 0m 32s | | hadoop-yarn-server-router in the patch passed. | | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. 
| | | | 273m 28s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.yarn.server.federation.store.impl.TestZookeeperFederationStateStore | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6116/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6116 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 950e29b13e9a 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / c5d5234edc3cea47de461bf85ab5241e85beffeb | | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Test Results |
Re: [PR] HDFS-16791 Add getEnclosingRoot API to filesystem interface and all implementations [hadoop]
mccormickt12 commented on code in PR #6198: URL: https://github.com/apache/hadoop/pull/6198#discussion_r1382114831 ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java: ## @@ -1940,7 +1940,8 @@ public Path getEnclosingRoot(Path path) throws IOException { try { res = fsState.resolve((path.toString()), true); } catch (FileNotFoundException ex) { -throw new NotInMountpointException(path, String.format("getEnclosingRoot - %s", ex.getMessage())); +throw new NotInMountpointException(path, +String.format("getEnclosingRoot - %s", ex.getMessage())); Review Comment: Added, thanks! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
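The resolution the diff above wraps can be pictured with a toy mount table: getEnclosingRoot returns the deepest mount point that prefixes the path, and a resolution miss is what triggers the NotInMountpointException. The sketch below is a simplified stand-in for ViewFileSystem's mount-table resolution, not the actual InodeTree implementation; the miss case is modeled as a null return.

```java
// Toy illustration of enclosing-root resolution over a viewfs-style mount
// table: pick the longest mount point that is a prefix of the path.
// Simplified stand-in only; the real ViewFileSystem code throws
// NotInMountpointException where this sketch returns null.
import java.util.List;

class EnclosingRoot {
    static String enclosingRoot(List<String> mountPoints, String path) {
        String best = null;
        for (String mp : mountPoints) {
            String prefix = mp.endsWith("/") ? mp : mp + "/";
            // A mount point encloses the path if it equals it or prefixes it
            // at a path-component boundary; keep the deepest such match.
            if ((path.equals(mp) || path.startsWith(prefix))
                    && (best == null || mp.length() > best.length())) {
                best = mp;
            }
        }
        return best; // null: path is outside every mount point
    }
}
```

For example, with mount points `/data` and `/data/warehouse`, a path under `/data/warehouse` resolves to the deeper mount point.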
[PR] YARN-11608. Fix QueueCapacityVectorInfo NPE when accessible labels config is used. [hadoop]
brumi1024 opened a new pull request, #6250: URL: https://github.com/apache/hadoop/pull/6250 ### Description of PR Added a null check to avoid the NPE when accessible labels config is used. ### How was this patch tested? Unit test + brought up a cluster. ### For code changes: - [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
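A minimal sketch of the kind of guard the description refers to: when accessible node labels are configured, a queue may have no capacity vector for a given label, so look-ups should fall back to a default rather than dereference null. The class and member names here are illustrative, not the actual QueueCapacityVectorInfo API.

```java
// Hypothetical null-safe capacity-vector lookup. Names are illustrative;
// this is not the real YARN capacity-scheduler code.
import java.util.HashMap;
import java.util.Map;

class CapacityVectorGuard {
    static final String EMPTY_VECTOR = "[]";
    private final Map<String, String> vectorsByLabel = new HashMap<>();

    void put(String label, String vector) {
        vectorsByLabel.put(label, vector);
    }

    // Null-safe lookup: serialize a default empty vector instead of
    // hitting an NPE when a label has no configured capacity vector.
    String vectorFor(String label) {
        String v = vectorsByLabel.get(label);
        return (v == null) ? EMPTY_VECTOR : v;
    }
}
```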
[jira] [Commented] (HADOOP-18359) Update commons-cli from 1.2 to 1.5.
[ https://issues.apache.org/jira/browse/HADOOP-18359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782700#comment-17782700 ] ASF GitHub Bot commented on HADOOP-18359: - hadoop-yetus commented on PR #6248: URL: https://github.com/apache/hadoop/pull/6248#issuecomment-1792948807 :broken_heart: **-1 overall**
Re: [PR] HADOOP-18359. Update commons-cli from 1.2 to 1.5. (#5095). [hadoop]
hadoop-yetus commented on PR #6248: URL: https://github.com/apache/hadoop/pull/6248#issuecomment-1792948807 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 6m 54s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 6 new or modified test files. | _ branch-3.3 Compile Tests _ | | +0 :ok: | mvndep | 13m 53s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 35m 19s | | branch-3.3 passed | | +1 :green_heart: | compile | 18m 47s | | branch-3.3 passed | | +1 :green_heart: | checkstyle | 2m 55s | | branch-3.3 passed | | +1 :green_heart: | mvnsite | 26m 9s | | branch-3.3 passed | | +1 :green_heart: | javadoc | 7m 21s | | branch-3.3 passed | | +0 :ok: | spotbugs | 0m 22s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 72m 24s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 36s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 41m 0s | | the patch passed | | +1 :green_heart: | compile | 18m 22s | | the patch passed | | -1 :x: | javac | 18m 22s | [/results-compile-javac-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6248/1/artifact/out/results-compile-javac-root.txt) | root generated 105 new + 1806 unchanged - 1 fixed = 1911 total (was 1807) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 2m 52s | | root: The patch generated 0 new + 367 unchanged - 26 fixed = 367 total (was 393) | | +1 :green_heart: | mvnsite | 23m 0s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | javadoc | 7m 14s | | the patch passed | | +0 :ok: | spotbugs | 0m 23s | | hadoop-project has no data from spotbugs | | +1 :green_heart: | shadedclient | 70m 41s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 418m 23s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6248/1/artifact/out/patch-unit-root.txt) | root in the patch failed. | | +1 :green_heart: | asflicense | 1m 27s | | The patch does not generate ASF License warnings. | | | | 802m 11s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin | | | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes | | | hadoop.hdfs.server.namenode.TestNameNodeMXBean | | | hadoop.hdfs.TestDFSClientExcludedNodes | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6248/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6248 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint shellcheck shelldocs | | uname | Linux 7e016e152000 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-3.3 / f9319df4825aaa20dc9ed5e0b3549b57234fc84e | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~18.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6248/1/testReport/ | | Max. process+thread count | 3142 (vs. 
ulimit of 5500) | | modules | C: hadoop-project hadoop-common-project/hadoop-common hadoop-common-project/hadoop-registry hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-uploader hadoop-tools/hadoop-streaming hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-workload
[jira] [Commented] (HADOOP-17377) ABFS: MsiTokenProvider doesn't retry HTTP 429 from the Instance Metadata Service
[ https://issues.apache.org/jira/browse/HADOOP-17377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782689#comment-17782689 ] Agnes Tevesz commented on HADOOP-17377: --- [~ste...@apache.org] [~brandonvin] Can you help move this change forward? Who should be the owner of this task? The ticket is not assigned to anyone and there has been no activity on the change since the end of August. This fix should land in Hadoop. Pod identity in Azure has been deprecated: [https://github.com/Azure/aad-pod-identity] If we get the token directly from the instance metadata service we hit this HTTP 429 issue with TPC-DS tests very frequently: [https://azure.github.io/azure-workload-identity/docs/] The pod identity component most likely provided the retry logic before, but we cannot install deprecated components on an AKS cluster. Can this change get finished? > ABFS: MsiTokenProvider doesn't retry HTTP 429 from the Instance Metadata > Service > > > Key: HADOOP-17377 > URL: https://issues.apache.org/jira/browse/HADOOP-17377 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Affects Versions: 3.2.1 >Reporter: Brandon >Priority: Major > Labels: pull-request-available > > *Summary* > The instance metadata service has its own guidance for error handling and > retry which are different from the Blob store. > [https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token#error-handling] > In particular, it responds with HTTP 429 if request rate is too high. Whereas > Blob store will respond with HTTP 503. The retry policy used only accounts > for the latter as it will retry any status >=500. This can result in job > instability when running multiple processes on the same host. 
> *Environment* > * Spark talking to an ABFS store > * Hadoop 3.2.1 > * Running on an Azure VM with user-assigned identity, ABFS configured to use > MsiTokenProvider > * 6 executor processes on each VM > *Example* > Here's an example error message and stack trace. It's always the same stack > trace. This appears in logs a few hundred to low thousands of times a day. > It's luckily skating by since the download operation is wrapped in 3 retries. > {noformat} > AADToken: HTTP connection failed for getting token from AzureAD. Http > response: 429 null > Content-Type: application/json; charset=utf-8 Content-Length: 90 Request ID: > Proxies: none > First 1K of Body: {"error":"invalid_request","error_description":"Temporarily > throttled, too many requests"} > at > org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(AbfsRestOperation.java:190) > at > org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:125) > at > org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:506) > at > org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:489) > at > org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getIsNamespaceEnabled(AzureBlobFileSystemStore.java:208) > at > org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getFileStatus(AzureBlobFileSystemStore.java:473) > at > org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getFileStatus(AzureBlobFileSystem.java:437) > at org.apache.hadoop.fs.FileSystem.isFile(FileSystem.java:1717) > at org.apache.spark.util.Utils$.fetchHcfsFile(Utils.scala:747) > at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:724) > at org.apache.spark.util.Utils$.fetchFile(Utils.scala:496) > at > org.apache.spark.executor.Executor.$anonfun$updateDependencies$7(Executor.scala:812) > at > org.apache.spark.executor.Executor.$anonfun$updateDependencies$7$adapted(Executor.scala:803) > at > 
scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:792) > at > scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149) > at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237) > at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230) > at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44) > at scala.collection.mutable.HashMap.foreach(HashMap.scala:149) > at > scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:791) > at > org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$updateDependencies(Executor.scala:803) > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:375) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at >
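The report above boils down to the retry predicate: the policy retries any status >= 500, so the instance metadata service's HTTP 429 throttling falls through unretried. A hedged sketch of a predicate that also covers 429, with capped exponential backoff, follows; the class and method names are illustrative, not the actual ABFS retry-policy API in org.apache.hadoop.fs.azurebfs.

```java
// Illustrative retry predicate that treats IMDS throttling (HTTP 429) like
// a transient server error, plus capped exponential backoff. Names are
// hypothetical; this is not the real ABFS driver code.
class ThrottleAwareRetryPolicy {
    private final int maxRetries;
    private final long baseDelayMs;
    private final long maxDelayMs;

    ThrottleAwareRetryPolicy(int maxRetries, long baseDelayMs, long maxDelayMs) {
        this.maxRetries = maxRetries;
        this.baseDelayMs = baseDelayMs;
        this.maxDelayMs = maxDelayMs;
    }

    // Retry on 429 (Too Many Requests) as well as any 5xx status.
    boolean shouldRetry(int retryCount, int statusCode) {
        if (retryCount >= maxRetries) {
            return false;
        }
        return statusCode == 429 || statusCode >= 500;
    }

    // Exponential backoff: baseDelayMs * 2^retryCount, capped at maxDelayMs.
    long delayMs(int retryCount) {
        long delay = baseDelayMs << Math.min(retryCount, 20);
        return Math.min(delay, maxDelayMs);
    }
}
```

With a base delay of 500 ms this yields 500, 1000, 2000 ms across the first three retries, capped at the configured maximum, which matches the general backoff guidance for the metadata endpoint.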
Re: [PR] HDFS-17249. TestDFSUtil.testIsValidName() run failure [hadoop]
hadoop-yetus commented on PR #6249: URL: https://github.com/apache/hadoop/pull/6249#issuecomment-1792905854 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 28s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 13m 52s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 49s | | trunk passed | | +1 :green_heart: | compile | 3m 13s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 3m 5s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | checkstyle | 0m 53s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 40s | | trunk passed | | +1 :green_heart: | javadoc | 1m 25s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 46s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | spotbugs | 3m 29s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 33s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 15s | | the patch passed | | +1 :green_heart: | compile | 3m 7s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 3m 7s | | the patch passed | | +1 :green_heart: | compile | 2m 58s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | javac | 2m 58s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 42s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 20s | | the patch passed | | +1 :green_heart: | javadoc | 1m 8s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 29s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | spotbugs | 3m 27s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 56s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 3s | | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | unit | 199m 52s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 37s | | The patch does not generate ASF License warnings. 
| | | | 315m 16s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6249/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6249 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux c6e98e525e7d 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 5a5ede8be1dcfc73bf4f55c1e012f547f33b1c86 | | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6249/2/testReport/ | | Max. process+thread count | 3744 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6249/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated
Re: [PR] HDFS-17249. TestDFSUtil.testIsValidName() run failure [hadoop]
LiuGuH commented on PR #6249: URL: https://github.com/apache/hadoop/pull/6249#issuecomment-1792617907 @GauthamBanasandra I created a new JIRA for this. If the title does not match, please give me a suggestion. Thanks.
Re: [PR] HADOOP-18954. Filter NaN values from JMX json interface [hadoop]
hadoop-yetus commented on PR #6229: URL: https://github.com/apache/hadoop/pull/6229#issuecomment-1792605958 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 48s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 47m 51s | | trunk passed | | +1 :green_heart: | compile | 18m 28s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 16m 52s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | checkstyle | 1m 13s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 38s | | trunk passed | | +1 :green_heart: | javadoc | 1m 11s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 48s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | spotbugs | 2m 36s | | trunk passed | | +1 :green_heart: | shadedclient | 40m 37s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 56s | | the patch passed | | +1 :green_heart: | compile | 17m 33s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 17m 33s | | the patch passed | | +1 :green_heart: | compile | 16m 36s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | javac | 16m 36s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6229/3/artifact/out/blanks-eol.txt) | The patch has 7 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 1m 12s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6229/3/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 2 new + 174 unchanged - 0 fixed = 176 total (was 174) | | +1 :green_heart: | mvnsite | 1m 35s | | the patch passed | | -1 :x: | javadoc | 1m 6s | [/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6229/3/artifact/out/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt) | hadoop-common-project_hadoop-common-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) | | +1 :green_heart: | javadoc | 0m 48s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | spotbugs | 2m 41s | | the patch passed | | +1 :green_heart: | shadedclient | 42m 27s | | patch has no errors when building and testing our client artifacts. 
| _ Other Tests _ | | +1 :green_heart: | unit | 19m 6s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 58s | | The patch does not generate ASF License warnings. | | | | 240m 42s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6229/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6229 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint | | uname | Linux cd9db61caa66 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 8551f23a260a99e259fc05b4ee1f9896b0e3 | | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Multi-JDK versions |
[jira] [Commented] (HADOOP-18954) Filter NaN values from JMX json interface
[ https://issues.apache.org/jira/browse/HADOOP-18954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782606#comment-17782606 ] ASF GitHub Bot commented on HADOOP-18954: - hadoop-yetus commented on PR #6229: URL: https://github.com/apache/hadoop/pull/6229#issuecomment-1792605958 :broken_heart: **-1 overall**
Re: [PR] HDFS-17248. Fix isValidName error [hadoop]
GauthamBanasandra commented on PR #6249: URL: https://github.com/apache/hadoop/pull/6249#issuecomment-1792544583 > > @LiuGuH could you please file a new bug on issues.apache.org and use it instead of tagging to `HDFS-17248`? > > Sure. I can. But what's the difference between them that I need to notice? Thanks @LiuGuH in the Hadoop community we normally follow the convention where the bug title in JIRA matches that of the GitHub PR. In this case, HDFS-17248 in JIRA has the title `Fix shaded client for building Hadoop on Windows`. So, it can't have the title that you have put currently (`Fix isValidName error`). Thus, you'll need to create a new JIRA for this.
[jira] [Updated] (HADOOP-18797) Support Concurrent Writes With S3A Magic Committer
[ https://issues.apache.org/jira/browse/HADOOP-18797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-18797: Fix Version/s: 3.3.9 > Support Concurrent Writes With S3A Magic Committer > -- > > Key: HADOOP-18797 > URL: https://issues.apache.org/jira/browse/HADOOP-18797 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Reporter: Emanuel Velzi >Assignee: Syed Shameerur Rahman >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.9 > > > There is a failure in the commit process when multiple jobs are writing to an > S3 directory *concurrently* using {*}magic committers{*}. > This issue is closely related to HADOOP-17318. > When multiple Spark jobs write to the same S3A directory, they upload files > simultaneously using "__magic" as the base directory for staging. Inside this > directory, there are multiple "/job-some-uuid" directories, each representing > a concurrently running job. > To fix some problems related to concurrency, a property was introduced in > the previous fix: "spark.hadoop.fs.s3a.committer.abort.pending.uploads". When > set to false, it ensures that during the cleanup stage, finalizing jobs do > not abort pending uploads from other jobs. 
So we see this line in the logs: > {code:java} > DEBUG [main] o.a.h.fs.s3a.commit.AbstractS3ACommitter (819): Not cleanup up > pending uploads to s3a ...{code} > (from > [AbstractS3ACommitter.java#L952|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java#L952]) > However, in the next step, the {*}"__magic" directory is recursively > deleted{*}: > {code:java} > INFO [main] o.a.h.fs.s3a.commit.magic.MagicS3GuardCommitter (98): Deleting > magic directory s3a://my-bucket/my-table/__magic: duration 0:00.560s {code} > (from [AbstractS3ACommitter.java#L1112|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java#L1112] and > [MagicS3GuardCommitter.java#L137|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L137]) > This deletion operation *affects the second job* that is still running > because it loses pending uploads (i.e., ".pendingset" and ".pending" files). > The consequences can range from an exception in the best case to a silent > loss of data in the worst case. The latter occurs when Job_1 deletes files > just before Job_2 executes "listPendingUploadsToCommit" to list ".pendingset" > files in the job attempt directory prior to completing the uploads with POST > requests. > To resolve this issue, it's important {*}to ensure that only the prefix > associated with the job currently finalizing is cleaned{*}. > Here's a possible solution: > {code:java} > /** > * Delete the magic directory. 
> */ > public void cleanupStagingDirs() { > final Path out = getOutputPath(); > //Path path = magicSubdir(getOutputPath()); > Path path = new Path(magicSubdir(out), formatJobDir(getUUID())); > try(DurationInfo ignored = new DurationInfo(LOG, true, > "Deleting magic directory %s", path)) { > Invoker.ignoreIOExceptions(LOG, "cleanup magic directory", > path.toString(), > () -> deleteWithWarning(getDestFS(), path, true)); > } > } {code} > > The side effect of this issue is that the "__magic" directory is never > cleaned up. However, I believe this is a minor concern, even considering that > other folders such as "_SUCCESS" also persist after jobs end. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
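The proposed fix above scopes cleanup to the finalizing job's own subdirectory instead of deleting the whole shared `__magic` tree. A minimal, standalone sketch of that path computation, using hypothetical helper names that only mirror the committer logic (this is not the real Hadoop API):

```java
// Sketch of job-scoped magic-directory cleanup paths.
// "magicSubdir" and "jobDir" are hypothetical helpers mirroring the
// MagicS3GuardCommitter fix discussed above; not the actual Hadoop classes.
public class MagicCleanupPaths {

    // The shared staging root used by all concurrently running jobs.
    static String magicSubdir(String outputPath) {
        return outputPath + "/__magic";
    }

    // The per-job staging directory: only this may be safely deleted
    // while other jobs are still committing.
    static String jobDir(String outputPath, String jobUUID) {
        return magicSubdir(outputPath) + "/job-" + jobUUID;
    }

    public static void main(String[] args) {
        String out = "s3a://my-bucket/my-table";
        // Before the fix: the whole shared root is deleted, destroying the
        // .pending/.pendingset files of every other concurrently running job.
        System.out.println("unsafe: " + magicSubdir(out));
        // After the fix: only the finalizing job's own prefix is cleaned.
        System.out.println("safe:   " + jobDir(out, "some-uuid"));
    }
}
```

The per-job path is always strictly inside the shared root, so concurrent jobs never delete each other's pending uploads.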
[jira] [Commented] (HADOOP-18797) Support Concurrent Writes With S3A Magic Committer
[ https://issues.apache.org/jira/browse/HADOOP-18797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782573#comment-17782573 ] ASF GitHub Bot commented on HADOOP-18797: - steveloughran merged PR #6122: URL: https://github.com/apache/hadoop/pull/6122 > Support Concurrent Writes With S3A Magic Committer > -- > > Key: HADOOP-18797 > URL: https://issues.apache.org/jira/browse/HADOOP-18797 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/s3 >Reporter: Emanuel Velzi >Assignee: Syed Shameerur Rahman >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > > There is a failure in the commit process when multiple jobs are writing to an > S3 directory *concurrently* using {*}magic committers{*}. > This issue is closely related to HADOOP-17318. > When multiple Spark jobs write to the same S3A directory, they upload files > simultaneously using "__magic" as the base directory for staging. Inside this > directory, there are multiple "/job-some-uuid" directories, each representing > a concurrently running job. > To fix some problems related to concurrency, a property was introduced in > the previous fix: "spark.hadoop.fs.s3a.committer.abort.pending.uploads". When > set to false, it ensures that during the cleanup stage, finalizing jobs do > not abort pending uploads from other jobs. 
So we see this line in the logs: > {code:java} > DEBUG [main] o.a.h.fs.s3a.commit.AbstractS3ACommitter (819): Not cleanup up > pending uploads to s3a ...{code} > (from > [AbstractS3ACommitter.java#L952|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java#L952]) > However, in the next step, the {*}"__magic" directory is recursively > deleted{*}: > {code:java} > INFO [main] o.a.h.fs.s3a.commit.magic.MagicS3GuardCommitter (98): Deleting > magic directory s3a://my-bucket/my-table/__magic: duration 0:00.560s {code} > (from [AbstractS3ACommitter.java#L1112|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java#L1112] and > [MagicS3GuardCommitter.java#L137|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicS3GuardCommitter.java#L137]) > This deletion operation *affects the second job* that is still running > because it loses pending uploads (i.e., ".pendingset" and ".pending" files). > The consequences can range from an exception in the best case to a silent > loss of data in the worst case. The latter occurs when Job_1 deletes files > just before Job_2 executes "listPendingUploadsToCommit" to list ".pendingset" > files in the job attempt directory prior to completing the uploads with POST > requests. > To resolve this issue, it's important {*}to ensure that only the prefix > associated with the job currently finalizing is cleaned{*}. > Here's a possible solution: > {code:java} > /** > * Delete the magic directory. 
> */ > public void cleanupStagingDirs() { > final Path out = getOutputPath(); > //Path path = magicSubdir(getOutputPath()); > Path path = new Path(magicSubdir(out), formatJobDir(getUUID())); > try(DurationInfo ignored = new DurationInfo(LOG, true, > "Deleting magic directory %s", path)) { > Invoker.ignoreIOExceptions(LOG, "cleanup magic directory", > path.toString(), > () -> deleteWithWarning(getDestFS(), path, true)); > } > } {code} > > The side effect of this issue is that the "__magic" directory is never > cleaned up. However, I believe this is a minor concern, even considering that > other folders such as "_SUCCESS" also persist after jobs end. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-18797. Support Concurrent Writes With S3A Magic Committer [hadoop]
steveloughran merged PR #6122: URL: https://github.com/apache/hadoop/pull/6122 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-17248. Fix isValidName error [hadoop]
LiuGuH commented on PR #6249: URL: https://github.com/apache/hadoop/pull/6249#issuecomment-1792400786 > @LiuGuH could you please file a new bug on issues.apache.org and use it instead of tagging to `HDFS-17248`? Sure. I can. But what's the difference between them that I need to notice? Thanks -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-17248. Fix isValidName error [hadoop]
LiuGuH commented on code in PR #6249: URL: https://github.com/apache/hadoop/pull/6249#discussion_r1381639092 ## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java: ## @@ -661,9 +661,12 @@ public static boolean isValidName(String src) { String[] components = StringUtils.split(src, '/'); for (int i = 0; i < components.length; i++) { String element = components[i]; + // For Windows, we must allow the : in the drive letter. + if (Shell.WINDOWS && i == 1 && element.contains(":")) { Review Comment: OK, Thanks -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
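The diff above allows a `:` in the second path component on Windows, since absolute Windows paths carry a drive letter there (e.g. the `D:` in `/D:/tmp/file`). A self-contained sketch of the idea, using a simplified and slightly tightened validity rule (this is not the real `DFSUtilClient.isValidName`, and `isWindows` stands in for `Shell.WINDOWS`):

```java
// Simplified path-component check illustrating the Windows drive-letter
// special case discussed above. Real HDFS validation covers more rules.
public class DriveLetterCheck {

    static boolean isWindows = true; // stand-in for Shell.WINDOWS

    static boolean isValidName(String src) {
        String[] components = src.split("/");
        for (int i = 0; i < components.length; i++) {
            String element = components[i];
            // For Windows, allow the ':' in the drive-letter component,
            // e.g. the "D:" in "/D:/tmp/file" (component index 1).
            if (isWindows && i == 1 && element.matches("[a-zA-Z]:")) {
                continue;
            }
            // A ':' anywhere else makes the path invalid.
            if (element.contains(":")) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isValidName("/D:/tmp/file")); // drive letter allowed
        System.out.println(isValidName("/a/b:c"));       // ':' elsewhere rejected
    }
}
```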
[jira] [Updated] (HADOOP-18963) Fix typos in .gitignore #6243
[ https://issues.apache.org/jira/browse/HADOOP-18963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-18963: Labels: pull-request-available (was: ) > Fix typos in .gitignore #6243 > - > > Key: HADOOP-18963 > URL: https://issues.apache.org/jira/browse/HADOOP-18963 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.3.6 >Reporter: 袁焊忠 >Priority: Major > Labels: pull-request-available > Fix For: 3.3.6 > > > .DS_Store files are auto-generated by macOS in every opened folder; they are > useless but annoying. Not only should the .DS_Store file in the repository root > directory be ignored, but also .DS_Store files in its subfolders. > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18963) Fix typos in .gitignore #6243
[ https://issues.apache.org/jira/browse/HADOOP-18963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782552#comment-17782552 ] 袁焊忠 commented on HADOOP-18963: -- [https://github.com/apache/hadoop/pull/6243] is created for this ticket. > Fix typos in .gitignore #6243 > - > > Key: HADOOP-18963 > URL: https://issues.apache.org/jira/browse/HADOOP-18963 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.3.6 >Reporter: 袁焊忠 >Priority: Major > Labels: pull-request-available > Fix For: 3.3.6 > > > .DS_Store files are auto-generated by macOS in every opened folder; they are > useless but annoying. Not only should the .DS_Store file in the repository root > directory be ignored, but also .DS_Store files in its subfolders. > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18963) Fix typos in .gitignore #6243
[ https://issues.apache.org/jira/browse/HADOOP-18963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782551#comment-17782551 ] ASF GitHub Bot commented on HADOOP-18963: - YuanHanzhong commented on PR #6243: URL: https://github.com/apache/hadoop/pull/6243#issuecomment-1792366998 > I have approved your jira id request, can you create a HADOOP ticket & prefix that jira id on this PR I created HADOOP ticket https://issues.apache.org/jira/browse/HADOOP-18963 and prefixed id on this PR. > Fix typos in .gitignore #6243 > - > > Key: HADOOP-18963 > URL: https://issues.apache.org/jira/browse/HADOOP-18963 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.3.6 >Reporter: 袁焊忠 >Priority: Major > Fix For: 3.3.6 > > > .DS_Store files are auto-generated by macOS in every opened folder; they are > useless but annoying. Not only should the .DS_Store file in the repository root > directory be ignored, but also .DS_Store files in its subfolders. > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-18963. Fix typos in .gitignore [hadoop]
YuanHanzhong commented on PR #6243: URL: https://github.com/apache/hadoop/pull/6243#issuecomment-1792366998 > I have approved your jira id request, can you create a HADOOP ticket & prefix that jira id on this PR I created HADOOP ticket https://issues.apache.org/jira/browse/HADOOP-18963 and prefixed id on this PR. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-18963) Fix typos in .gitignore #6243
袁焊忠 created HADOOP-18963: Summary: Fix typos in .gitignore #6243 Key: HADOOP-18963 URL: https://issues.apache.org/jira/browse/HADOOP-18963 Project: Hadoop Common Issue Type: Improvement Components: common Affects Versions: 3.3.6 Reporter: 袁焊忠 Fix For: 3.3.6 .DS_Store files are auto-generated by macOS in every opened folder; they are useless but annoying. Not only should the .DS_Store file in the repository root directory be ignored, but also .DS_Store files in its subfolders. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
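The fix described in the ticket comes down to gitignore pattern anchoring: a pattern with a leading `/` matches only at the repository root, while a pattern without a slash (or prefixed with `**/`) matches in every directory. The exact patch content is in PR #6243; a plausible entry looks like:

```gitignore
# Anchored: matches .DS_Store only in the repository root
/.DS_Store

# Unanchored: matches .DS_Store in the root and in every subfolder
.DS_Store
# Equivalent explicit form
**/.DS_Store
```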
Re: [PR] HADOOP-18843. Guava version 32.0.1 bump to fix CVE-2023-2976 [hadoop-thirdparty]
steveloughran merged PR #23: URL: https://github.com/apache/hadoop-thirdparty/pull/23 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-18843. Guava version 32.0.1 bump to fix CVE-2023-2976 [hadoop-thirdparty]
steveloughran commented on PR #23: URL: https://github.com/apache/hadoop-thirdparty/pull/23#issuecomment-1792337815 done. mukund has been looking at doing a new 3.3.x release...we should get this out first -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18487) Make protobuf 2.5 an optional runtime dependency.
[ https://issues.apache.org/jira/browse/HADOOP-18487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782544#comment-17782544 ] ASF GitHub Bot commented on HADOOP-18487: - steveloughran commented on PR #6185: URL: https://github.com/apache/hadoop/pull/6185#issuecomment-1792335963 oh, i get it. I'd updated the BUILDING.txt in the wrong branch, so even though it was pushed up it had gone to the branch on the earlier pr. lets see what yetus says and merge it > Make protobuf 2.5 an optional runtime dependency. > - > > Key: HADOOP-18487 > URL: https://issues.apache.org/jira/browse/HADOOP-18487 > Project: Hadoop Common > Issue Type: Improvement > Components: build, ipc >Affects Versions: 3.3.4 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > > uses of protobuf 2.5 and RpcEnginej have been deprecated since 3.3.0 in > HADOOP-17046 > while still keeping those files around (for a long time...), how about we > make the protobuf 2.5.0 export off hadoop common and hadoop-hdfs *provided*, > rather than *compile* > that way, if apps want it for their own apis, they have to explicitly ask for > it, but at least our own scans don't break. > i have no idea what will happen to the rest of the stack at this point, it > will be "interesting" to see -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-18487. Protobuf 2.5 removal part 2: stop exporting protobuf-2.5" [hadoop]
steveloughran commented on PR #6185: URL: https://github.com/apache/hadoop/pull/6185#issuecomment-1792335963 oh, i get it. I'd updated the BUILDING.txt in the wrong branch, so even though it was pushed up it had gone to the branch on the earlier pr. lets see what yetus says and merge it -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18487) Make protobuf 2.5 an optional runtime dependency.
[ https://issues.apache.org/jira/browse/HADOOP-18487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782543#comment-17782543 ] ASF GitHub Bot commented on HADOOP-18487: - ayushtkn commented on PR #6185: URL: https://github.com/apache/hadoop/pull/6185#issuecomment-1792334819 ohh, cool, go ahead, feel free to merge > Make protobuf 2.5 an optional runtime dependency. > - > > Key: HADOOP-18487 > URL: https://issues.apache.org/jira/browse/HADOOP-18487 > Project: Hadoop Common > Issue Type: Improvement > Components: build, ipc >Affects Versions: 3.3.4 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > > uses of protobuf 2.5 and RpcEnginej have been deprecated since 3.3.0 in > HADOOP-17046 > while still keeping those files around (for a long time...), how about we > make the protobuf 2.5.0 export off hadoop common and hadoop-hdfs *provided*, > rather than *compile* > that way, if apps want it for their own apis, they have to explicitly ask for > it, but at least our own scans don't break. > i have no idea what will happen to the rest of the stack at this point, it > will be "interesting" to see -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-18487. Protobuf 2.5 removal part 2: stop exporting protobuf-2.5" [hadoop]
ayushtkn commented on PR #6185: URL: https://github.com/apache/hadoop/pull/6185#issuecomment-1792334819 ohh, cool, go ahead, feel free to merge -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18962) Upgrade kafka to 3.4.0
[ https://issues.apache.org/jira/browse/HADOOP-18962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782539#comment-17782539 ] Steve Loughran commented on HADOOP-18962: - add ...component = build, set version to affected version. thanks > Upgrade kafka to 3.4.0 > -- > > Key: HADOOP-18962 > URL: https://issues.apache.org/jira/browse/HADOOP-18962 > Project: Hadoop Common > Issue Type: Bug >Reporter: D M Murali Krishna Reddy >Assignee: D M Murali Krishna Reddy >Priority: Major > Labels: pull-request-available > > Upgrade kafka-clients to 3.4.0 to fix > https://nvd.nist.gov/vuln/detail/CVE-2023-25194 -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18487) Make protobuf 2.5 an optional runtime dependency.
[ https://issues.apache.org/jira/browse/HADOOP-18487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782540#comment-17782540 ] ASF GitHub Bot commented on HADOOP-18487: - steveloughran commented on PR #6185: URL: https://github.com/apache/hadoop/pull/6185#issuecomment-1792331971 @ayushtkn i'd updated the doc already > Make protobuf 2.5 an optional runtime dependency. > - > > Key: HADOOP-18487 > URL: https://issues.apache.org/jira/browse/HADOOP-18487 > Project: Hadoop Common > Issue Type: Improvement > Components: build, ipc >Affects Versions: 3.3.4 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > > uses of protobuf 2.5 and RpcEnginej have been deprecated since 3.3.0 in > HADOOP-17046 > while still keeping those files around (for a long time...), how about we > make the protobuf 2.5.0 export off hadoop common and hadoop-hdfs *provided*, > rather than *compile* > that way, if apps want it for their own apis, they have to explicitly ask for > it, but at least our own scans don't break. > i have no idea what will happen to the rest of the stack at this point, it > will be "interesting" to see -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-18487. Protobuf 2.5 removal part 2: stop exporting protobuf-2.5" [hadoop]
steveloughran commented on PR #6185: URL: https://github.com/apache/hadoop/pull/6185#issuecomment-1792331971 @ayushtkn i'd updated the doc already -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-16791 Add getEnclosingRoot API to filesystem interface and all implementations [hadoop]
steveloughran commented on code in PR #6198: URL: https://github.com/apache/hadoop/pull/6198#discussion_r1381582965 ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java: ## @@ -1940,7 +1940,8 @@ public Path getEnclosingRoot(Path path) throws IOException { try { res = fsState.resolve((path.toString()), true); } catch (FileNotFoundException ex) { -throw new NotInMountpointException(path, String.format("getEnclosingRoot - %s", ex.getMessage())); +throw new NotInMountpointException(path, +String.format("getEnclosingRoot - %s", ex.getMessage())); Review Comment: ok, add an initCause(ex) building this exception and the one in viewFS. stack traces are too useful to throw away -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-16791 Add getEnclosingRoot API to filesystem interface and all implementations [hadoop]
steveloughran commented on code in PR #6198: URL: https://github.com/apache/hadoop/pull/6198#discussion_r1381581642 ## hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md: ## @@ -601,7 +601,32 @@ on the filesystem. 1. The outcome of this operation MUST be identical to the value of `getFileStatus(P).getBlockSize()`. -1. By inference, it MUST be > 0 for any file of length > 0. +2. By inference, it MUST be > 0 for any file of length > 0. + +### `Path getEnclosingRoot(Path p)` + +This method is used to find a root directory for a path given. This is useful for creating +staging and temp directories in the same enclosing root directory. There are constraints around how +renames are allowed to atomically occur (ex. across hdfs volumes or across encryption zones). + +For any two paths p1 and p2 that do not have the same enclosing root, `rename(p1, p2)` is expected to fail or will not +be atomic. + +The following statement is always true: +`getEnclosingRoot(p) == getEnclosingRoot(getEnclosingRoot(p))` + + Preconditions + +The path does not have to exist, but the path does need to be valid and reconcilable by the filesystem +* if a linkfallback is used all paths are reconcilable +* if a linkfallback is not used there must be a mount point covering the path + + + Postconditions + +* The path returned will not be null, if there is no deeper enclosing root, the root path ("/") will be returned. +* The path returned is a directory Review Comment: the python is trying to define the rules, the english is a wrapper around it. So think about how you'd convert those bullet points in terms of assertions you'd have before and after an implementation -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
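The contract being specified above (a non-null directory root is always returned, `/` when no deeper root exists, and the operation is idempotent: `getEnclosingRoot(p) == getEnclosingRoot(getEnclosingRoot(p))`) can be expressed as checkable assertions. A toy model over a static mount table, purely illustrative, since real ViewFileSystem resolution is far more involved:

```java
import java.util.Arrays;
import java.util.List;

// Toy model of getEnclosingRoot over a fixed mount table, illustrating
// the postconditions above. Not the real ViewFileSystem logic.
public class EnclosingRootModel {

    // Hypothetical mount points; "/" is the fallback enclosing root.
    static final List<String> MOUNTS =
        Arrays.asList("/data/warehouse", "/data", "/user");

    // Returns the longest mount point that is a prefix of the path,
    // or "/" when no mount point covers it.
    static String getEnclosingRoot(String path) {
        String best = "/";
        for (String m : MOUNTS) {
            if ((path + "/").startsWith(m + "/") && m.length() > best.length()) {
                best = m;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        String p = "/data/warehouse/tbl/part-0";
        String root = getEnclosingRoot(p);
        // Idempotence: the enclosing root of an enclosing root is itself.
        System.out.println(root.equals(getEnclosingRoot(root)));
    }
}
```

Two paths with different enclosing roots in this model (say under `/data` and `/user`) are exactly the pairs for which the spec allows `rename` to fail or be non-atomic.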
Re: [PR] HDFS-16791 Add getEnclosingRoot API to filesystem interface and all implementations [hadoop]
steveloughran commented on code in PR #6198: URL: https://github.com/apache/hadoop/pull/6198#discussion_r1381580444 ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java: ## @@ -1919,6 +1933,21 @@ public Collection getAllStoragePolicies() } return allPolicies; } + +@Override +public Path getEnclosingRoot(Path path) throws IOException { + InodeTree.ResolveResult res; + try { +res = fsState.resolve((path.toString()), true); + } catch (FileNotFoundException ex) { +throw new NotInMountpointException(path, String.format("getEnclosingRoot - %s", ex.getMessage())); Review Comment: no, I mean that .initCause() is needed to preserve the entire stack trace. otherwise root causes of problems may get lost. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
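The `initCause()` point above is general: translating one exception type into another without attaching the original discards the root-cause stack trace. A minimal sketch using a plain `IOException` rather than the actual `NotInMountpointException`:

```java
import java.io.FileNotFoundException;
import java.io.IOException;

// Demonstrates preserving the original stack trace when translating an
// exception, as suggested in the review comment above.
public class InitCauseDemo {

    static IOException translate(FileNotFoundException ex) {
        IOException wrapped =
            new IOException("getEnclosingRoot - " + ex.getMessage());
        // Without this call, the FileNotFoundException's stack trace
        // (the actual root cause of the failure) is lost.
        wrapped.initCause(ex);
        return wrapped;
    }

    public static void main(String[] args) {
        FileNotFoundException original =
            new FileNotFoundException("/no/such/mount");
        IOException wrapped = translate(original);
        // The full causal chain is now available to log output.
        System.out.println(wrapped.getCause() == original);
    }
}
```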
Re: [PR] HDFS-17015. Fix typos in .gitignore [hadoop]
YuanHanzhong commented on PR #6243: URL: https://github.com/apache/hadoop/pull/6243#issuecomment-1792294793 Thank you, I'll do that. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18954) Filter NaN values from JMX json interface
[ https://issues.apache.org/jira/browse/HADOOP-18954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782522#comment-17782522 ] ASF GitHub Bot commented on HADOOP-18954: - K0K0V0K commented on code in PR #6229: URL: https://github.com/apache/hadoop/pull/6229#discussion_r1381509417 ## hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/jmx/TestJMXJsonServlet.java: ## @@ -62,10 +62,15 @@ public static void assertReFind(String re, String value) { result = readOutput(new URL(baseUrl, "/jmx?qry=java.lang:type=Memory")); assertReFind("\"name\"\\s*:\\s*\"java.lang:type=Memory\"", result); assertReFind("\"modelerType\"", result); - + +System.setProperty("THE_TEST_OF_THE_NAN_VALUES", String.valueOf(Float.NaN)); result = readOutput(new URL(baseUrl, "/jmx")); assertReFind("\"name\"\\s*:\\s*\"java.lang:type=Memory\"", result); - +assertReFind( Review Comment: Hi @Hexiaoqiao ! Thanks for the review! Yes, I think it would be good to have a test like that, but that change just won't be nice I am afraid. The problem is that there is no nice way to connect the **HttpServerFunctionalTest#createTestServer(Configuration conf)** to the **JMXJsonServlet**, cause in the **HttpServlet2#addDefaultServlets** we just provide class reference. A possible solution is to create another JMX servlet class to do the trick. I would prefer the previous code, cause that was less ugly, but I am open to keeping this one, to have the test. > Filter NaN values from JMX json interface > - > > Key: HADOOP-18954 > URL: https://issues.apache.org/jira/browse/HADOOP-18954 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Bence Kosztolnik >Assignee: Bence Kosztolnik >Priority: Major > Labels: pull-request-available > > As we can see in this [Yarn > documentation|https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html] > beans can represent Float values as NaN. 
These values will be represented in > the JMX response JSON like: > {noformat} > ... > "GuaranteedCapacity": NaN, > ... > {noformat} > Based on the [JSON doc|https://www.json.org/] NaN is not a valid JSON token ( > however some of the parser libs can handle it ), so not every consumer can > parse values like these. > To be able to parse NaN values, a new feature flag should be created. > The new feature will replace the NaN values with 0.0 values. > The feature is default turned off. It can be enabled with the > *hadoop.http.jmx.nan-filter.enabled* config. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
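As the ticket notes, a bare `NaN` is not a valid JSON token, so a spec-conforming consumer may fail on it. A small sketch of the substitution behaviour the feature flag enables (NaN written as 0.0); this is a hypothetical helper for illustration, not the actual `JMXJsonServlet` code:

```java
// Illustrates the NaN-filtering behaviour described in HADOOP-18954:
// with the filter enabled, NaN metric values are emitted as 0.0 so the
// output stays within the JSON grammar. Hypothetical helper only.
public class NanJsonFilter {

    static String writeDouble(double value, boolean nanFilterEnabled) {
        if (Double.isNaN(value)) {
            // A bare NaN token is not legal JSON; many parsers reject it.
            return nanFilterEnabled ? "0.0" : "NaN";
        }
        return Double.toString(value);
    }

    public static void main(String[] args) {
        // Filter on: parseable by any JSON consumer.
        System.out.println("\"GuaranteedCapacity\": " + writeDouble(Double.NaN, true));
        // Filter off (the default): preserves today's non-standard output.
        System.out.println("\"GuaranteedCapacity\": " + writeDouble(Double.NaN, false));
    }
}
```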
Re: [PR] HDFS-15413. add dfs.client.read.striped.datanode.max.attempts to fix read ecfile timeout [hadoop]
ayushtkn commented on code in PR #5829: URL: https://github.com/apache/hadoop/pull/5829#discussion_r1381508823

## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripeReader.java:
## @@ -233,41 +236,60 @@ private ByteBufferStrategy[] getReadStrategies(StripingChunk chunk) {
   private int readToBuffer(BlockReader blockReader,
       DatanodeInfo currentNode, ByteBufferStrategy strategy,
-      ExtendedBlock currentBlock) throws IOException {
+      LocatedBlock currentBlock, int chunkIndex) throws IOException {
     final int targetLength = strategy.getTargetLength();
-    int length = 0;
-    try {
-      while (length < targetLength) {
-        int ret = strategy.readFromBlock(blockReader);
-        if (ret < 0) {
-          throw new IOException("Unexpected EOS from the reader");
+    int curAttempts = 0;
+    while (curAttempts < readDNMaxAttempts) {
+      int length = 0;
+      try {
+        while (length < targetLength) {
+          int ret = strategy.readFromBlock(blockReader);
+          if (ret < 0) {
+            throw new IOException("Unexpected EOS from the reader");
+          }
+          length += ret;
         }
-        length += ret;
+        return length;
+      } catch (ChecksumException ce) {
+        DFSClient.LOG.warn("Found Checksum error for "
+            + currentBlock + " from " + currentNode
+            + " at " + ce.getPos());
+        //Clear buffer to make next decode success
+        strategy.getReadBuffer().clear();
+        // we want to remember which block replicas we have tried
+        corruptedBlocks.addCorruptedBlock(currentBlock.getBlock(), currentNode);
+        throw ce;
+      } catch (IOException e) {
+        //Clear buffer to make next decode success
+        strategy.getReadBuffer().clear();
+        if (curAttempts < readDNMaxAttempts - 1) {
+          curAttempts++;
+          if (readerInfos[chunkIndex].reader != null) {
+            readerInfos[chunkIndex].reader.close();
+          }
+          if (dfsStripedInputStream.createBlockReader(currentBlock,
+              alignedStripe.getOffsetInBlock(), targetBlocks,
+              readerInfos, chunkIndex, readTo)) {
+            blockReader = readerInfos[chunkIndex].reader;
+            String msg = "Reconnect to " + currentNode.getInfoAddr()
+                + " for block " + currentBlock.getBlock();
+            DFSClient.LOG.warn(msg);
+            continue;
+          }
+          DFSClient.LOG.warn("Exception while reading from "
+              + currentBlock + " of " + dfsStripedInputStream.getSrc() + " from "
+              + currentNode, e);
+          throw e;
+        }
       }
-      return length;
-    } catch (ChecksumException ce) {
-      DFSClient.LOG.warn("Found Checksum error for "
-          + currentBlock + " from " + currentNode
-          + " at " + ce.getPos());
-      //Clear buffer to make next decode success
-      strategy.getReadBuffer().clear();
-      // we want to remember which block replicas we have tried
-      corruptedBlocks.addCorruptedBlock(currentBlock, currentNode);
-      throw ce;
-    } catch (IOException e) {
-      DFSClient.LOG.warn("Exception while reading from "
-          + currentBlock + " of " + dfsStripedInputStream.getSrc() + " from "
-          + currentNode, e);
-      //Clear buffer to make next decode success
-      strategy.getReadBuffer().clear();
-      throw e;
-    }
+    }
+    return -1;
   }

Review Comment: I don't think we should return -1, there is logic which uses the return value
```
for (ByteBufferStrategy strategy : strategies) {
  int bytesReead = readToBuffer(reader, datanode, strategy, currentBlock);
  ret += bytesReead;
}
```
We should throw an exception or return a valid value.
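The review point above — that `readToBuffer` should not return the sentinel `-1`, because callers sum the returned byte count — can be illustrated with a minimal standalone sketch (hypothetical names, not the actual StripeReader code): a bounded retry loop that rethrows the last exception once attempts are exhausted instead of handing back a sentinel.

```java
import java.io.IOException;

// Minimal sketch of the bounded-retry shape under discussion (hypothetical
// names, not Hadoop code): retry up to maxAttempts, and on exhaustion
// rethrow the last exception rather than returning -1, since callers
// accumulate the returned byte count.
public class BoundedRetry {
  interface Read {
    int read() throws IOException;
  }

  static int readWithRetries(Read r, int maxAttempts) throws IOException {
    IOException last = null;
    for (int attempt = 0; attempt < maxAttempts; attempt++) {
      try {
        return r.read();   // success: a real byte count
      } catch (IOException e) {
        last = e;          // remember the failure and retry
      }
    }
    throw last;            // never hand a sentinel back to the caller
  }

  public static void main(String[] args) throws IOException {
    final int[] calls = {0};
    // Fails twice, then succeeds on the third attempt.
    int n = readWithRetries(() -> {
      if (calls[0]++ < 2) {
        throw new IOException("transient read failure");
      }
      return 42;
    }, 3);
    System.out.println(n); // prints 42
  }
}
```

Throwing on exhaustion keeps the caller's `ret += ...` accumulation correct without a special case for negative values.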
Re: [PR] HADOOP-18954. Filter NaN values from JMX json interface [hadoop]
K0K0V0K commented on code in PR #6229: URL: https://github.com/apache/hadoop/pull/6229#discussion_r1381509417

## hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/jmx/TestJMXJsonServlet.java:
## @@ -62,10 +62,15 @@ public static void assertReFind(String re, String value) {
     result = readOutput(new URL(baseUrl, "/jmx?qry=java.lang:type=Memory"));
     assertReFind("\"name\"\\s*:\\s*\"java.lang:type=Memory\"", result);
     assertReFind("\"modelerType\"", result);
-
+
+    System.setProperty("THE_TEST_OF_THE_NAN_VALUES", String.valueOf(Float.NaN));
     result = readOutput(new URL(baseUrl, "/jmx"));
     assertReFind("\"name\"\\s*:\\s*\"java.lang:type=Memory\"", result);
-
+    assertReFind(

Review Comment: Hi @Hexiaoqiao ! Thanks for the review! Yes, I think it would be good to have a test like that, but that change just won't be nice, I am afraid. The problem is that there is no nice way to connect the **HttpServerFunctionalTest#createTestServer(Configuration conf)** to the **JMXJsonServlet**, because in **HttpServer2#addDefaultServlets** we just provide the class reference. A possible solution is to create another JMX servlet class to do the trick. I would prefer the previous code, because that was less ugly, but I am open to keeping this one to have the test.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-18954) Filter NaN values from JMX json interface
[ https://issues.apache.org/jira/browse/HADOOP-18954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782521#comment-17782521 ]

ASF GitHub Bot commented on HADOOP-18954:
-

K0K0V0K commented on code in PR #6229: URL: https://github.com/apache/hadoop/pull/6229#discussion_r1381509417

## hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/jmx/TestJMXJsonServlet.java:
## @@ -62,10 +62,15 @@ public static void assertReFind(String re, String value) {
     result = readOutput(new URL(baseUrl, "/jmx?qry=java.lang:type=Memory"));
     assertReFind("\"name\"\\s*:\\s*\"java.lang:type=Memory\"", result);
     assertReFind("\"modelerType\"", result);
-
+
+    System.setProperty("THE_TEST_OF_THE_NAN_VALUES", String.valueOf(Float.NaN));
     result = readOutput(new URL(baseUrl, "/jmx"));
     assertReFind("\"name\"\\s*:\\s*\"java.lang:type=Memory\"", result);
-
+    assertReFind(

Review Comment: Hi @Hexiaoqiao ! Thanks for the review! Yes, I think it would be good to have a test like that, but that change just won't be nice, I am afraid. The problem is that there is no nice way to connect the **HttpServerFunctionalTest#createTestServer(Configuration conf)** to the **JMXJsonServlet**, because in **HttpServer2#addDefaultServlets** we just provide the class reference. A possible solution is to create another JMX servlet class to do the trick. I would prefer the previous code, because that was less ugly, but I am open to keeping this one to have the test.

> Filter NaN values from JMX json interface
> -
>
> Key: HADOOP-18954
> URL: https://issues.apache.org/jira/browse/HADOOP-18954
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common
> Reporter: Bence Kosztolnik
> Assignee: Bence Kosztolnik
> Priority: Major
> Labels: pull-request-available
>
> As we can see in this [Yarn documentation|https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html] beans can represent Float values as NaN. These values will be represented in the JMX response JSON like:
> {noformat}
> ...
> "GuaranteedCapacity": NaN,
> ...
> {noformat}
> Based on the [JSON doc|https://www.json.org/] NaN is not a valid JSON token (however some of the parser libs can handle it), so not every consumer can parse values like these.
> To be able to parse NaN values, a new feature flag should be created.
> The new feature will replace the NaN values with 0.0 values.
> The feature is turned off by default. It can be enabled with the *hadoop.http.jmx.nan-filter.enabled* config.

-- This message was sent by Atlassian Jira (v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
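The replacement behavior the issue describes — NaN metric values written as 0.0 when the filter is enabled, so the emitted document stays valid JSON — can be sketched standalone (an illustrative sketch only, not the actual JMXJsonServlet implementation):

```java
// Standalone sketch of the NaN-filtering behavior described above
// (illustrative only -- not the actual JMXJsonServlet code).
public class NanFilter {
  // Serialize one numeric value; with the filter on, NaN becomes 0.0,
  // because a bare NaN is not a valid JSON token.
  static String writeDouble(double v, boolean nanFilterEnabled) {
    if (nanFilterEnabled && Double.isNaN(v)) {
      return "0.0";
    }
    return Double.toString(v); // may emit "NaN" when the filter is off
  }

  public static void main(String[] args) {
    System.out.println(writeDouble(Double.NaN, true));  // prints 0.0
    System.out.println(writeDouble(Double.NaN, false)); // prints NaN
    System.out.println(writeDouble(1.5, true));         // prints 1.5
  }
}
```

With the flag off the output is unchanged, which matches the issue's note that the feature is opt-in via `hadoop.http.jmx.nan-filter.enabled`.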
Re: [PR] YARN-11606. Upgrade fst to 2.57 [hadoop]
hadoop-yetus commented on PR #6246: URL: https://github.com/apache/hadoop/pull/6246#issuecomment-1792239910 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 42s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 6s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 35m 12s | | trunk passed | | +1 :green_heart: | compile | 18m 18s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 16m 55s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | mvnsite | 20m 35s | | trunk passed | | +1 :green_heart: | javadoc | 8m 39s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 7m 32s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | shadedclient | 50m 22s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 34s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 33m 24s | | the patch passed | | +1 :green_heart: | compile | 17m 58s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 17m 58s | | the patch passed | | +1 :green_heart: | compile | 17m 20s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | javac | 17m 20s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 15m 17s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | javadoc | 8m 30s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 7m 40s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | shadedclient | 55m 3s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 758m 35s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6246/2/artifact/out/patch-unit-root.txt) | root in the patch passed. | | +1 :green_heart: | asflicense | 1m 55s | | The patch does not generate ASF License warnings. 
| | | | 1063m 12s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestDFSUtil | | | hadoop.net.TestSocketIOWithTimeout | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6246/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6246 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs | | uname | Linux e063d317d365 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 463d7834e0c19befc5e6b2787c69d87381c6f9b4 | | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6246/2/testReport/ | | Max. process+thread count | 3566 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice . U: . | | Console output |
Re: [PR] HDFS-17248. Fix isValidName error [hadoop]
GauthamBanasandra commented on code in PR #6249: URL: https://github.com/apache/hadoop/pull/6249#discussion_r1381474325

## hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSUtilClient.java:
## @@ -661,9 +661,12 @@ public static boolean isValidName(String src) {
     String[] components = StringUtils.split(src, '/');
     for (int i = 0; i < components.length; i++) {
       String element = components[i];
+      // For Windows, we must allow the : in the drive letter.
+      if (Shell.WINDOWS && i == 1 && element.contains(":")) {

Review Comment: Could you please make an `endsWith` check for `:` instead of `element.contains`?

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
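The reviewer's suggestion — `endsWith(":")` rather than `contains(":")` — can be sketched standalone (a hypothetical helper, not the actual DFSUtilClient code): on Windows the second component of a path like `/C:/dir` is a drive letter, and `endsWith` rejects a stray colon in the middle of a component that `contains` would let through.

```java
// Hedged standalone sketch (hypothetical helper, not DFSUtilClient itself):
// a path component is acceptable if it is a Windows drive letter in the
// second position ("C:" in /C:/dir), or if it contains no ':' at all.
public class DrivePathCheck {
  static boolean isAllowedComponent(String element, int index, boolean onWindows) {
    if (onWindows && index == 1 && element.endsWith(":")) {
      return true; // drive letter component, e.g. "C:"
    }
    return !element.contains(":"); // ':' is otherwise invalid in a component
  }

  public static void main(String[] args) {
    System.out.println(isAllowedComponent("C:", 1, true));   // prints true
    System.out.println(isAllowedComponent("a:b", 1, true));  // prints false
    System.out.println(isAllowedComponent("dir", 2, true));  // prints true
  }
}
```

With `contains(":")` in place of `endsWith(":")`, the second case would be accepted, which is the looseness the review flags.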
Re: [PR] HADOOP-18843. Guava version 32.0.1 bump to fix CVE-2023-2976 [hadoop-thirdparty]
fredbalves86 commented on PR #23: URL: https://github.com/apache/hadoop-thirdparty/pull/23#issuecomment-1792219171

> ok, let's merge
>
> @fredbalves86 what name do you want to use for credit in the commit message? + what apache jira account is yours to assign the work to?

You can use Frederico Alves. I don't have an Apache Jira account.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] Update plugin for SBOM generation to 2.7.10 [hadoop]
ayushtkn commented on PR #6235: URL: https://github.com/apache/hadoop/pull/6235#issuecomment-1792219054

Have triggered the build again. @VinodAnandan can you create a HADOOP ticket & prefix the Jira ID on this PR?

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-17015. Fix typos in .gitignore [hadoop]
ayushtkn commented on PR #6243: URL: https://github.com/apache/hadoop/pull/6243#issuecomment-1792216883

I have approved your Jira ID request; can you create a HADOOP ticket & prefix that Jira ID on this PR?

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] YARN-11606. Upgrade fst to 2.57 [hadoop]
hadoop-yetus commented on PR #6246: URL: https://github.com/apache/hadoop/pull/6246#issuecomment-1792190888 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 12m 57s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 1s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 16s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 31m 52s | | trunk passed | | +1 :green_heart: | compile | 16m 46s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 16m 20s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | mvnsite | 19m 25s | | trunk passed | | +1 :green_heart: | javadoc | 8m 52s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 7m 29s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | shadedclient | 50m 28s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 36s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 29m 1s | | the patch passed | | +1 :green_heart: | compile | 16m 17s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 16m 17s | | the patch passed | | +1 :green_heart: | compile | 15m 33s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | javac | 15m 33s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 15m 7s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | javadoc | 9m 33s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 7m 43s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | shadedclient | 52m 33s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 762m 19s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6246/1/artifact/out/patch-unit-root.txt) | root in the patch passed. | | +1 :green_heart: | asflicense | 1m 49s | | The patch does not generate ASF License warnings. 
| | | | 1061m 47s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestDFSUtil | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6246/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6246 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs | | uname | Linux 2e4c5117db18 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 6336f2be7885796dd8fdab702709ff0b320a3c52 | | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6246/1/testReport/ | | Max. process+thread count | 3443 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice . U: . | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6246/1/console | | versions
[jira] [Commented] (HADOOP-18958) UserGroupInformation debug log improve
[ https://issues.apache.org/jira/browse/HADOOP-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17782445#comment-17782445 ]

wangzhihui commented on HADOOP-18958:
-

[~hexiaoqiao] Can you help us make the decision?

> UserGroupInformation debug log improve
> --
>
> Key: HADOOP-18958
> URL: https://issues.apache.org/jira/browse/HADOOP-18958
> Project: Hadoop Common
> Issue Type: Improvement
> Components: common
> Affects Versions: 3.3.5, 3.3.3, 3.3.4
> Reporter: wangzhihui
> Priority: Minor
> Labels: pull-request-available
> Fix For: 3.3.4
>
> Attachments: 20231029-122825-1.jpeg, 20231029-122825.jpeg, 20231030-143525.jpeg, image-2023-10-29-09-47-56-489.png, image-2023-10-30-14-35-11-161.png
>
> Original Estimate: 1h
> Remaining Estimate: 1h
>
> Using "new Exception( )" to print the call stack of the "doAs" method in the UserGroupInformation class prints meaningless Exception information and too many call stacks; this is not conducive to troubleshooting.
> *example:*
> !20231029-122825.jpeg|width=991,height=548!
>
> *improved result*:
>
> !image-2023-10-29-09-47-56-489.png|width=1099,height=156!
> !20231030-143525.jpeg|width=572,height=674!

-- This message was sent by Atlassian Jira (v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-17248. Fix isValidName error [hadoop]
LiuGuH commented on PR #6249: URL: https://github.com/apache/hadoop/pull/6249#issuecomment-1791981813

@GauthamBanasandra, hello sir, do you have time to review it? Thanks.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HDFS-17248. Fix isValidName error [hadoop]
hadoop-yetus commented on PR #6249: URL: https://github.com/apache/hadoop/pull/6249#issuecomment-1791979449 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 30s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 51s | | trunk passed | | +1 :green_heart: | compile | 0m 44s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 40s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | checkstyle | 0m 23s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 40s | | trunk passed | | +1 :green_heart: | javadoc | 0m 39s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 32s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | spotbugs | 1m 38s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 50s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 32s | | the patch passed | | +1 :green_heart: | compile | 0m 34s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 34s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 15s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 32s | | the patch passed | | +1 :green_heart: | javadoc | 0m 26s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 26s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | spotbugs | 1m 31s | | the patch passed | | +1 :green_heart: | shadedclient | 21m 13s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 54s | | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 28s | | The patch does not generate ASF License warnings. 
| | | | 92m 9s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6249/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6249 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 1647ddffbe6c 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 9f7aac66b9792b14e10c376f1ab5d259af75a7aa | | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6249/1/testReport/ | | Max. process+thread count | 726 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6249/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific
Re: [PR] YARN-11606. Upgrade fst to 2.57 [hadoop]
hadoop-yetus commented on PR #6246: URL: https://github.com/apache/hadoop/pull/6246#issuecomment-1791971145 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 29s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +0 :ok: | shelldocs | 0m 1s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 13m 50s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 18s | | trunk passed | | +1 :green_heart: | compile | 9m 53s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 8m 56s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | mvnsite | 13m 44s | | trunk passed | | +1 :green_heart: | javadoc | 5m 47s | | trunk passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 5m 0s | | trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | shadedclient | 31m 1s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 17m 50s | | the patch passed | | +1 :green_heart: | compile | 9m 33s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 9m 33s | | the patch passed | | +1 :green_heart: | compile | 8m 59s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | javac | 8m 59s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 9m 0s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | javadoc | 5m 46s | | the patch passed with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 5m 2s | | the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | +1 :green_heart: | shadedclient | 31m 54s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 651m 7s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6246/3/artifact/out/patch-unit-root.txt) | root in the patch passed. | | +1 :green_heart: | asflicense | 1m 23s | | The patch does not generate ASF License warnings. 
| | | | 832m 48s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 | | | hadoop.hdfs.TestDFSUtil | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6246/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6246 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs | | uname | Linux 17748ebd0028 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 463d7834e0c19befc5e6b2787c69d87381c6f9b4 | | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6246/3/testReport/ | | Max. process+thread count | 3598 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice . U: . | | Console output |