[jira] [Commented] (HADOOP-18502) Hadoop metrics should return 0 when there is no change
[ https://issues.apache.org/jira/browse/HADOOP-18502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621515#comment-17621515 ]

ASF GitHub Bot commented on HADOOP-18502:
-----------------------------------------

ted12138 opened a new pull request, #5058:
URL: https://github.com/apache/hadoop/pull/5058

### Description of PR

### How was this patch tested?

### For code changes:

- [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

> Hadoop metrics should return 0 when there is no change
> ------------------------------------------------------
>
>                 Key: HADOOP-18502
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18502
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: leo sun
>            Assignee: leo sun
>            Priority: Major
>         Attachments: image-2022-10-21-14-41-43-105.png
>
> When we try to switch the active NN to standby, we find that the getContentSummary average time stays at a very high value even when there are no more queries. For us, it would be more reasonable for the metric to return 0. The monitor is shown below:
> !image-2022-10-21-14-41-43-105.png!

--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18502) Hadoop metrics should return 0 when there is no change
[ https://issues.apache.org/jira/browse/HADOOP-18502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HADOOP-18502:
------------------------------------
    Labels: pull-request-available  (was: )

> Hadoop metrics should return 0 when there is no change
> ------------------------------------------------------
>
>                 Key: HADOOP-18502
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18502
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: leo sun
>            Assignee: leo sun
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: image-2022-10-21-14-41-43-105.png
>
> When we try to switch the active NN to standby, we find that the getContentSummary average time stays at a very high value even when there are no more queries. For us, it would be more reasonable for the metric to return 0. The monitor is shown below:
> !image-2022-10-21-14-41-43-105.png!
[GitHub] [hadoop] ted12138 opened a new pull request, #5058: HADOOP-18502. MutableStat should return 0 when there is no change
ted12138 opened a new pull request, #5058:
URL: https://github.com/apache/hadoop/pull/5058

### Description of PR

### How was this patch tested?

### For code changes:

- [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hadoop] ted12138 closed pull request #5049: HDFS-16808. MutableStat will return 0 when there is no change
ted12138 closed pull request #5049: HDFS-16808. MutableStat will return 0 when there is no change
URL: https://github.com/apache/hadoop/pull/5049
[jira] [Created] (HADOOP-18502) Hadoop metrics should return 0 when there is no change
leo sun created HADOOP-18502:
---------------------------------

             Summary: Hadoop metrics should return 0 when there is no change
                 Key: HADOOP-18502
                 URL: https://issues.apache.org/jira/browse/HADOOP-18502
             Project: Hadoop Common
          Issue Type: Improvement
            Reporter: leo sun
            Assignee: leo sun
         Attachments: image-2022-10-21-14-41-43-105.png

When we try to switch the active NN to standby, we find that the getContentSummary average time stays at a very high value even when there are no more queries. For us, it would be more reasonable for the metric to return 0. The monitor is shown below:

!image-2022-10-21-14-41-43-105.png!
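The behavior the issue describes — an interval-average metric that keeps reporting the last computed value when no new samples arrive — can be sketched with a toy accumulator. This is an illustrative stand-in only, not Hadoop's actual `MutableStat` implementation; all names here are invented for the sketch.

```java
// Toy interval-average metric illustrating the semantics proposed in
// HADOOP-18502. This is a simplified stand-in, NOT Hadoop's real
// org.apache.hadoop.metrics2.lib.MutableStat.
public class IntervalAverage {
  private long numSamples = 0;
  private double total = 0;
  private double lastAvg = 0;

  /** Record one sample (e.g. one getContentSummary latency). */
  public synchronized void add(double value) {
    numSamples++;
    total += value;
  }

  /**
   * Emit the average over the elapsed interval and reset the window.
   * With the proposed semantics an idle interval reports 0; the behavior
   * the issue complains about would instead keep returning the stale
   * {@code lastAvg} forever.
   */
  public synchronized double snapshot() {
    if (numSamples == 0) {
      return 0; // proposed: no activity => report 0, not the stale average
      // return lastAvg; // the existing behavior the issue describes
    }
    lastAvg = total / numSamples;
    numSamples = 0;
    total = 0;
    return lastAvg;
  }
}
```

With this sketch, a `snapshot()` after recording samples of 10 and 20 yields 15, and a subsequent snapshot of an idle interval yields 0 rather than repeating 15.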
[GitHub] [hadoop] slfan1989 opened a new pull request, #5057: YARN-11359. [Federation] Routing admin invocations transparently to multiple RMs.
slfan1989 opened a new pull request, #5057:
URL: https://github.com/apache/hadoop/pull/5057

JIRA: YARN-11359. [Federation] Routing admin invocations transparently to multiple RMs.
[GitHub] [hadoop] slfan1989 opened a new pull request, #5056: YARN-11358. [Federation] Add Strict Mode Configuration Parameters.
slfan1989 opened a new pull request, #5056:
URL: https://github.com/apache/hadoop/pull/5056

JIRA: YARN-11358. [Federation] Add Strict Mode Configuration Parameters.
[GitHub] [hadoop] hadoop-yetus commented on pull request #4982: YARN-11332. [Federation] Improve FederationClientInterceptor#ThreadPool thread pool configuration.
hadoop-yetus commented on PR #4982:
URL: https://github.com/apache/hadoop/pull/4982#issuecomment-1286453733

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 47s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 18m 34s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 28m 18s | | trunk passed |
| +1 :green_heart: | compile | 10m 6s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 10m 16s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 2m 10s | | trunk passed |
| +1 :green_heart: | mvnsite | 3m 25s | | trunk passed |
| +1 :green_heart: | javadoc | 3m 44s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 3m 8s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 5m 56s | | trunk passed |
| +1 :green_heart: | shadedclient | 22m 59s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 57s | | the patch passed |
| +1 :green_heart: | compile | 10m 15s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 10m 15s | | hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 0 new + 731 unchanged - 2 fixed = 731 total (was 733) |
| +1 :green_heart: | compile | 9m 16s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 9m 16s | | hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 generated 0 new + 641 unchanged - 2 fixed = 641 total (was 643) |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 57s | | the patch passed |
| +1 :green_heart: | mvnsite | 3m 19s | | the patch passed |
| +1 :green_heart: | javadoc | 2m 59s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 2m 43s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 6m 17s | | the patch passed |
| +1 :green_heart: | shadedclient | 22m 56s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 33s | | hadoop-yarn-api in the patch passed. |
| +1 :green_heart: | unit | 5m 22s | | hadoop-yarn-common in the patch passed. |
| -1 :x: | unit | 5m 50s | [/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4982/11/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt) | hadoop-yarn-server-router in the patch passed. |
| +1 :green_heart: | asflicense | 1m 16s | | The patch does not generate ASF License warnings. |
| | | 189m 52s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.yarn.server.router.clientrm.TestFederationClientInterceptorRetry |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4982/11/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4982 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
| uname | Linux 0ecadfa6fe23 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Buil
[GitHub] [hadoop] hadoop-yetus commented on pull request #5030: YARN-11345. [Federation] Refactoring Yarn Router's Application Web Page.
hadoop-yetus commented on PR #5030:
URL: https://github.com/apache/hadoop/pull/5030#issuecomment-1286438510

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 41s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 1s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 18m 31s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 27m 51s | | trunk passed |
| +1 :green_heart: | compile | 10m 44s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 9m 17s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 2m 4s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 10s | | trunk passed |
| +1 :green_heart: | javadoc | 2m 22s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 2m 11s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 38s | | trunk passed |
| +1 :green_heart: | shadedclient | 22m 14s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 15s | | the patch passed |
| +1 :green_heart: | compile | 9m 51s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 9m 51s | | the patch passed |
| +1 :green_heart: | compile | 9m 4s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 9m 4s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 47s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5030/5/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt) | hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 3 unchanged - 0 fixed = 4 total (was 3) |
| +1 :green_heart: | mvnsite | 1m 59s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 59s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 53s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 35s | | the patch passed |
| +1 :green_heart: | shadedclient | 24m 31s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 5m 21s | | hadoop-yarn-common in the patch passed. |
| +1 :green_heart: | unit | 5m 49s | | hadoop-yarn-server-router in the patch passed. |
| +1 :green_heart: | asflicense | 1m 14s | | The patch does not generate ASF License warnings. |
| | | 174m 24s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5030/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5030 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 89d75cd3feb7 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / d2b820682398e8f8f449b089a7227c99c26e1c1e |
| Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5030/5/testReport/ |
| Max. process+thread count | 814 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-se
[GitHub] [hadoop] hadoop-yetus commented on pull request #5055: YARN-11357. Fix FederationClientInterceptor#submitApplication Can't Update SubClusterId
hadoop-yetus commented on PR #5055:
URL: https://github.com/apache/hadoop/pull/5055#issuecomment-1286427217

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 55s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 43m 1s | | trunk passed |
| +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 35s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 0m 35s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 43s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 43s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 32s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 1m 10s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 49s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 26s | | the patch passed |
| +1 :green_heart: | compile | 0m 27s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 27s | | the patch passed |
| +1 :green_heart: | compile | 0m 24s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 0m 24s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 17s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 26s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 0m 57s | | the patch passed |
| +1 :green_heart: | shadedclient | 23m 18s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 5m 15s | | hadoop-yarn-server-router in the patch passed. |
| +1 :green_heart: | asflicense | 0m 40s | | The patch does not generate ASF License warnings. |
| | | 107m 6s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5055/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5055 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 66db3e38c4de 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / a403687d5fbe0cc4fcc67f24bb0a3a714fa77bee |
| Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5055/1/testReport/ |
| Max. process+thread count | 732 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5055/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

-- This is an automated message from the Apache Git Ser
[jira] [Commented] (HADOOP-18399) SingleFilePerBlockCache to use LocalDirAllocator for file allocation
[ https://issues.apache.org/jira/browse/HADOOP-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621471#comment-17621471 ]

ASF GitHub Bot commented on HADOOP-18399:
-----------------------------------------

hadoop-yetus commented on PR #5054:
URL: https://github.com/apache/hadoop/pull/5054#issuecomment-1286420083

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 56s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 16m 34s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 30m 22s | | trunk passed |
| +1 :green_heart: | compile | 28m 11s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 24m 56s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 5m 18s | | trunk passed |
| +1 :green_heart: | mvnsite | 3m 27s | | trunk passed |
| +1 :green_heart: | javadoc | 2m 30s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 2m 1s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 5m 4s | | trunk passed |
| +1 :green_heart: | shadedclient | 25m 34s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 53s | | the patch passed |
| +1 :green_heart: | compile | 25m 20s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 25m 20s | | the patch passed |
| +1 :green_heart: | compile | 22m 4s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 22m 4s | | the patch passed |
| +1 :green_heart: | blanks | 0m 1s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 4m 21s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/1/artifact/out/results-checkstyle-root.txt) | root: The patch generated 4 new + 3 unchanged - 0 fixed = 7 total (was 3) |
| +1 :green_heart: | mvnsite | 2m 53s | | the patch passed |
| -1 :x: | javadoc | 1m 16s | [/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/1/artifact/out/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | hadoop-common-project_hadoop-common-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| -1 :x: | javadoc | 0m 52s | [/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/1/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt) | hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) |
| +1 :green_heart: | spotbugs | 4m 35s | | the patch passed |
| +1 :green_heart: | shadedclient | 24m 40s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 18m 36s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 3m 7s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 1m 10s | | The patch does not generate ASF License warnings. |
| | | 261m 47s | | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5054/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apach
[GitHub] [hadoop] hadoop-yetus commented on pull request #5054: HADOOP-18399 Prefetch - SingleFilePerBlockCache to use LocalDirAllocator for file allocation
hadoop-yetus commented on PR #5054:
URL: https://github.com/apache/hadoop/pull/5054#issuecomment-1286420083
[jira] [Commented] (HADOOP-18487) protobuf-2.5.0 dependencies => provided
[ https://issues.apache.org/jira/browse/HADOOP-18487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621455#comment-17621455 ]

ASF GitHub Bot commented on HADOOP-18487:
-----------------------------------------

hadoop-yetus commented on PR #4996:
URL: https://github.com/apache/hadoop/pull/4996#issuecomment-1286385295

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 59s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 15m 36s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 57s | | trunk passed |
| +1 :green_heart: | compile | 23m 22s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 20m 55s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 4m 26s | | trunk passed |
| +1 :green_heart: | mvnsite | 18m 58s | | trunk passed |
| +1 :green_heart: | javadoc | 15m 50s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 15m 53s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +0 :ok: | spotbugs | 1m 20s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) |
| +1 :green_heart: | shadedclient | 21m 57s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 22m 27s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 34s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 10m 43s | | the patch passed |
| +1 :green_heart: | compile | 22m 34s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| -1 :x: | javac | 22m 34s | [/results-compile-javac-root-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/4/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | root-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 1 new + 2821 unchanged - 1 fixed = 2822 total (was 2822) |
| +1 :green_heart: | compile | 20m 38s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| -1 :x: | javac | 20m 38s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/4/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt) | root-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 generated 1 new + 2614 unchanged - 1 fixed = 2615 total (was 2615) |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 4m 1s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/4/artifact/out/results-checkstyle-root.txt) | root: The patch generated 1 new + 273 unchanged - 5 fixed = 274 total (was 278) |
| +1 :green_heart: | mvnsite | 18m 45s | | the patch passed |
| +1 :green_heart: | javadoc | 16m 19s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 15m 42s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +0 :ok: | spotbugs | 0m 52s | | hadoop-project has no data from spotbugs |
| -1 :x: | spotbugs | 3m 4s | [/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/4/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html) | hadoop-common-project/hadoop-common generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) |
| -1 :x: | shadedclient | 21m 47s | | patch has errors when building and testing
[GitHub] [hadoop] hadoop-yetus commented on pull request #4996: HADOOP-18487. protobuf 2.5.0 marked as provided.
hadoop-yetus commented on PR #4996: URL: https://github.com/apache/hadoop/pull/4996#issuecomment-1286385295 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 59s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 36s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 25m 57s | | trunk passed | | +1 :green_heart: | compile | 23m 22s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 20m 55s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 4m 26s | | trunk passed | | +1 :green_heart: | mvnsite | 18m 58s | | trunk passed | | +1 :green_heart: | javadoc | 15m 50s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 15m 53s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +0 :ok: | spotbugs | 1m 20s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 21m 57s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 22m 27s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 34s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 10m 43s | | the patch passed | | +1 :green_heart: | compile | 22m 34s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | -1 :x: | javac | 22m 34s | [/results-compile-javac-root-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/4/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | root-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 1 new + 2821 unchanged - 1 fixed = 2822 total (was 2822) | | +1 :green_heart: | compile | 20m 38s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | -1 :x: | javac | 20m 38s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/4/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07.txt) | root-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 generated 1 new + 2614 unchanged - 1 fixed = 2615 total (was 2615) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 4m 1s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/4/artifact/out/results-checkstyle-root.txt) | root: The patch generated 1 new + 273 unchanged - 5 fixed = 274 total (was 278) | | +1 :green_heart: | mvnsite | 18m 45s | | the patch passed | | +1 :green_heart: | javadoc | 16m 19s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 15m 42s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +0 :ok: | spotbugs | 0m 52s | | hadoop-project has no data from spotbugs | | -1 :x: | spotbugs | 3m 4s | [/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4996/4/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html) | hadoop-common-project/hadoop-common generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) | | -1 :x: | shadedclient | 21m 47s | | patch has errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 53s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 18m 42s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit |
[GitHub] [hadoop] slfan1989 opened a new pull request, #5055: YARN-11357. Fix FederationClientInterceptor#submitApplication Can't Update SubClusterId
slfan1989 opened a new pull request, #5055: URL: https://github.com/apache/hadoop/pull/5055 JIRA: YARN-11357. Fix FederationClientInterceptor#submitApplication Can't Update SubClusterId. In [YARN-11342](https://issues.apache.org/jira/browse/YARN-11342), I refactored the submitApplication method, but in the course of that implementation the if condition was written incorrectly. ``` ... // Step 2. Query homeSubCluster according to ApplicationId. Boolean exists = existsApplicationHomeSubCluster(applicationId); ApplicationHomeSubCluster appHomeSubCluster = ApplicationHomeSubCluster.newInstance(applicationId, subClusterId); // should be !exists if (exists || retryCount == 0) { addApplicationHomeSubCluster(applicationId, appHomeSubCluster); } else { updateApplicationHomeSubCluster(subClusterId, applicationId, appHomeSubCluster); } ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
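The fix described above boils down to negating the existence check. The sketch below isolates that decision; the class and method names (SubmitRouting, route) are illustrative stand-ins, not the real FederationClientInterceptor code, and the state-store calls are reduced to string results so only the corrected condition is shown.

```java
// Illustrative sketch of the corrected branch from the snippet above.
// In the real interceptor, "add" maps to addApplicationHomeSubCluster
// and "update" to updateApplicationHomeSubCluster.
public class SubmitRouting {
  /**
   * Decide whether to add a new ApplicationId -> SubClusterId mapping
   * or update an existing one. With the fix, a mapping is added when
   * the application is not yet known (or on the first attempt), and
   * updated otherwise.
   */
  public static String route(boolean exists, int retryCount) {
    // was: if (exists || retryCount == 0) -- the bug the PR fixes
    if (!exists || retryCount == 0) {
      return "add";
    } else {
      return "update";
    }
  }

  public static void main(String[] args) {
    System.out.println(route(false, 1)); // unknown app on a retry -> add
    System.out.println(route(true, 1));  // known app on a retry   -> update
  }
}
```

With the original condition, a known application on a retry would have been re-added instead of updated, which is why the SubClusterId could never change.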
[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5030: YARN-11345. [Federation] Refactoring Yarn Router's Application Web Page.
slfan1989 commented on code in PR #5030: URL: https://github.com/apache/hadoop/pull/5030#discussion_r1001269099 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java: ## @@ -81,53 +155,57 @@ protected void render(Block html) { // Render the applications StringBuilder appsTableData = new StringBuilder("[\n"); -for (AppInfo app : apps.getApps()) { - try { - -String percent = String.format("%.1f", app.getProgress() * 100.0F); -String trackingURL = -app.getTrackingUrl() == null ? "#" : app.getTrackingUrl(); -// AppID numerical value parsed by parseHadoopID in yarn.dt.plugins.js -appsTableData.append("[\"") -.append("") -.append(app.getAppId()).append("\",\"") -.append(escape(app.getUser())).append("\",\"") -.append(escape(app.getName())).append("\",\"") -.append(escape(app.getApplicationType())).append("\",\"") -.append(escape(app.getQueue())).append("\",\"") -.append(String.valueOf(app.getPriority())).append("\",\"") -.append(app.getStartTime()).append("\",\"") -.append(app.getFinishTime()).append("\",\"") -.append(app.getState()).append("\",\"") -.append(app.getFinalStatus()).append("\",\"") -// Progress bar -.append(" ").append(" ") -// History link -.append("\",\"") -.append("History").append(""); -appsTableData.append("\"],\n"); - - } catch (Exception e) { -LOG.info( -"Cannot add application {}: {}", app.getAppId(), e.getMessage()); + +if (appsInfo != null) { + Collection apps = appsInfo.getApps(); + if (CollectionUtils.isNotEmpty(apps)) { +int numApps = apps.size(); +int i = 0; +for (AppInfo app : apps) { + try { +String percent = String.format("%.1f", app.getProgress() * 100.0F); +String trackingURL = +app.getTrackingUrl() == null ? 
"#" : app.getTrackingUrl(); + +// AppID numerical value parsed by parseHadoopID in yarn.dt.plugins.js +appsTableData.append("[\"") +.append("") +.append(app.getAppId()).append("\",\"") +.append(escape(app.getUser())).append("\",\"") +.append(escape(app.getName())).append("\",\"") +.append(escape(app.getApplicationType())).append("\",\"") +.append(escape(app.getQueue())).append("\",\"") +.append(app.getPriority()).append("\",\"") +.append(app.getStartTime()).append("\",\"") +.append(app.getFinishTime()).append("\",\"") +.append(app.getState()).append("\",\"") +.append(app.getFinalStatus()).append("\",\"") +// Progress bar +.append(" ").append(" ") +// History link +.append("\",\"") +.append("History").append(""); +appsTableData.append("\"]\n"); + +if (i < numApps - 1) { + appsTableData.append(","); +} + } catch (Exception e) { +LOG.info("Cannot add application {}: {}", app.getAppId(), e.getMessage()); + } + i++; Review Comment: I read this part of the code carefully; the original approach is reasonable. The extra comma has to be stripped outside the loop, because inside the loop we cannot tell which item will turn out to be the last one to succeed. For example, with four apps A, B, C and D we expect the result [A,B,C,D], but if the last app D fails while we traverse the list we end up with [A,B,C,] — a case that cannot be handled inside the loop. The readability of the original logic was poor, though, so I refactored this part of the code.
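The comma-placement trade-off discussed in the review above can be seen in a small self-contained sketch. JsonArrayBuilder and render are illustrative names; the AppInfo rendering is reduced to appending a string, and an item that fails to render is represented by null.

```java
import java.util.Arrays;
import java.util.List;

public class JsonArrayBuilder {
  // Build a JSON-style array body, skipping items whose rendering fails,
  // and strip any trailing comma after the loop -- the approach the
  // reviewer defends, which stays correct even when the last item fails.
  public static String render(List<String> items) {
    StringBuilder sb = new StringBuilder("[");
    for (String item : items) {
      try {
        if (item == null) {
          throw new IllegalStateException("render failed");
        }
        sb.append('"').append(item).append("\",");
      } catch (RuntimeException e) {
        // skip the failed item, as AppsBlock's catch block does
      }
    }
    // Remove the trailing comma, if any item rendered at all.
    if (sb.charAt(sb.length() - 1) == ',') {
      sb.setLength(sb.length() - 1);
    }
    return sb.append(']').toString();
  }

  public static void main(String[] args) {
    // The last item fails: an index-based "append comma unless last"
    // scheme would leave a dangling comma here; post-loop cleanup does not.
    System.out.println(render(Arrays.asList("A", "B", "C", null)));
  }
}
```

Running this prints `["A","B","C"]`, whereas appending the comma inside the loop based on the item index would have produced `["A","B","C",]` — exactly the failure mode the review describes.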
[jira] [Commented] (HADOOP-17705) S3A to add option fs.s3a.endpoint.region to set AWS region
[ https://issues.apache.org/jira/browse/HADOOP-17705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621413#comment-17621413 ] Greg Senia commented on HADOOP-17705: - This also fixes it on Hadoop 2.x versions if the SDK is updated to 1.11.45 with the aws-sdk patched... We have a case open with AWS to force them to address this. https://github.com/aws/aws-sdk-java/pull/2537 > S3A to add option fs.s3a.endpoint.region to set AWS region > -- > > Key: HADOOP-17705 > URL: https://issues.apache.org/jira/browse/HADOOP-17705 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Mehakmeet Singh >Assignee: Mehakmeet Singh >Priority: Major > Labels: pull-request-available > Fix For: 3.3.2 > > Time Spent: 3h > Remaining Estimate: 0h > > Currently, the AWS region is constructed from the endpoint URL by assuming > that the second component (split on ".") is the region. This doesn't work for > private links and falls back to the default us-east-1, causing an > authorization issue w.r.t. the private link. > The option fs.s3a.endpoint.region allows the region to be set explicitly. > h2. how to set the s3 region on older hadoop releases > For anyone who needs to set the signing region on older versions of the s3a > client *you do not need this feature*; instead, just provide a custom endpoint > to region mapping json file: > # Download the default region mapping file > [awssdk_config_default.json|https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-core/src/main/resources/com/amazonaws/internal/config/awssdk_config_default.json] > # Add a new regular expression to map the endpoint/hostname to the target > region > # Save the file as {{/etc/hadoop/conf/awssdk_config_override.json}} > # Verify that basic hadoop fs -ls commands work > # Copy to the rest of the cluster. 
> # There should be no need to restart any services -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
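For step 2 above, a minimal override file might look like the sketch below. The top-level key and entry fields follow the shape of awssdk_config_default.json in the v1 AWS SDK (hostRegexToRegionMappings entries with hostNameRegex and regionName); the VPC endpoint hostname and region here are made-up examples, so check the exact schema against the downloaded default file before deploying.

```json
{
  "hostRexToRegionMappings_comment": "illustrative override; key name per the v1 SDK default file",
  "hostRegexToRegionMappings": [
    {
      "hostNameRegex": ".*\\.vpce\\.example\\.internal",
      "regionName": "eu-west-1"
    }
  ]
}
```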
[jira] [Commented] (HADOOP-18399) SingleFilePerBlockCache to use LocalDirAllocator for file allocatoin
[ https://issues.apache.org/jira/browse/HADOOP-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621411#comment-17621411 ] ASF GitHub Bot commented on HADOOP-18399: - virajjasani commented on PR #5054: URL: https://github.com/apache/hadoop/pull/5054#issuecomment-1286286255 > SingleFilePerBlockCache to use LocalDirAllocator for file allocatoin > > > Key: HADOOP-18399 > URL: https://issues.apache.org/jira/browse/HADOOP-18399 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > > prefetching stream's SingleFilePerBlockCache uses Files.tempFile() to > allocate a temp file. > it should be using LocalDirAllocator to allocate space from a list of dirs, > taking a config key to use. for s3a we will use the Constants.BUFFER_DIR > option, which on yarn deployments is fixed under the env.LOCAL_DIR path, so > automatically cleaned up on container exit
[GitHub] [hadoop] virajjasani commented on pull request #5054: HADOOP-18399 Prefetch - SingleFilePerBlockCache to use LocalDirAllocator for file allocation
virajjasani commented on PR #5054: URL: https://github.com/apache/hadoop/pull/5054#issuecomment-1286286255 ``` [INFO] --- maven-surefire-plugin:3.0.0-M1:test (default-test) @ hadoop-aws --- [INFO] [INFO] --- [INFO] T E S T S [INFO] --- [INFO] Results: [INFO] [WARNING] Tests run: 436, Failures: 0, Errors: 0, Skipped: 4 [INFO] --- maven-failsafe-plugin:3.0.0-M1:integration-test (default-integration-test) @ hadoop-aws --- [INFO] [INFO] --- [INFO] T E S T S [INFO] --- [INFO] Results: [INFO] [WARNING] Tests run: 1154, Failures: 0, Errors: 0, Skipped: 148 [INFO] --- maven-failsafe-plugin:3.0.0-M1:integration-test (sequential-integration-tests) @ hadoop-aws --- [INFO] [INFO] --- [INFO] T E S T S [INFO] --- [INFO] Results: [INFO] [ERROR] Errors: [ERROR] ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRecursiveRootListing:267 » TestTimedOut [INFO] [ERROR] Tests run: 124, Failures: 0, Errors: 1, Skipped: 10 ```
[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5030: YARN-11345. [Federation] Refactoring Yarn Router's Application Web Page.
slfan1989 commented on code in PR #5030: URL: https://github.com/apache/hadoop/pull/5030#discussion_r1001214547 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java: ## @@ -81,53 +155,57 @@ protected void render(Block html) { // Render the applications StringBuilder appsTableData = new StringBuilder("[\n"); -for (AppInfo app : apps.getApps()) { - try { - -String percent = String.format("%.1f", app.getProgress() * 100.0F); -String trackingURL = -app.getTrackingUrl() == null ? "#" : app.getTrackingUrl(); -// AppID numerical value parsed by parseHadoopID in yarn.dt.plugins.js -appsTableData.append("[\"") -.append("") -.append(app.getAppId()).append("\",\"") -.append(escape(app.getUser())).append("\",\"") -.append(escape(app.getName())).append("\",\"") -.append(escape(app.getApplicationType())).append("\",\"") -.append(escape(app.getQueue())).append("\",\"") -.append(String.valueOf(app.getPriority())).append("\",\"") -.append(app.getStartTime()).append("\",\"") -.append(app.getFinishTime()).append("\",\"") -.append(app.getState()).append("\",\"") -.append(app.getFinalStatus()).append("\",\"") -// Progress bar -.append(" ").append(" ") -// History link -.append("\",\"") -.append("History").append(""); -appsTableData.append("\"],\n"); - - } catch (Exception e) { -LOG.info( -"Cannot add application {}: {}", app.getAppId(), e.getMessage()); + +if (appsInfo != null) { + Collection apps = appsInfo.getApps(); + if (CollectionUtils.isNotEmpty(apps)) { +int numApps = apps.size(); +int i = 0; +for (AppInfo app : apps) { + try { +String percent = String.format("%.1f", app.getProgress() * 100.0F); +String trackingURL = +app.getTrackingUrl() == null ? 
"#" : app.getTrackingUrl(); + +// AppID numerical value parsed by parseHadoopID in yarn.dt.plugins.js +appsTableData.append("[\"") +.append("") +.append(app.getAppId()).append("\",\"") +.append(escape(app.getUser())).append("\",\"") +.append(escape(app.getName())).append("\",\"") +.append(escape(app.getApplicationType())).append("\",\"") +.append(escape(app.getQueue())).append("\",\"") +.append(app.getPriority()).append("\",\"") +.append(app.getStartTime()).append("\",\"") +.append(app.getFinishTime()).append("\",\"") +.append(app.getState()).append("\",\"") +.append(app.getFinalStatus()).append("\",\"") +// Progress bar +.append(" ").append(" ") +// History link +.append("\",\"") +.append("History").append(""); +appsTableData.append("\"]\n"); + +if (i < numApps - 1) { + appsTableData.append(","); +} + } catch (Exception e) { +LOG.info("Cannot add application {}: {}", app.getAppId(), e.getMessage()); + } + i++; Review Comment: Thanks for your suggestion, I will refactor this part of the code to make it more readable.
[jira] [Updated] (HADOOP-18399) SingleFilePerBlockCache to use LocalDirAllocator for file allocatoin
[ https://issues.apache.org/jira/browse/HADOOP-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-18399: Labels: pull-request-available (was: )
[jira] [Commented] (HADOOP-18399) SingleFilePerBlockCache to use LocalDirAllocator for file allocatoin
[ https://issues.apache.org/jira/browse/HADOOP-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621407#comment-17621407 ] ASF GitHub Bot commented on HADOOP-18399: - virajjasani commented on PR #5054: URL: https://github.com/apache/hadoop/pull/5054#issuecomment-1286270467 Tested against `us-west-2` with: `mvn -Dparallel-tests -DtestsThreadCount=8 -Dscale clean verify` and `mvn -Dparallel-tests -DtestsThreadCount=8 -Dprefetch -Dscale clean verify`
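The HADOOP-18399 change replaces Files.createTempFile() with Hadoop's LocalDirAllocator, which picks a configured local directory with enough usable space before creating the file. The self-contained sketch below approximates only that selection step; BufferDirPicker is an illustrative name, the real class keeps round-robin state and reads the directory list from a Configuration key such as fs.s3a.buffer.dir, both of which are elided here.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class BufferDirPicker {
  // Pick the first configured directory with enough usable space and
  // create the temp file there -- a rough stand-in for
  // LocalDirAllocator#createTmpFileForWrite(prefix, size, conf).
  public static File createTmpFileForWrite(String[] dirs, String prefix,
                                           long size) throws IOException {
    for (String d : dirs) {
      File dir = new File(d);
      if ((dir.isDirectory() || dir.mkdirs()) && dir.getUsableSpace() >= size) {
        return File.createTempFile(prefix, null, dir);
      }
    }
    throw new IOException("no configured dir with " + size + " bytes free");
  }

  public static void main(String[] args) throws IOException {
    File tmpRoot = Files.createTempDirectory("buffer-dir").toFile();
    File f = createTmpFileForWrite(new String[] {tmpRoot.getPath()},
        "fs-cache-", 1024);
    System.out.println(f.getName().startsWith("fs-cache-"));
  }
}
```

On YARN the configured directories sit under the container's local dirs, which is why the issue notes the cache files are cleaned up automatically on container exit.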
[GitHub] [hadoop] virajjasani commented on pull request #5054: HADOOP-18399 Prefetch - SingleFilePerBlockCache to use LocalDirAllocator for file allocation
virajjasani commented on PR #5054: URL: https://github.com/apache/hadoop/pull/5054#issuecomment-1286270467 Tested against `us-west-2` with: `mvn -Dparallel-tests -DtestsThreadCount=8 -Dscale clean verify` and `mvn -Dparallel-tests -DtestsThreadCount=8 -Dprefetch -Dscale clean verify`
[GitHub] [hadoop] hadoop-yetus commented on pull request #5017: YARN-11330. use secure XML parsers (#4981)
hadoop-yetus commented on PR #5017: URL: https://github.com/apache/hadoop/pull/5017#issuecomment-1286243493 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 49s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 15 new or modified test files. | _ branch-3.3 Compile Tests _ | | +0 :ok: | mvndep | 16m 1s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 24m 10s | | branch-3.3 passed | | +1 :green_heart: | compile | 18m 24s | | branch-3.3 passed | | +1 :green_heart: | checkstyle | 3m 3s | | branch-3.3 passed | | +1 :green_heart: | mvnsite | 6m 0s | | branch-3.3 passed | | +1 :green_heart: | javadoc | 4m 50s | | branch-3.3 passed | | +1 :green_heart: | spotbugs | 8m 48s | | branch-3.3 passed | | +1 :green_heart: | shadedclient | 25m 40s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 35s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 30s | | the patch passed | | +1 :green_heart: | compile | 17m 35s | | the patch passed | | +1 :green_heart: | javac | 17m 35s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 2m 59s | | the patch passed | | +1 :green_heart: | mvnsite | 5m 48s | | the patch passed | | +1 :green_heart: | javadoc | 4m 20s | | the patch passed | | +1 :green_heart: | spotbugs | 9m 3s | | the patch passed | | +1 :green_heart: | shadedclient | 25m 37s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 18m 28s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5017/3/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 23m 22s | | hadoop-yarn-server-nodemanager in the patch passed. | | +1 :green_heart: | unit | 90m 38s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | unit | 27m 44s | | hadoop-yarn-client in the patch passed. | | +1 :green_heart: | asflicense | 1m 29s | | The patch does not generate ASF License warnings. | | | | 343m 2s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.security.ssl.TestReloadingX509TrustManager | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5017/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5017 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 48ddbcab3355 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-3.3 / 7ba468df1f8725b3f2230171773b515540c13b92 | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~18.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5017/3/testReport/ | | Max. process+thread count | 1860 (vs. 
ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client U: . | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5017/3/console | | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact
Infrastructure at: us...@infra.apache.org
[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5030: YARN-11345. [Federation] Refactoring Yarn Router's Application Web Page.
slfan1989 commented on code in PR #5030: URL: https://github.com/apache/hadoop/pull/5030#discussion_r1001179187 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java: ## @@ -81,53 +155,57 @@ protected void render(Block html) { // Render the applications StringBuilder appsTableData = new StringBuilder("[\n"); -for (AppInfo app : apps.getApps()) { - try { - -String percent = String.format("%.1f", app.getProgress() * 100.0F); -String trackingURL = -app.getTrackingUrl() == null ? "#" : app.getTrackingUrl(); -// AppID numerical value parsed by parseHadoopID in yarn.dt.plugins.js -appsTableData.append("[\"") -.append("") -.append(app.getAppId()).append("\",\"") -.append(escape(app.getUser())).append("\",\"") -.append(escape(app.getName())).append("\",\"") -.append(escape(app.getApplicationType())).append("\",\"") -.append(escape(app.getQueue())).append("\",\"") -.append(String.valueOf(app.getPriority())).append("\",\"") -.append(app.getStartTime()).append("\",\"") -.append(app.getFinishTime()).append("\",\"") -.append(app.getState()).append("\",\"") -.append(app.getFinalStatus()).append("\",\"") -// Progress bar -.append(" ").append(" ") -// History link -.append("\",\"") -.append("History").append(""); -appsTableData.append("\"],\n"); - - } catch (Exception e) { -LOG.info( -"Cannot add application {}: {}", app.getAppId(), e.getMessage()); + +if (appsInfo != null) { + Collection apps = appsInfo.getApps(); + if (CollectionUtils.isNotEmpty(apps)) { +int numApps = apps.size(); +int i = 0; Review Comment: I put i++ after try...catch and the loop can end even if the application has errors. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5030: YARN-11345. [Federation] Refactoring Yarn Router's Application Web Page.
slfan1989 commented on code in PR #5030: URL: https://github.com/apache/hadoop/pull/5030#discussion_r1001179567 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java: ## @@ -81,53 +155,57 @@ protected void render(Block html) { // Render the applications StringBuilder appsTableData = new StringBuilder("[\n"); -for (AppInfo app : apps.getApps()) { - try { - -String percent = String.format("%.1f", app.getProgress() * 100.0F); -String trackingURL = -app.getTrackingUrl() == null ? "#" : app.getTrackingUrl(); -// AppID numerical value parsed by parseHadoopID in yarn.dt.plugins.js -appsTableData.append("[\"") -.append("") -.append(app.getAppId()).append("\",\"") -.append(escape(app.getUser())).append("\",\"") -.append(escape(app.getName())).append("\",\"") -.append(escape(app.getApplicationType())).append("\",\"") -.append(escape(app.getQueue())).append("\",\"") -.append(String.valueOf(app.getPriority())).append("\",\"") -.append(app.getStartTime()).append("\",\"") -.append(app.getFinishTime()).append("\",\"") -.append(app.getState()).append("\",\"") -.append(app.getFinalStatus()).append("\",\"") -// Progress bar -.append(" ").append(" ") -// History link -.append("\",\"") -.append("History").append(""); -appsTableData.append("\"],\n"); - - } catch (Exception e) { -LOG.info( -"Cannot add application {}: {}", app.getAppId(), e.getMessage()); + +if (appsInfo != null) { + Collection apps = appsInfo.getApps(); + if (CollectionUtils.isNotEmpty(apps)) { +int numApps = apps.size(); +int i = 0; +for (AppInfo app : apps) { + try { +String percent = String.format("%.1f", app.getProgress() * 100.0F); +String trackingURL = +app.getTrackingUrl() == null ? 
"#" : app.getTrackingUrl(); + +// AppID numerical value parsed by parseHadoopID in yarn.dt.plugins.js +appsTableData.append("[\"") +.append("") +.append(app.getAppId()).append("\",\"") +.append(escape(app.getUser())).append("\",\"") +.append(escape(app.getName())).append("\",\"") +.append(escape(app.getApplicationType())).append("\",\"") +.append(escape(app.getQueue())).append("\",\"") +.append(app.getPriority()).append("\",\"") +.append(app.getStartTime()).append("\",\"") +.append(app.getFinishTime()).append("\",\"") +.append(app.getState()).append("\",\"") +.append(app.getFinalStatus()).append("\",\"") +// Progress bar +.append(" ").append(" ") +// History link +.append("\",\"") +.append("History").append(""); +appsTableData.append("\"]\n"); + +if (i < numApps - 1) { + appsTableData.append(","); +} + } catch (Exception e) { +LOG.info("Cannot add application {}: {}", app.getAppId(), e.getMessage()); + } + i++; Review Comment: i++ is placed after try...catch. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] slfan1989 commented on pull request #5032: YARN-11295. [Federation] Router Support DelegationToken in MemoryStore mode.
slfan1989 commented on PR #5032: URL: https://github.com/apache/hadoop/pull/5032#issuecomment-1286226835 @goiri Thank you very much for helping to review the code! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17705) S3A to add option fs.s3a.endpoint.region to set AWS region
[ https://issues.apache.org/jira/browse/HADOOP-17705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621385#comment-17621385 ] Greg Senia commented on HADOOP-17705: - [~ste...@apache.org] apparently this was the real fix to the issue. I actually tested it with Hadoop 3.2.4 and it fixes the problem without any code changes. I have raised this with our AWS Account Team, as a vendor product we are using that was not utilizing Hadoop-aws code hit the same issue. https://github.com/aws/aws-sdk-java/pull/2537 > S3A to add option fs.s3a.endpoint.region to set AWS region > -- > > Key: HADOOP-17705 > URL: https://issues.apache.org/jira/browse/HADOOP-17705 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Mehakmeet Singh >Assignee: Mehakmeet Singh >Priority: Major > Labels: pull-request-available > Fix For: 3.3.2 > > Time Spent: 3h > Remaining Estimate: 0h > > Currently, the AWS region is constructed via the endpoint URL, by making > an assumption that the 2nd component after delimiter "." is the region in the > endpoint URL, which doesn't work for private links and sets the default to > us-east-1, thus causing authorization issues w.r.t. the private link. > The option fs.s3a.endpoint.region allows this to be explicitly set. > h2. How to set the S3 region on older Hadoop releases > For anyone who needs to set the signing region on older versions of the s3a > client: *you do not need this feature*. Instead, just provide a custom endpoint > to region mapping json file: > # Download the default region mapping file > [awssdk_config_default.json|https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-core/src/main/resources/com/amazonaws/internal/config/awssdk_config_default.json] > # Add a new regular expression to map the endpoint/hostname to the target > region > # Save the file as {{/etc/hadoop/conf/awssdk_config_override.json}} > # Verify basic hadoop fs -ls commands work > # Copy to the rest of the cluster.
> # There should be no need to restart any services -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
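For readers following the workaround steps above, a minimal override file might look like the following. This is a sketch only: the `hostRegexToRegionMappings` / `hostNameRegex` / `regionName` field names follow the schema of the v1 AWS SDK's default `awssdk_config_default.json`, and the hostname pattern and region shown are hypothetical examples, not values taken from the issue.

```json
{
  "hostRegexToRegionMappings": [
    {
      "hostNameRegex": ".*\\.vpce\\.example\\.internal",
      "regionName": "us-east-2"
    }
  ]
}
```

The SDK matches the endpoint hostname against each regex in turn, so the added entry must appear before (or not be shadowed by) any broader default pattern.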
[jira] [Commented] (HADOOP-18233) Possible race condition with TemporaryAWSCredentialsProvider
[ https://issues.apache.org/jira/browse/HADOOP-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621383#comment-17621383 ] Jimmy Wong commented on HADOOP-18233: - Yep, setting "fs.s3a.bucket.probe" to "2" also resolves the issue. > Possible race condition with TemporaryAWSCredentialsProvider > > > Key: HADOOP-18233 > URL: https://issues.apache.org/jira/browse/HADOOP-18233 > Project: Hadoop Common > Issue Type: Bug > Components: auth, fs/s3 >Affects Versions: 3.3.1 > Environment: spark v3.2.0 > hadoop-aws v3.3.1 > java version 1.8.0_265 via zulu-8 >Reporter: Jason Sleight >Priority: Major > Labels: pull-request-available > > I'm in the process of upgrading spark+hadoop versions for my workflows and > observing a weird behavior regression. I'm setting > {code:java} > spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider > spark.hadoop.fs.s3.impl=org.apache.hadoop.fs.s3a.S3AFileSystem > spark.sql.catalogImplementation=hive > spark.hadoop.aws.region=us-west-2 > ...many other things, I think these might be the relevant ones though...{code} > in Spark config and I'm observing some non-fatal warnings/exceptions (see > below for some examples). The warnings/exceptions randomly appear for some > tasks, which causes them to fail, but then when Spark retries the task it > will succeed. The initial tasks don't always fail either, just sometimes. > I also found that if I switch to a SimpleAWSCredentials and use static keys, > then I don't see any issues. > My old setup was spark v3.0.2 with hadoop-aws v3.2.1 which also does not have > these warnings/exceptions. > From reading some other tickets I thought perhaps adding > {code:java} > spark.sql.hive.metastore.sharedPrefixes=com.amazonaws {code} > would help, but it did not. 
> Appreciate any suggestions for how to proceed or debug further :) > > Example stack traces: > First one for an s3 read > {code:java} > WARN TaskSetManager: Lost task 27.0 in stage 4.0 (TID 29) ( executor > 13): java.nio.file.AccessDeniedException: > s3a://bucket/path/to/part.snappy.parquet: > org.apache.hadoop.fs.s3a.CredentialInitializationException: Provider > TemporaryAWSCredentialsProvider has no credentials > at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:206) > at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:170) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3289) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3185) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:3053) > at > org.apache.parquet.hadoop.util.HadoopInputFile.fromPath(HadoopInputFile.java:39) > at > org.apache.spark.sql.execution.datasources.parquet.ParquetFooterReader.readFooter(ParquetFooterReader.java:39) > at > org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.footerFileMetaData$lzycompute$1(ParquetFileFormat.scala:268) > at > org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.footerFileMetaData$1(ParquetFileFormat.scala:267) > at > org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.$anonfun$buildReaderWithPartitionValues$2(ParquetFileFormat.scala:270) > at > org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116) > at > org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:164) > at > org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93) > at > org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:522) > at > 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage7.columnartorow_nextBatch_0$(Unknown > Source) > at > org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage7.processNext(Unknown > Source) > at > org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) > at > org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759) > at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) > at > org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140) > at > org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59) > at > org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99) > at > org.apache.spark.scheduler.ShuffleMapTask.runTask(
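As a concrete form of the `fs.s3a.bucket.probe` workaround mentioned in the comment above, the setting could be applied in a Hadoop configuration file (or via the equivalent `spark.hadoop.`-prefixed Spark option). A sketch, assuming a standard `core-site.xml`:

```xml
<!-- Force the bucket existence probe at filesystem init (probe level 2);
     per the comment above, this works around the credential race by
     exercising the credential provider earlier. -->
<property>
  <name>fs.s3a.bucket.probe</name>
  <value>2</value>
</property>
```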
[GitHub] [hadoop] hadoop-yetus commented on pull request #4980: MAPREDUCE-7411: use secure XML parsers
hadoop-yetus commented on PR #4980: URL: https://github.com/apache/hadoop/pull/4980#issuecomment-1286203551 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 51s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 12 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 41s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 25m 43s | | trunk passed | | +1 :green_heart: | compile | 2m 50s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 2m 24s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 16s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 31s | | trunk passed | | +1 :green_heart: | javadoc | 2m 58s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 2m 41s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 4m 54s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 40s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 15s | | the patch passed | | +1 :green_heart: | compile | 2m 34s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 2m 34s | | the patch passed | | +1 :green_heart: | compile | 2m 11s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 2m 11s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 57s | | hadoop-mapreduce-project/hadoop-mapreduce-client: The patch generated 0 new + 31 unchanged - 2 fixed = 31 total (was 33) | | +1 :green_heart: | mvnsite | 2m 33s | | the patch passed | | +1 :green_heart: | javadoc | 1m 56s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 39s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 4m 32s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 12s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 7m 9s | | hadoop-mapreduce-client-core in the patch passed. | | +1 :green_heart: | unit | 8m 47s | | hadoop-mapreduce-client-app in the patch passed. | | +1 :green_heart: | unit | 5m 7s | | hadoop-mapreduce-client-hs in the patch passed. | | +1 :green_heart: | unit | 134m 57s | | hadoop-mapreduce-client-jobclient in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | | The patch does not generate ASF License warnings. 
| | | | 282m 40s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4980/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4980 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 2892e6611423 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 975f92cfdf1920c1dd709d3e3ce4471e94d86f40 | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4980/4/testReport/ | | Max. process+thread count | 1590 (vs. ulimit of 5500) | | modules | C: hadoop-mapreduce-project/hadoop-mapreduce-cli
[GitHub] [hadoop] hadoop-yetus commented on pull request #4929: YARN-11229. [Federation] Add checkUserAccessToQueue REST APIs for Router.
hadoop-yetus commented on PR #4929: URL: https://github.com/apache/hadoop/pull/4929#issuecomment-1286100860 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 16m 6s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 26m 3s | | trunk passed | | +1 :green_heart: | compile | 4m 0s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 3m 28s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 30s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 2s | | trunk passed | | +1 :green_heart: | javadoc | 1m 44s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 37s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 20s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 47s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 21m 15s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 29s | | the patch passed | | +1 :green_heart: | compile | 3m 54s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | -1 :x: | javac | 3m 54s | [/results-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4929/13/artifact/out/results-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 1 new + 437 unchanged - 1 fixed = 438 total (was 438) | | +1 :green_heart: | compile | 3m 19s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 3m 19s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 8s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4929/13/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 1 new + 0 unchanged - 3 fixed = 1 total (was 3) | | +1 :green_heart: | mvnsite | 1m 30s | | the patch passed | | +1 :green_heart: | javadoc | 1m 24s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 11s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 2s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 32s | | patch has no errors when building and testing our client artifacts. 
| _ Other Tests _ | | +1 :green_heart: | unit | 99m 14s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | unit | 5m 13s | | hadoop-yarn-server-router in the patch passed. | | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. | | | | 226m 57s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4929/13/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4929 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux a20c7213bb79 4.15.0-191
[jira] [Assigned] (HADOOP-18399) SingleFilePerBlockCache to use LocalDirAllocator for file allocation
[ https://issues.apache.org/jira/browse/HADOOP-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viraj Jasani reassigned HADOOP-18399: - Assignee: Viraj Jasani > SingleFilePerBlockCache to use LocalDirAllocator for file allocation > > > Key: HADOOP-18399 > URL: https://issues.apache.org/jira/browse/HADOOP-18399 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Viraj Jasani >Priority: Major > > The prefetching stream's SingleFilePerBlockCache uses Files.createTempFile() to > allocate a temp file. > It should be using LocalDirAllocator to allocate space from a list of dirs, > taking a config key to use. For s3a we will use the Constants.BUFFER_DIR > option, which on YARN deployments is fixed under the env.LOCAL_DIR path, so it is > automatically cleaned up on container exit. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] goiri merged pull request #5032: YARN-11295. [Federation] Router Support DelegationToken in MemoryStore mode.
goiri merged PR #5032: URL: https://github.com/apache/hadoop/pull/5032 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] goiri commented on a diff in pull request #5030: YARN-11345. [Federation] Refactoring Yarn Router's Application Web Page.
goiri commented on code in PR #5030: URL: https://github.com/apache/hadoop/pull/5030#discussion_r1001069884 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java: ## @@ -81,53 +155,57 @@ protected void render(Block html) { // Render the applications StringBuilder appsTableData = new StringBuilder("[\n"); -for (AppInfo app : apps.getApps()) { - try { - -String percent = String.format("%.1f", app.getProgress() * 100.0F); -String trackingURL = -app.getTrackingUrl() == null ? "#" : app.getTrackingUrl(); -// AppID numerical value parsed by parseHadoopID in yarn.dt.plugins.js -appsTableData.append("[\"") -.append("") -.append(app.getAppId()).append("\",\"") -.append(escape(app.getUser())).append("\",\"") -.append(escape(app.getName())).append("\",\"") -.append(escape(app.getApplicationType())).append("\",\"") -.append(escape(app.getQueue())).append("\",\"") -.append(String.valueOf(app.getPriority())).append("\",\"") -.append(app.getStartTime()).append("\",\"") -.append(app.getFinishTime()).append("\",\"") -.append(app.getState()).append("\",\"") -.append(app.getFinalStatus()).append("\",\"") -// Progress bar -.append(" ").append(" ") -// History link -.append("\",\"") -.append("History").append(""); -appsTableData.append("\"],\n"); - - } catch (Exception e) { -LOG.info( -"Cannot add application {}: {}", app.getAppId(), e.getMessage()); + +if (appsInfo != null) { + Collection apps = appsInfo.getApps(); + if (CollectionUtils.isNotEmpty(apps)) { +int numApps = apps.size(); +int i = 0; Review Comment: i++ missing? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #4982: YARN-11332. [Federation] Improve FederationClientInterceptor#ThreadPool thread pool configuration.
hadoop-yetus commented on PR #4982: URL: https://github.com/apache/hadoop/pull/4982#issuecomment-1286083147 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 46s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 44s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 27m 21s | | trunk passed | | +1 :green_heart: | compile | 10m 13s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 9m 22s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 2m 13s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 30s | | trunk passed | | +1 :green_heart: | javadoc | 3m 25s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 3m 16s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 6m 14s | | trunk passed | | +1 :green_heart: | shadedclient | 25m 5s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 31s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 55s | | the patch passed | | +1 :green_heart: | compile | 10m 23s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 10m 23s | | hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 0 new + 730 unchanged - 2 fixed = 730 total (was 732) | | +1 :green_heart: | compile | 10m 31s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 10m 31s | | hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 generated 0 new + 640 unchanged - 2 fixed = 640 total (was 642) | | +1 :green_heart: | blanks | 0m 1s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 2m 2s | | the patch passed | | +1 :green_heart: | mvnsite | 3m 18s | | the patch passed | | +1 :green_heart: | javadoc | 3m 17s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 3m 5s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 6m 30s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 40s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 30s | | hadoop-yarn-api in the patch passed. | | +1 :green_heart: | unit | 5m 30s | | hadoop-yarn-common in the patch passed. 
| | -1 :x: | unit | 5m 35s | [/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4982/10/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt) | hadoop-yarn-server-router in the patch passed. | | +1 :green_heart: | asflicense | 1m 12s | | The patch does not generate ASF License warnings. | | | | 190m 33s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.yarn.server.router.clientrm.TestFederationClientInterceptorRetry | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4982/10/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4982 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint | | uname | Linux 696bb3cf817c 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Buil
[GitHub] [hadoop] hadoop-yetus commented on pull request #5030: YARN-11345. [Federation] Refactoring Yarn Router's Application Web Page.
hadoop-yetus commented on PR #5030: URL: https://github.com/apache/hadoop/pull/5030#issuecomment-1286042680 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 52s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 28s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 28m 33s | | trunk passed | | +1 :green_heart: | compile | 10m 23s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 8m 49s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 46s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 32s | | trunk passed | | +1 :green_heart: | javadoc | 1m 35s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 23s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 2m 58s | | trunk passed | | +1 :green_heart: | shadedclient | 23m 25s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 24s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 0m 59s | | the patch passed | | +1 :green_heart: | compile | 9m 47s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 9m 47s | | the patch passed | | +1 :green_heart: | compile | 8m 45s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 8m 45s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 36s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 24s | | the patch passed | | +1 :green_heart: | javadoc | 1m 18s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 18s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 2m 59s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 35s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 4m 45s | | hadoop-yarn-common in the patch passed. | | +1 :green_heart: | unit | 5m 21s | | hadoop-yarn-server-router in the patch passed. | | +1 :green_heart: | asflicense | 0m 48s | | The patch does not generate ASF License warnings. 
| | | | 162m 43s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5030/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5030 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 5262eacd5120 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / ca3270139a999c571b191194487a2fc118a19152 | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5030/4/testReport/ | | Max. process+thread count | 753 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: hadoop-yarn-project/hadoop-yarn | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5030/4/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetu
[jira] [Commented] (HADOOP-18233) Possible race condition with TemporaryAWSCredentialsProvider
[ https://issues.apache.org/jira/browse/HADOOP-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621303#comment-17621303 ] Steve Loughran commented on HADOOP-18233: - maybe if the bucket probe is 0. we should still ask the credential list for a set of credentials during init. this would be slow if some remote call is made (iAM instance, sts creds, some of the delegation token providers) but it is at least single threaded > Possible race condition with TemporaryAWSCredentialsProvider > > > Key: HADOOP-18233 > URL: https://issues.apache.org/jira/browse/HADOOP-18233 > Project: Hadoop Common > Issue Type: Bug > Components: auth, fs/s3 >Affects Versions: 3.3.1 > Environment: spark v3.2.0 > hadoop-aws v3.3.1 > java version 1.8.0_265 via zulu-8 >Reporter: Jason Sleight >Priority: Major > Labels: pull-request-available > > I'm in the process of upgrading spark+hadoop versions for my workflows and > observing a weird behavior regression. I'm setting > {code:java} > spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider > spark.hadoop.fs.s3.impl=org.apache.hadoop.fs.s3a.S3AFileSystem > spark.sql.catalogImplementation=hive > spark.hadoop.aws.region=us-west-2 > ...many other things, I think these might be the relevant ones though...{code} > in Spark config and I'm observing some non-fatal warnings/exceptions (see > below for some examples). The warnings/exceptions randomly appear for some > tasks, which causes them to fail, but then when Spark retries the task it > will succeed. The initial tasks don't always fail either, just sometimes. > I also found that if I switch to a SimpleAWSCredentials and use static keys, > then I don't see any issues. > My old setup was spark v3.0.2 with hadoop-aws v3.2.1 which also does not have > these warnings/exceptions. 
> From reading some other tickets I thought perhaps adding > {code:java} > spark.sql.hive.metastore.sharedPrefixes=com.amazonaws {code} > would help, but it did not. > Appreciate any suggestions for how to proceed or debug further :) > > Example stack traces: > First one for an s3 read > {code:java} > WARN TaskSetManager: Lost task 27.0 in stage 4.0 (TID 29) ( executor > 13): java.nio.file.AccessDeniedException: > s3a://bucket/path/to/part.snappy.parquet: > org.apache.hadoop.fs.s3a.CredentialInitializationException: Provider > TemporaryAWSCredentialsProvider has no credentials > at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:206) > at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:170) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3289) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3185) > at > org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:3053) > at > org.apache.parquet.hadoop.util.HadoopInputFile.fromPath(HadoopInputFile.java:39) > at > org.apache.spark.sql.execution.datasources.parquet.ParquetFooterReader.readFooter(ParquetFooterReader.java:39) > at > org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.footerFileMetaData$lzycompute$1(ParquetFileFormat.scala:268) > at > org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.footerFileMetaData$1(ParquetFileFormat.scala:267) > at > org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat.$anonfun$buildReaderWithPartitionValues$2(ParquetFileFormat.scala:270) > at > org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:116) > at > org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:164) > at > org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93) > at > 
org.apache.spark.sql.execution.FileSourceScanExec$$anon$1.hasNext(DataSourceScanExec.scala:522) > at > org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage7.columnartorow_nextBatch_0$(Unknown > Source) > at > org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage7.processNext(Unknown > Source) > at > org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) > at > org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:759) > at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460) > at > org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:140) > at > org.apache.spark.shuffle.Shuffle
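A minimal, self-contained sketch of the lazy-versus-eager initialization pattern discussed in the comment above. The class and method names here are hypothetical stand-ins, not the actual S3A internals: a lazily resolved chain returns no credentials until `resolve()` has run, while resolving eagerly during (blocking) filesystem init closes that window.

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical, simplified model of the credential lookup -- NOT the real
// S3A classes. It only illustrates why a reader racing ahead of lazy
// resolution can observe "provider has no credentials".
class CredentialChain {
    private final AtomicReference<String> creds = new AtomicReference<>();

    // Today: resolution happens lazily, on first use.
    void resolve() {
        creds.set("session-token"); // stand-in for an STS/IAM lookup
    }

    // A reader that runs before resolve() sees no credentials (null).
    String get() {
        return creds.get();
    }

    // Proposed: resolve the chain once, inside blocking filesystem init,
    // before any worker thread can touch the filesystem.
    static CredentialChain eagerInit() {
        CredentialChain chain = new CredentialChain();
        chain.resolve();
        return chain;
    }
}
```

With the eager variant, every later `get()` call is guaranteed to see resolved credentials, at the cost of one potentially slow remote call (IAM instance metadata, STS, delegation tokens) during init.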
[jira] [Commented] (HADOOP-18233) Possible race condition with TemporaryAWSCredentialsProvider
[ https://issues.apache.org/jira/browse/HADOOP-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621299#comment-17621299 ] Steve Loughran commented on HADOOP-18233: - note that we did change the probe to 0 in HADOOP-17454 for faster launch... > Possible race condition with TemporaryAWSCredentialsProvider > > > Key: HADOOP-18233
[jira] [Commented] (HADOOP-17612) Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0
[ https://issues.apache.org/jira/browse/HADOOP-17612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621297#comment-17621297 ] ASF GitHub Bot commented on HADOOP-17612: - hadoop-yetus commented on PR #5047: URL: https://github.com/apache/hadoop/pull/5047#issuecomment-1285938999 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 11m 37s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ branch-3.3 Compile Tests _ | | +0 :ok: | mvndep | 17m 37s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 27m 4s | | branch-3.3 passed | | +1 :green_heart: | compile | 19m 3s | | branch-3.3 passed | | +1 :green_heart: | checkstyle | 3m 11s | | branch-3.3 passed | | +1 :green_heart: | mvnsite | 21m 34s | | branch-3.3 passed | | +1 :green_heart: | javadoc | 7m 8s | | branch-3.3 passed | | +0 :ok: | spotbugs | 0m 23s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 59m 31s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 39s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 39m 50s | | the patch passed | | +1 :green_heart: | compile | 18m 33s | | the patch passed | | -1 :x: | javac | 18m 33s | [/results-compile-javac-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5047/1/artifact/out/results-compile-javac-root.txt) | root generated 10 new + 1867 unchanged - 0 fixed = 1877 total (was 1867) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 6s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5047/1/artifact/out/results-checkstyle-root.txt) | root: The patch generated 1 new + 356 unchanged - 2 fixed = 357 total (was 358) | | +1 :green_heart: | mvnsite | 21m 8s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | javadoc | 7m 0s | | the patch passed | | +0 :ok: | spotbugs | 0m 22s | | hadoop-project has no data from spotbugs | | +1 :green_heart: | shadedclient | 60m 18s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 720m 4s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5047/1/artifact/out/patch-unit-root.txt) | root in the patch passed. | | +1 :green_heart: | asflicense | 1m 54s | | The patch does not generate ASF License warnings. 
| | | | 1073m 18s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestRollingUpgrade | | | hadoop.hdfs.tools.TestDFSAdmin | | | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade | | | hadoop.hdfs.server.namenode.ha.TestHAMetrics | | | hadoop.hdfs.TestFileCreation | | | hadoop.yarn.client.api.impl.TestAMRMClient | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5047/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5047 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle shellcheck shelldocs | | uname | Linux e92120545992 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-3.3 / 679535c33e2229f16acf0917b6b99c3e73f23f69 | | Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~18.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5047/1/testReport/ | | Max. process+thread count | 3137 (vs. ulimit of 5500) | | modules | C: hadoop-proj
[GitHub] [hadoop] hadoop-yetus commented on pull request #5047: HADOOP-17612. Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0 (#3241)
[jira] [Commented] (HADOOP-18233) Possible race condition with TemporaryAWSCredentialsProvider
[ https://issues.apache.org/jira/browse/HADOOP-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621295#comment-17621295 ] Steve Loughran commented on HADOOP-18233: - ok, so the race condition is that one thread can be doing the init, when another thread tries to make an fs call and fails with some unauth problem? try setting "fs.s3a.bucket.probe" to 2 and see if that makes it go away. because that forces an s3 request during the (blocking) s3a filesystem init. > Possible race condition with TemporaryAWSCredentialsProvider > > > Key: HADOOP-18233
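For reference, the probe setting suggested in that comment can be passed through Spark's Hadoop configuration prefix alongside the reporter's existing properties (value per the comment; the exact behavior of each probe level is version-dependent):

```properties
# Force an S3 request during (blocking) S3AFileSystem initialization,
# closing the window in which a task thread could observe an
# uninitialized credential provider.
spark.hadoop.fs.s3a.bucket.probe  2
```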
[jira] [Reopened] (HADOOP-18233) Possible race condition with TemporaryAWSCredentialsProvider
[ https://issues.apache.org/jira/browse/HADOOP-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reopened HADOOP-18233: - > Possible race condition with TemporaryAWSCredentialsProvider > > > Key: HADOOP-18233
[jira] [Resolved] (HADOOP-18500) Upgrade maven-shade-plugin to 3.3.0
[ https://issues.apache.org/jira/browse/HADOOP-18500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-18500. - Fix Version/s: 3.4.0 Resolution: Fixed > Upgrade maven-shade-plugin to 3.3.0 > --- > > Key: HADOOP-18500 > URL: https://issues.apache.org/jira/browse/HADOOP-18500 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Willi Raschkowski >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > Attachments: ImmutableMap_hadoop-client-runtime.txt, > ImmutableMap_hadoop-shaded-guava.txt > > > Maven-shade-plugin rewrites classes when moving them into {{hadoop-client}} > JARs. That's true even when it doesn't actually need to modify the byte code > of the classes, say for shading. > We use a tool that checks for classpath duplicates that don't have equal byte > code. This tool flags classes brought in via Hadoop. The classes it flagged > came on one side from > a JAR containing relocated classes ({{hadoop-client-api}} or {{-runtime}}) > and the other from the relocated JAR ({{hadoop-common}} or > {{hadoop-shaded-guava}}). We checked and the byte code for the same class is > indeed different between the relocated and non-relocated JARs. > This is because maven-shade-plugin, before 3.3.0, was rewriting class files > even when the relocation was a "no-op". See MSHADE-391 and > [apache/maven-shade-plugin#95|https://github.com/apache/maven-shade-plugin/pull/95]. > {quote}Maven Shade internally uses [ASM's > {{ClassRemapper}}|https://asm.ow2.io/javadoc/org/objectweb/asm/commons/ClassRemapper.html] > and defines a custom {{Remapper}} subclass, which takes care of relocation, > partially doing the work by itself and partially delegating to the ASM parent > class. An ASM {{ClassReader}} reads each class file from the original JAR and > *unconditionally* writes it into a {{{}ClassWriter{}}}, plugging in the > transformer. 
> This transformation, even if not a single relocation (package name mapping) > takes place, often leads to binary differences between original class and > transformed class, because constant pool or stack map frames have been > adjusted, not changing the functionality of the class, but making it look > like something changed when comparing class files before and after the > relocation process. > {quote} > Upgrading to maven-shade-plugin 3.3.0 fixes the unnecessary rewrite of > classes.
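The fix described above is a plain version bump. As a hedged illustration (the surrounding `pluginManagement` section and any existing shade configuration are assumed), the pinned plugin declaration would look like:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <!-- 3.3.0 picks up the MSHADE-391 fix: classes whose relocation is a
       no-op are copied through unchanged instead of being rewritten -->
  <version>3.3.0</version>
</plugin>
```

With this in place, byte-for-byte comparison of unrelocated classes between the shaded and original JARs should no longer report spurious differences.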
[jira] [Commented] (HADOOP-18500) Upgrade maven-shade-plugin to 3.3.0
[ https://issues.apache.org/jira/browse/HADOOP-18500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621286#comment-17621286 ] ASF GitHub Bot commented on HADOOP-18500: - steveloughran commented on PR #5045: URL: https://github.com/apache/hadoop/pull/5045#issuecomment-1285928398 +1; put it in trunk to see if causes problems for any other patch going through yetus; if all is good we can put into 3.3 too. > Upgrade maven-shade-plugin to 3.3.0 > --- > > Key: HADOOP-18500 > URL: https://issues.apache.org/jira/browse/HADOOP-18500 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Willi Raschkowski >Priority: Minor > Labels: pull-request-available > Attachments: ImmutableMap_hadoop-client-runtime.txt, > ImmutableMap_hadoop-shaded-guava.txt > > > Maven-shade-plugin rewrites classes when moving them into {{hadoop-client}} > JARs. That's true even when it doesn't actually need to modify the byte code > of the classes, say for shading. > We use a tool that checks for classpath duplicates that don't have equal byte > code. This tool flags classes brought in via Hadoop. The classes it flagged > came on one side from > a JAR containing relocated classes ({{hadoop-client-api}} or {{-runtime}}) > and the other from the relocated JAR ({{hadoop-common}} or > {{hadoop-shaded-guava}}). We checked and the byte code for the same class is > indeed different between the relocated and non-relocated JARs. > This is because maven-shade-plugin, before 3.3.0, was rewriting class files > even when the relocation was a "no-op". See MSHADE-391 and > [apache/maven-shade-plugin#95|https://github.com/apache/maven-shade-plugin/pull/95]. 
> {quote}Maven Shade internally uses [ASM's > {{ClassRemapper}}|https://asm.ow2.io/javadoc/org/objectweb/asm/commons/ClassRemapper.html] > and defines a custom {{Remapper}} subclass, which takes care of relocation, > partially doing the work by itself and partially delegating to the ASM parent > class. An ASM {{ClassReader}} reads each class file from the original JAR and > *unconditionally* writes it into a {{{}ClassWriter{}}}, plugging in the > transformer. > This transformation, even if not a single relocation (package name mapping) > takes place, often leads to binary differences between original class and > transformed class, because constant pool or stack map frames have been > adjusted, not changing the functionality of the class, but making it look > like something changed when comparing class files before and after the > relocation process. > {quote} > Upgrading to maven-shade-plugin 3.3.0 fixes the unnecessary rewrite of > classes.
[GitHub] [hadoop] steveloughran commented on pull request #5045: HADOOP-18500: Upgrade maven-shade-plugin to 3.3.0
steveloughran commented on PR #5045: URL: https://github.com/apache/hadoop/pull/5045#issuecomment-1285928398 +1; put it in trunk to see if it causes problems for any other patch going through yetus; if all is good we can put it into 3.3 too.
[jira] [Commented] (HADOOP-18500) Upgrade maven-shade-plugin to 3.3.0
[ https://issues.apache.org/jira/browse/HADOOP-18500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621285#comment-17621285 ] ASF GitHub Bot commented on HADOOP-18500: - steveloughran merged PR #5045: URL: https://github.com/apache/hadoop/pull/5045 > Upgrade maven-shade-plugin to 3.3.0 > --- > > Key: HADOOP-18500 > URL: https://issues.apache.org/jira/browse/HADOOP-18500 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Willi Raschkowski >Priority: Minor > Labels: pull-request-available > Attachments: ImmutableMap_hadoop-client-runtime.txt, > ImmutableMap_hadoop-shaded-guava.txt > > > Maven-shade-plugin rewrites classes when moving them into {{hadoop-client}} > JARs. That's true even when it doesn't actually need to modify the byte code > of the classes, say for shading. > We use a tool that checks for classpath duplicates that don't have equal byte > code. This tool flags classes brought in via Hadoop. The classes it flagged > came on one side from > a JAR containing relocated classes ({{hadoop-client-api}} or {{-runtime}}) > and the other from the relocated JAR ({{hadoop-common}} or > {{hadoop-shaded-guava}}). We checked and the byte code for the same class is > indeed different between the relocated and non-relocated JARs. > This is because maven-shade-plugin, before 3.3.0, was rewriting class files > even when the relocation was a "no-op". See MSHADE-391 and > [apache/maven-shade-plugin#95|https://github.com/apache/maven-shade-plugin/pull/95]. > {quote}Maven Shade internally uses [ASM's > {{ClassRemapper}}|https://asm.ow2.io/javadoc/org/objectweb/asm/commons/ClassRemapper.html] > and defines a custom {{Remapper}} subclass, which takes care of relocation, > partially doing the work by itself and partially delegating to the ASM parent > class. 
An ASM {{ClassReader}} reads each class file from the original JAR and > *unconditionally* writes it into a {{{}ClassWriter{}}}, plugging in the > transformer. > This transformation, even if not a single relocation (package name mapping) > takes place, often leads to binary differences between original class and > transformed class, because constant pool or stack map frames have been > adjusted, not changing the functionality of the class, but making it look > like something changed when comparing class files before and after the > relocation process. > {quote} > Upgrading to maven-shade-plugin 3.3.0 fixes the unnecessary rewrite of > classes.
[GitHub] [hadoop] steveloughran merged pull request #5045: HADOOP-18500: Upgrade maven-shade-plugin to 3.3.0
steveloughran merged PR #5045: URL: https://github.com/apache/hadoop/pull/5045
[jira] [Updated] (HADOOP-18498) [ABFS]: Error introduced when SAS Token containing '?' prefix is passed
[ https://issues.apache.org/jira/browse/HADOOP-18498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-18498: Component/s: fs/azure > [ABFS]: Error introduced when SAS Token containing '?' prefix is passed > --- > > Key: HADOOP-18498 > URL: https://issues.apache.org/jira/browse/HADOOP-18498 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Reporter: Sree Bhattacharyya >Assignee: Sree Bhattacharyya >Priority: Minor > > Error Description: > At present, SAS Tokens generated from the Azure Portal may or may not contain > a ? as a prefix. SAS Tokens that contain the ? prefix will lead to an error > in the driver due to a clash of query parameters. This leads to customers > having to manually remove the ? prefix before passing the SAS Tokens. > Mitigation: > After receiving the SAS Token from the provider, check if any prefix ? is > present or not. If present, remove it and pass the SAS Token.
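The mitigation described above amounts to a one-line normalization. A minimal standalone sketch (the class and method names are illustrative, not the actual ABFS driver code):

```java
// Hedged sketch of the SAS-token normalization described in HADOOP-18498.
// "SasTokenNormalizer" and "stripSasPrefix" are illustrative names only.
public class SasTokenNormalizer {
    static String stripSasPrefix(String sasToken) {
        if (sasToken != null && sasToken.startsWith("?")) {
            // The Azure Portal may prepend a '?' to a generated SAS token;
            // drop it so it cannot clash with query-parameter assembly.
            return sasToken.substring(1);
        }
        return sasToken;
    }

    public static void main(String[] args) {
        System.out.println(stripSasPrefix("?sv=2021-06-08&sig=abc")); // sv=2021-06-08&sig=abc
        System.out.println(stripSasPrefix("sv=2021-06-08&sig=abc"));  // unchanged
    }
}
```

Applying this once, right after the token is obtained from the provider, keeps the rest of the request-building code unchanged.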
[GitHub] [hadoop-thirdparty] pjfanning commented on a diff in pull request #21: [HADOOP-18342] shaded avro jar
pjfanning commented on code in PR #21: URL: https://github.com/apache/hadoop-thirdparty/pull/21#discussion_r1000937480 ## hadoop-shaded-avro/pom.xml: ## @@ -0,0 +1,100 @@ + + +http://maven.apache.org/POM/4.0.0"; + xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"; + xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd";> + +hadoop-thirdparty +org.apache.hadoop.thirdparty +1.2.0-SNAPSHOT +.. + +4.0.0 +hadoop-shaded-avro +Apache Hadoop shaded Avro +jar + + + +org.apache.avro +avro +${avro.version} + + + + + + +${project.basedir}/.. +META-INF + +licenses-binary/* +NOTICE.txt +NOTICE-binary + + +META-INF/maven/org.apache.avro/* + + + +${project.basedir}/src/main/resources + + + + +org.apache.maven.plugins +maven-shade-plugin + + true + true + + + +shade-avro +package + +shade + + + + +org.apache.avro:avro + + + + +org/apache/avro + ${shaded.prefix}/avro + + + + +META-INF/LICENSE.txt +${basedir}/../LICENSE-binary + + + + + + + + + Review Comment: done -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop-thirdparty] pjfanning commented on pull request #21: [HADOOP-18342] shaded avro jar
pjfanning commented on PR #21: URL: https://github.com/apache/hadoop-thirdparty/pull/21#issuecomment-1285924920 > do see #19 and discussion about whether artifacts need names. it may be good to use a version number here... I can rename the module and jar to hadoop-shaded-avro_1_11 if that makes sense.
[GitHub] [hadoop] minni31 commented on pull request #3760: YARN-11037. Add configurable logic to split resource request to least…
minni31 commented on PR #3760: URL: https://github.com/apache/hadoop/pull/3760#issuecomment-1285924514 @bibinchundatt Can you please review this PR
[GitHub] [hadoop] steveloughran commented on a diff in pull request #4835: HDFS-16740. Mini cluster test flakiness
steveloughran commented on code in PR #4835: URL: https://github.com/apache/hadoop/pull/4835#discussion_r1000928173 ## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgrade.java: ## @@ -83,37 +83,18 @@ public static void runCmd(DFSAdmin dfsadmin, boolean success, } @Rule - public TemporaryFolder folder = new TemporaryFolder(); - - /** - * Create a default HDFS configuration which has test-specific data directories. This is - * intended to protect against interactions between test runs that might corrupt results. Each - * test run's data is automatically cleaned-up by JUnit. - * - * @return a default configuration with test-specific data directories - */ - public Configuration getHdfsConfiguration() throws IOException { -Configuration conf = new HdfsConfiguration(); Review Comment: retain this, but just return the new config. allows for changes later ## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/MiniJournalCluster.java: ## @@ -64,7 +65,17 @@ public static class Builder { public Builder(Configuration conf) { this.conf = conf; } - + +public Builder(Configuration conf, TemporaryFolder baseDir) { Review Comment: i don't want to add junit dependencies here. better to take a File ref and pass it in when used ## hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/MiniDFSCluster.java: ## @@ -240,6 +241,10 @@ public Builder(Configuration conf) { } } +public Builder(Configuration conf, TemporaryFolder baseDir) { Review Comment: i don't want to add junit dependencies here; we don't know where else it is used. and test dependencies don't get exported by maven. the code will need to be given baseDir.getRoot() -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
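The recurring point in the review above (accept a plain `java.io.File` for the base directory instead of JUnit's `TemporaryFolder`, so the production-side builder carries no test-framework dependency) can be sketched as follows; all names here are illustrative, not the real MiniDFSCluster API:

```java
// Hedged sketch: a builder that takes java.io.File rather than
// org.junit.rules.TemporaryFolder. Test code would call
// new MiniClusterBuilderSketch(conf, folder.getRoot()) per the review note.
import java.io.File;

public class MiniClusterBuilderSketch {
    private final File baseDir;

    public MiniClusterBuilderSketch(File baseDir) {
        // No JUnit types here: the test passes in folder.getRoot(),
        // production callers can pass any directory.
        this.baseDir = baseDir;
    }

    // Derive a test-specific data directory under the supplied base.
    public String dataDir() {
        return new File(baseDir, "data").getPath();
    }

    public static void main(String[] args) {
        MiniClusterBuilderSketch b = new MiniClusterBuilderSketch(new File("/tmp/test-base"));
        System.out.println(b.dataDir());
    }
}
```

Keeping JUnit out of the builder matters because, as the review notes, test dependencies are not exported by Maven, so downstream users of the class would otherwise fail to compile.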
[GitHub] [hadoop-thirdparty] steveloughran commented on pull request #21: [HADOOP-18342] shaded avro jar
steveloughran commented on PR #21: URL: https://github.com/apache/hadoop-thirdparty/pull/21#issuecomment-1285907443 do see #19 and discussion about whether artifacts need names. it may be good to use a version number here...
[GitHub] [hadoop-thirdparty] steveloughran commented on pull request #19: HADOOP-18197. Upgrade protobuf to 3.21.7
steveloughran commented on PR #19: URL: https://github.com/apache/hadoop-thirdparty/pull/19#issuecomment-1285906735 adding @pjfanning to get involved; the whole question in #21 about whether to use a version number applies here.
* a version in the jar name is a PITA when upgrading/re-releasing, as every pom which imports the artifact needs to be patched.
* but it does make it easier to see what version of parquet/avro is shipped.

i think for hadoop we will have to use a maven property to define the version of the protobuf lib to use... not sure what this does to ide imports, though spark seems to handle this
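The maven-property approach mentioned at the end of that comment can be sketched as below; the property name and artifactId are illustrative, not the coordinates the Hadoop projects actually settled on:

```xml
<properties>
  <!-- illustrative property: bump this one value to upgrade the shaded lib -->
  <hadoop-shaded-protobuf.version>1.2.0</hadoop-shaded-protobuf.version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.apache.hadoop.thirdparty</groupId>
    <!-- version-free artifactId: importing poms never need patching -->
    <artifactId>hadoop-shaded-protobuf</artifactId>
    <version>${hadoop-shaded-protobuf.version}</version>
  </dependency>
</dependencies>
```

This is the trade-off the comment describes: a property keeps consuming poms stable across upgrades, at the cost of the shipped library version no longer being visible in the jar name itself.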
[GitHub] [hadoop] hadoop-yetus commented on pull request #5050: HDFS-16809. EC striped block is not sufficient when doing in maintenance.
hadoop-yetus commented on PR #5050: URL: https://github.com/apache/hadoop/pull/5050#issuecomment-1285904505

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 1m 20s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 43m 42s | | trunk passed |
| +1 :green_heart: | compile | 1m 43s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 1m 28s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 17s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 40s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 16s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 40s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 48s | | trunk passed |
| +1 :green_heart: | shadedclient | 26m 21s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 27s | | the patch passed |
| +1 :green_heart: | compile | 1m 31s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 1m 31s | | the patch passed |
| +1 :green_heart: | compile | 1m 23s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 1m 23s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 58s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5050/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 81 unchanged - 1 fixed = 82 total (was 82) |
| +1 :green_heart: | mvnsite | 1m 28s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 56s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 35s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 40s | | the patch passed |
| +1 :green_heart: | shadedclient | 26m 24s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 388m 37s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5050/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 58s | | The patch does not generate ASF License warnings. |
| | | 511m 5s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestObserverNode |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5050/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5050 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 4937a3335e81 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 4745364a75f5b38d6b4a5446f0fcf7c755ccf85e |
| Default Java | Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5050/1/testReport/ |
| Max. process+t
[GitHub] [hadoop-thirdparty] steveloughran commented on a diff in pull request #21: [HADOOP-18342] shaded avro jar
steveloughran commented on code in PR #21: URL: https://github.com/apache/hadoop-thirdparty/pull/21#discussion_r1000914786 ## hadoop-shaded-avro/pom.xml: ## @@ -0,0 +1,100 @@ + + +http://maven.apache.org/POM/4.0.0"; + xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"; + xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd";> + +hadoop-thirdparty +org.apache.hadoop.thirdparty +1.2.0-SNAPSHOT +.. + +4.0.0 +hadoop-shaded-avro +Apache Hadoop shaded Avro +jar + + + +org.apache.avro +avro +${avro.version} + + + + + + +${project.basedir}/.. +META-INF + +licenses-binary/* +NOTICE.txt +NOTICE-binary + + +META-INF/maven/org.apache.avro/* + + + +${project.basedir}/src/main/resources + + + + +org.apache.maven.plugins +maven-shade-plugin + + true + true + + + +shade-avro +package + +shade + + + + +org.apache.avro:avro + + + + +org/apache/avro + ${shaded.prefix}/avro + + + + +META-INF/LICENSE.txt +${basedir}/../LICENSE-binary + + + + + + + + + Review Comment: nit, add a newline -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop-thirdparty] steveloughran commented on pull request #21: [HADOOP-18342] shaded avro jar
steveloughran commented on PR #21: URL: https://github.com/apache/hadoop-thirdparty/pull/21#issuecomment-1285900443 lgtm; needs a newline at the end of the new pom. +1 pending that
[jira] [Commented] (HADOOP-18471) An unhandled ArrayIndexOutOfBoundsException in DefaultStringifier.storeArray() if provided with an empty input
[ https://issues.apache.org/jira/browse/HADOOP-18471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621263#comment-17621263 ] ASF GitHub Bot commented on HADOOP-18471: - steveloughran commented on PR #4957: URL: https://github.com/apache/hadoop/pull/4957#issuecomment-1285897424 thanks, in trunk and branch-3.3 > An unhandled ArrayIndexOutOfBoundsException in > DefaultStringifier.storeArray() if provided with an empty input > -- > > Key: HADOOP-18471 > URL: https://issues.apache.org/jira/browse/HADOOP-18471 > Project: Hadoop Common > Issue Type: Bug > Components: common, io >Affects Versions: 3.3.4 >Reporter: FuzzingTeam >Assignee: FuzzingTeam >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0, 3.3.9 > > > The code throws an unhandled ArrayIndexOutOfBoundsException when method > _storeArray_ of DefaultStringifier.java is called with an empty array as > input. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on pull request #4957: HADOOP-18471. Fixed ArrayIndexOutOfBoundsException in class DefaultStringifier
steveloughran commented on PR #4957: URL: https://github.com/apache/hadoop/pull/4957#issuecomment-1285897424 thanks, in trunk and branch-3.3
[jira] [Assigned] (HADOOP-18471) An unhandled ArrayIndexOutOfBoundsException in DefaultStringifier.storeArray() if provided with an empty input
[ https://issues.apache.org/jira/browse/HADOOP-18471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-18471: --- Assignee: FuzzingTeam > An unhandled ArrayIndexOutOfBoundsException in > DefaultStringifier.storeArray() if provided with an empty input > -- > > Key: HADOOP-18471 > URL: https://issues.apache.org/jira/browse/HADOOP-18471 > Project: Hadoop Common > Issue Type: Bug > Components: common, io >Affects Versions: 3.3.4 >Reporter: FuzzingTeam >Assignee: FuzzingTeam >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0, 3.3.9 > > > The code throws an unhandled ArrayIndexOutOfBoundsException when method > _storeArray_ of DefaultStringifier.java is called with an empty array as > input. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-18471) An unhandled ArrayIndexOutOfBoundsException in DefaultStringifier.storeArray() if provided with an empty input
[ https://issues.apache.org/jira/browse/HADOOP-18471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-18471. - Fix Version/s: 3.4.0 3.3.9 Resolution: Fixed merged into branch-3.3 and trunk; not in 3.3.5 as i'm being more selective there right now > An unhandled ArrayIndexOutOfBoundsException in > DefaultStringifier.storeArray() if provided with an empty input > -- > > Key: HADOOP-18471 > URL: https://issues.apache.org/jira/browse/HADOOP-18471 > Project: Hadoop Common > Issue Type: Bug > Components: common, io >Affects Versions: 3.3.4 >Reporter: FuzzingTeam >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0, 3.3.9 > > > The code throws an unhandled ArrayIndexOutOfBoundsException when method > _storeArray_ of DefaultStringifier.java is called with an empty array as > input. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] pjfanning commented on pull request #4980: MAPREDUCE-7411: use secure XML parsers
pjfanning commented on PR #4980: URL: https://github.com/apache/hadoop/pull/4980#issuecomment-1285893112 @steveloughran rebased and build restarted
[jira] [Commented] (HADOOP-18471) An unhandled ArrayIndexOutOfBoundsException in DefaultStringifier.storeArray() if provided with an empty input
[ https://issues.apache.org/jira/browse/HADOOP-18471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621258#comment-17621258 ] ASF GitHub Bot commented on HADOOP-18471: - steveloughran merged PR #4957: URL: https://github.com/apache/hadoop/pull/4957 > An unhandled ArrayIndexOutOfBoundsException in > DefaultStringifier.storeArray() if provided with an empty input > -- > > Key: HADOOP-18471 > URL: https://issues.apache.org/jira/browse/HADOOP-18471 > Project: Hadoop Common > Issue Type: Bug > Components: common, io >Affects Versions: 3.3.4 >Reporter: FuzzingTeam >Priority: Minor > Labels: pull-request-available > > The code throws an unhandled ArrayIndexOutOfBoundsException when method > _storeArray_ of DefaultStringifier.java is called with an empty array as > input. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran merged pull request #4957: HADOOP-18471. Fixed ArrayIndexOutOfBoundsException in class DefaultStringifier
steveloughran merged PR #4957: URL: https://github.com/apache/hadoop/pull/4957
[jira] [Commented] (HADOOP-18471) An unhandled ArrayIndexOutOfBoundsException in DefaultStringifier.storeArray() if provided with an empty input
[ https://issues.apache.org/jira/browse/HADOOP-18471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621257#comment-17621257 ] ASF GitHub Bot commented on HADOOP-18471: - steveloughran commented on code in PR #4957: URL: https://github.com/apache/hadoop/pull/4957#discussion_r1000905429 ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/DefaultStringifier.java: ## @@ -158,6 +158,9 @@ public static K load(Configuration conf, String keyName, public static void storeArray(Configuration conf, K[] items, String keyName) throws IOException { +if (items.length == 0) { + throw new IndexOutOfBoundsException(); +} DefaultStringifier stringifier = new DefaultStringifier(conf, GenericsUtil.getClass(items[0])); Review Comment: ok > An unhandled ArrayIndexOutOfBoundsException in > DefaultStringifier.storeArray() if provided with an empty input > -- > > Key: HADOOP-18471 > URL: https://issues.apache.org/jira/browse/HADOOP-18471 > Project: Hadoop Common > Issue Type: Bug > Components: common, io >Affects Versions: 3.3.4 >Reporter: FuzzingTeam >Priority: Minor > Labels: pull-request-available > > The code throws an unhandled ArrayIndexOutOfBoundsException when method > _storeArray_ of DefaultStringifier.java is called with an empty array as > input.
[GitHub] [hadoop] steveloughran commented on a diff in pull request #4957: HADOOP-18471. Fixed ArrayIndexOutOfBoundsException in class DefaultStringifier
steveloughran commented on code in PR #4957: URL: https://github.com/apache/hadoop/pull/4957#discussion_r1000905429 ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/DefaultStringifier.java: ## @@ -158,6 +158,9 @@ public static K load(Configuration conf, String keyName, public static void storeArray(Configuration conf, K[] items, String keyName) throws IOException { +if (items.length == 0) { + throw new IndexOutOfBoundsException(); +} DefaultStringifier stringifier = new DefaultStringifier(conf, GenericsUtil.getClass(items[0])); Review Comment: ok
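The guard discussed in this review can be exercised on its own. The sketch below is a simplified, self-contained version of the precondition pattern; the class and method names are illustrative stand-ins, not Hadoop's actual DefaultStringifier. The idea is to fail fast with an explicit exception before `items[0]` is dereferenced.

```java
// Sketch of the precondition discussed in the review above: reject an
// empty array up front instead of letting items[0] throw a bare
// ArrayIndexOutOfBoundsException deeper inside the serialization code.
// StoreArrayGuard is a hypothetical stand-in, not Hadoop's class.
public class StoreArrayGuard {
    static <K> String storeArray(K[] items) {
        if (items.length == 0) {
            // The Hadoop patch throws a plain IndexOutOfBoundsException;
            // adding a message makes the failure easier to diagnose.
            throw new IndexOutOfBoundsException("empty array passed to storeArray()");
        }
        // Placeholder for the real work, which starts by inspecting the
        // class of the first element (GenericsUtil.getClass(items[0])).
        return items[0].getClass().getSimpleName() + " x" + items.length;
    }

    public static void main(String[] args) {
        System.out.println(storeArray(new Integer[]{1, 2, 3})); // Integer x3
        try {
            storeArray(new String[0]);
        } catch (IndexOutOfBoundsException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```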
[jira] [Commented] (HADOOP-17767) ABFS: Improve test scripts
[ https://issues.apache.org/jira/browse/HADOOP-17767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621252#comment-17621252 ] ASF GitHub Bot commented on HADOOP-17767: - steveloughran commented on PR #3124: URL: https://github.com/apache/hadoop/pull/3124#issuecomment-1285884963 @snvijaya merged to trunk; cherrypick in to branch-3.3, test it and push up a new pr and i will merge that. thanks > ABFS: Improve test scripts > -- > > Key: HADOOP-17767 > URL: https://issues.apache.org/jira/browse/HADOOP-17767 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Sneha Vijayarajan >Assignee: Sneha Vijayarajan >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 50m > Remaining Estimate: 0h > > Current test run scripts need manual update across all combinations in > runTests.sh for account name and is working off a single azure-auth-keys.xml > file. While having to test across accounts that span various geo, the config > file grows big and also needs a manual change for configs such as > fs.contract.test.[abfs/abfss] which has to be uniquely set. To use the script > across various combinations, dev to be aware of the names of all the > combinations defined in runTests.sh as well. > > These concerns are addressed in the new version of the scripts.
[jira] [Commented] (HADOOP-17767) ABFS: Improve test scripts
[ https://issues.apache.org/jira/browse/HADOOP-17767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621251#comment-17621251 ] ASF GitHub Bot commented on HADOOP-17767: - steveloughran merged PR #3124: URL: https://github.com/apache/hadoop/pull/3124 > ABFS: Improve test scripts > -- > > Key: HADOOP-17767 > URL: https://issues.apache.org/jira/browse/HADOOP-17767 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Sneha Vijayarajan >Assignee: Sneha Vijayarajan >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 1h 50m > Remaining Estimate: 0h > > Current test run scripts need manual update across all combinations in > runTests.sh for account name and is working off a single azure-auth-keys.xml > file. While having to test across accounts that span various geo, the config > file grows big and also needs a manual change for configs such as > fs.contract.test.[abfs/abfss] which has to be uniquely set. To use the script > across various combinations, dev to be aware of the names of all the > combinations defined in runTests.sh as well. > > These concerns are addressed in the new version of the scripts.
[GitHub] [hadoop] steveloughran commented on pull request #3124: HADOOP-17767. ABFS: Updates test scripts
steveloughran commented on PR #3124: URL: https://github.com/apache/hadoop/pull/3124#issuecomment-1285884963 @snvijaya merged to trunk; cherrypick in to branch-3.3, test it and push up a new pr and i will merge that. thanks
[GitHub] [hadoop] steveloughran merged pull request #3124: HADOOP-17767. ABFS: Updates test scripts
steveloughran merged PR #3124: URL: https://github.com/apache/hadoop/pull/3124
[GitHub] [hadoop] steveloughran commented on pull request #4980: MAPREDUCE-7411: use secure XML parsers
steveloughran commented on PR #4980: URL: https://github.com/apache/hadoop/pull/4980#issuecomment-1285878695 @pjfanning can you rebase and push up to kick jenkins off again? thanks
[GitHub] [hadoop] slfan1989 commented on pull request #5005: YARN-11342. [Federation] Refactor getNewApplication, submitApplication Use FederationActionRetry.
slfan1989 commented on PR #5005: URL: https://github.com/apache/hadoop/pull/5005#issuecomment-1285877687 @goiri Thank you very much for helping to review the code!
[GitHub] [hadoop] slfan1989 commented on a diff in pull request #4929: YARN-11229. [Federation] Add checkUserAccessToQueue REST APIs for Router.
slfan1989 commented on code in PR #4929: URL: https://github.com/apache/hadoop/pull/4929#discussion_r1000889253 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestFederationInterceptorRESTRetry.java: ## @@ -377,48 +378,37 @@ public void testGetNodeOneBadOneGood() * composed of only 1 bad SubCluster. */ @Test - public void testGetNodesOneBadSC() - throws YarnException, IOException, InterruptedException { + public void testGetNodesOneBadSC() throws Exception { setupCluster(Arrays.asList(bad2)); -NodesInfo response = interceptor.getNodes(null); -Assert.assertNotNull(response); -Assert.assertEquals(0, response.getNodes().size()); -// The remove duplicate operations is tested in TestRouterWebServiceUtil +LambdaTestUtils.intercept(YarnRuntimeException.class, "RM is stopped", +() -> interceptor.getNodes(null)); } /** * This test validates the correctness of GetNodes in case the cluster is * composed of only 2 bad SubClusters. */ @Test - public void testGetNodesTwoBadSCs() - throws YarnException, IOException, InterruptedException { + public void testGetNodesTwoBadSCs() throws Exception { + setupCluster(Arrays.asList(bad1, bad2)); -NodesInfo response = interceptor.getNodes(null); -Assert.assertNotNull(response); Review Comment: In Federation mode, if there is a problem with some sub-clusters, we should throw an exception directly to tell the user a clear error message, and should not return a normal response with empty node list.
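The refactored tests above swap assertions on an empty response for LambdaTestUtils.intercept, which runs a lambda and fails the test unless the named exception type, carrying the expected message text, is thrown. The sketch below mirrors the idea behind org.apache.hadoop.test.LambdaTestUtils, not its actual implementation:

```java
import java.util.concurrent.Callable;

// Minimal sketch of the intercept(...) pattern used in the refactored
// tests: run a callable and fail unless it throws the expected exception
// type whose text contains the expected fragment. This is an illustrative
// re-implementation, not Hadoop's LambdaTestUtils.
public class Intercept {
    static <E extends Throwable> E intercept(Class<E> clazz, String contained, Callable<?> call) {
        final Object result;
        try {
            result = call.call();
        } catch (Throwable t) {
            if (clazz.isInstance(t) && t.toString().contains(contained)) {
                // The expected failure: return it so tests can assert further.
                return clazz.cast(t);
            }
            throw new AssertionError("unexpected exception: " + t, t);
        }
        throw new AssertionError("expected " + clazz.getName() + " but the call returned " + result);
    }

    public static void main(String[] args) {
        IllegalStateException e = intercept(IllegalStateException.class, "RM is stopped",
            () -> { throw new IllegalStateException("RM is stopped"); });
        System.out.println("caught: " + e.getMessage());
    }
}
```

Compared with asserting on an empty `NodesInfo`, this makes the test fail loudly if the interceptor starts swallowing sub-cluster errors again.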
[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5030: YARN-11345. [Federation] Refactoring Yarn Router's Application Web Page.
slfan1989 commented on code in PR #5030: URL: https://github.com/apache/hadoop/pull/5030#discussion_r1000883075 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java: ## @@ -81,53 +154,57 @@ protected void render(Block html) { // Render the applications StringBuilder appsTableData = new StringBuilder("[\n"); -for (AppInfo app : apps.getApps()) { - try { - -String percent = String.format("%.1f", app.getProgress() * 100.0F); -String trackingURL = -app.getTrackingUrl() == null ? "#" : app.getTrackingUrl(); -// AppID numerical value parsed by parseHadoopID in yarn.dt.plugins.js -appsTableData.append("[\"") -.append("") -.append(app.getAppId()).append("\",\"") -.append(escape(app.getUser())).append("\",\"") -.append(escape(app.getName())).append("\",\"") -.append(escape(app.getApplicationType())).append("\",\"") -.append(escape(app.getQueue())).append("\",\"") -.append(String.valueOf(app.getPriority())).append("\",\"") -.append(app.getStartTime()).append("\",\"") -.append(app.getFinishTime()).append("\",\"") -.append(app.getState()).append("\",\"") -.append(app.getFinalStatus()).append("\",\"") -// Progress bar -.append(" ").append(" ") -// History link -.append("\",\"") -.append("History").append(""); -appsTableData.append("\"],\n"); - - } catch (Exception e) { -LOG.info( -"Cannot add application {}: {}", app.getAppId(), e.getMessage()); + +if (appsInfo != null && CollectionUtils.isNotEmpty(appsInfo.getApps())) { + for (AppInfo app : appsInfo.getApps()) { +try { + + String percent = String.format("%.1f", app.getProgress() * 100.0F); + String trackingURL = + app.getTrackingUrl() == null ? 
"#" : app.getTrackingUrl(); + + // AppID numerical value parsed by parseHadoopID in yarn.dt.plugins.js + appsTableData.append("[\"") + .append("") + .append(app.getAppId()).append("\",\"") + .append(escape(app.getUser())).append("\",\"") + .append(escape(app.getName())).append("\",\"") + .append(escape(app.getApplicationType())).append("\",\"") + .append(escape(app.getQueue())).append("\",\"") + .append(app.getPriority()).append("\",\"") + .append(app.getStartTime()).append("\",\"") + .append(app.getFinishTime()).append("\",\"") + .append(app.getState()).append("\",\"") + .append(app.getFinalStatus()).append("\",\"") + // Progress bar + .append(" ").append(" ") + // History link + .append("\",\"") + .append("History").append(""); + appsTableData.append("\"],\n"); + +} catch (Exception e) { + LOG.info("Cannot add application {}: {}", app.getAppId(), e.getMessage()); +} + } + + // The purpose of this part of the code is to remove redundant commas. Review Comment: Thanks for your suggestion, I will refactor this part of the code.
[GitHub] [hadoop] steveloughran merged pull request #5016: HDFS-16795. Use secure XML parsers (#4979)
steveloughran merged PR #5016: URL: https://github.com/apache/hadoop/pull/5016
[jira] [Resolved] (HADOOP-18156) Address JavaDoc warnings in classes like MarkerTool, S3ObjectAttributes, etc.
[ https://issues.apache.org/jira/browse/HADOOP-18156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-18156. - Fix Version/s: 3.3.9 Resolution: Fixed > Address JavaDoc warnings in classes like MarkerTool, S3ObjectAttributes, etc. > - > > Key: HADOOP-18156 > URL: https://issues.apache.org/jira/browse/HADOOP-18156 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.2 >Reporter: Mukund Thakur >Assignee: Ankit Saurabh >Priority: Minor > Labels: pull-request-available > Fix For: 3.3.9 > > > {noformat} > home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:856: > warning: empty tag > [ERROR]* > [ERROR] ^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:150: > warning: empty tag > [ERROR]* > [ERROR] ^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:964: > warning: no @param for source > [ERROR] public ScanArgsBuilder withSourceFS(final FileSystem source) { > [ERROR]^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:964: > warning: no @return > [ERROR] public ScanArgsBuilder withSourceFS(final FileSystem source) { > [ERROR]^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:970: > warning: no @param for p > [ERROR] public ScanArgsBuilder withPath(final Path p) { > [ERROR]^ > [ERROR] > 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:970: > warning: no @return > [ERROR] public ScanArgsBuilder withPath(final Path p) { > [ERROR]^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:976: > warning: no @param for d > [ERROR] public ScanArgsBuilder withDoPurge(final boolean d) { > [ERROR]^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:976: > warning: no @return > [ERROR] public ScanArgsBuilder withDoPurge(final boolean d) { > [ERROR]^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:982: > warning: no @param for min > [ERROR] public ScanArgsBuilder withMinMarkerCount(final int min) { > [ERROR]^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:982: > warning: no @return > [ERROR] public ScanArgsBuilder withMinMarkerCount(final int min) { > [ERROR]^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:988: > warning: no @param for max > [ERROR] public ScanArgsBuilder withMaxMarkerCount(final int max) { > [ERROR]^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:988: > warning: no @return > [ERROR] public ScanArgsBuilder withMaxMarkerCount(final int max) { > [ERROR]^ > 
[ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:994: > warning: no @param for l > [ERROR] public ScanArgsBuilder withLimit(final int l) { > [ERROR]^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/
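Each warning in the log above points at a fluent builder setter missing `@param` and `@return` javadoc tags. The fragment below is a simplified, hypothetical stand-in for MarkerTool's `ScanArgsBuilder` showing the shape of the fix those warnings call for:

```java
// Illustrates the kind of fix the javadoc warnings above require: every
// fluent setter documents its parameter and its return value. This class
// is a simplified sketch, not the real MarkerTool.ScanArgsBuilder.
public class ScanArgsBuilderSketch {
    private int limit;

    /**
     * Set the scan limit.
     * @param l maximum number of entries to scan
     * @return this builder, for call chaining
     */
    public ScanArgsBuilderSketch withLimit(final int l) {
        this.limit = l;
        return this;
    }

    /**
     * Get the configured limit.
     * @return the scan limit
     */
    public int getLimit() {
        return limit;
    }

    public static void main(String[] args) {
        System.out.println(new ScanArgsBuilderSketch().withLimit(42).getLimit()); // 42
    }
}
```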
[jira] [Commented] (HADOOP-18156) Address JavaDoc warnings in classes like MarkerTool, S3ObjectAttributes, etc.
[ https://issues.apache.org/jira/browse/HADOOP-18156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621239#comment-17621239 ] ASF GitHub Bot commented on HADOOP-18156: - steveloughran merged PR #5038: URL: https://github.com/apache/hadoop/pull/5038 > Address JavaDoc warnings in classes like MarkerTool, S3ObjectAttributes, etc. > - > > Key: HADOOP-18156 > URL: https://issues.apache.org/jira/browse/HADOOP-18156 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.2 >Reporter: Mukund Thakur >Assignee: Ankit Saurabh >Priority: Minor > Labels: pull-request-available > > {noformat} > home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:856: > warning: empty tag > [ERROR]* > [ERROR] ^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:150: > warning: empty tag > [ERROR]* > [ERROR] ^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:964: > warning: no @param for source > [ERROR] public ScanArgsBuilder withSourceFS(final FileSystem source) { > [ERROR]^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:964: > warning: no @return > [ERROR] public ScanArgsBuilder withSourceFS(final FileSystem source) { > [ERROR]^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:970: > warning: no @param for p > [ERROR] public ScanArgsBuilder withPath(final Path p) { > [ERROR]^ > [ERROR] > 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:970: > warning: no @return > [ERROR] public ScanArgsBuilder withPath(final Path p) { > [ERROR]^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:976: > warning: no @param for d > [ERROR] public ScanArgsBuilder withDoPurge(final boolean d) { > [ERROR]^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:976: > warning: no @return > [ERROR] public ScanArgsBuilder withDoPurge(final boolean d) { > [ERROR]^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:982: > warning: no @param for min > [ERROR] public ScanArgsBuilder withMinMarkerCount(final int min) { > [ERROR]^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:982: > warning: no @return > [ERROR] public ScanArgsBuilder withMinMarkerCount(final int min) { > [ERROR]^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:988: > warning: no @param for max > [ERROR] public ScanArgsBuilder withMaxMarkerCount(final int max) { > [ERROR]^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:988: > warning: no @return > [ERROR] public ScanArgsBuilder withMaxMarkerCount(final int max) { > [ERROR]^ > 
[ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@2/ubuntu-focal/src/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/tools/MarkerTool.java:994: > warning: no @param for l > [ERROR] public ScanArgsBuilder withLimit(final int l) { > [ERROR]^ > [ERROR] > /home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-4045@
[GitHub] [hadoop] steveloughran merged pull request #5038: HADOOP-18156. Address JavaDoc warnings in classes like MarkerTool, S3ObjectAttributes, etc (#4965)
steveloughran merged PR #5038: URL: https://github.com/apache/hadoop/pull/5038
[jira] [Commented] (HADOOP-18499) S3A to support setting proxy protocol
[ https://issues.apache.org/jira/browse/HADOOP-18499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621238#comment-17621238 ] ASF GitHub Bot commented on HADOOP-18499: - steveloughran commented on code in PR #5051: URL: https://github.com/apache/hadoop/pull/5051#discussion_r1000877935 ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java: ## @@ -212,6 +212,8 @@ private Constants() { public static final String PROXY_PASSWORD = "fs.s3a.proxy.password"; public static final String PROXY_DOMAIN = "fs.s3a.proxy.domain"; public static final String PROXY_WORKSTATION = "fs.s3a.proxy.workstation"; + /** Is the proxy secured(proxyProtocol = HTTPS)? */ + public static final String PROXY_SECURED = "fs.s3a.proxy.secured"; Review Comment: use the same string as for the endpoint, e.g "fs.s3a.proxy.ssl.enabled"; ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AProxy.java: ## @@ -0,0 +1,104 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.fs.s3a; + +import java.io.IOException; + +import com.amazonaws.ClientConfiguration; +import com.amazonaws.Protocol; +import org.assertj.core.api.Assertions; +import org.junit.Test; + +import org.apache.hadoop.conf.Configuration; + +import static org.apache.hadoop.fs.s3a.Constants.PROXY_HOST; +import static org.apache.hadoop.fs.s3a.Constants.PROXY_PORT; +import static org.apache.hadoop.fs.s3a.Constants.PROXY_SECURED; +import static org.apache.hadoop.fs.s3a.S3ATestUtils.getTestBucketName; +import static org.apache.hadoop.fs.s3a.S3ATestUtils.removeBaseAndBucketOverrides; +import static org.apache.hadoop.fs.s3a.S3AUtils.initProxySupport; + +/** + * Unit tests to verify {@link S3AUtils} translates the proxy configurations + * are set correctly to Client configurations which are later used to construct + * the proxy in AWS SDK. + */ +public class TestS3AProxy extends AbstractS3ATestBase{ Review Comment: this is actually an itest. make a subclass of AbstractHadoopTestBase, and add a space afterwards > S3A to support setting proxy protocol > - > > Key: HADOOP-18499 > URL: https://issues.apache.org/jira/browse/HADOOP-18499 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Reporter: Mehakmeet Singh >Assignee: Mehakmeet Singh >Priority: Major > Labels: pull-request-available > > Currently, we cannot set the protocol for a proxy in S3A. The proxy protocol > is set to "http" by default and thus we lack the support for HTTPS proxy in > S3A.
[GitHub] [hadoop] steveloughran commented on a diff in pull request #5051: HADOOP-18499. S3A to support setting proxy protocol
steveloughran commented on code in PR #5051: URL: https://github.com/apache/hadoop/pull/5051#discussion_r1000877935 ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java: ## @@ -212,6 +212,8 @@ private Constants() { public static final String PROXY_PASSWORD = "fs.s3a.proxy.password"; public static final String PROXY_DOMAIN = "fs.s3a.proxy.domain"; public static final String PROXY_WORKSTATION = "fs.s3a.proxy.workstation"; + /** Is the proxy secured(proxyProtocol = HTTPS)? */ + public static final String PROXY_SECURED = "fs.s3a.proxy.secured"; Review Comment: use the same string as for the endpoint, e.g "fs.s3a.proxy.ssl.enabled"; ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AProxy.java: ## @@ -0,0 +1,104 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.fs.s3a; + +import java.io.IOException; + +import com.amazonaws.ClientConfiguration; +import com.amazonaws.Protocol; +import org.assertj.core.api.Assertions; +import org.junit.Test; + +import org.apache.hadoop.conf.Configuration; + +import static org.apache.hadoop.fs.s3a.Constants.PROXY_HOST; +import static org.apache.hadoop.fs.s3a.Constants.PROXY_PORT; +import static org.apache.hadoop.fs.s3a.Constants.PROXY_SECURED; +import static org.apache.hadoop.fs.s3a.S3ATestUtils.getTestBucketName; +import static org.apache.hadoop.fs.s3a.S3ATestUtils.removeBaseAndBucketOverrides; +import static org.apache.hadoop.fs.s3a.S3AUtils.initProxySupport; + +/** + * Unit tests to verify {@link S3AUtils} translates the proxy configurations + * are set correctly to Client configurations which are later used to construct + * the proxy in AWS SDK. + */ +public class TestS3AProxy extends AbstractS3ATestBase{ Review Comment: this is actually an itest. make a subclass of AbstractHadoopTestBase, and add a space afterwards
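The review suggests naming the new option after the existing endpoint convention, e.g. `fs.s3a.proxy.ssl.enabled`. The sketch below shows how such a boolean flag could map to the proxy protocol choice; the property name and the mapping are assumptions based on the review comments, not a released Hadoop API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of mapping a boolean "proxy uses SSL" flag to a protocol string,
// as discussed in the review above. The property name
// fs.s3a.proxy.ssl.enabled is the reviewer's suggestion, and a plain Map
// stands in for Hadoop's Configuration; in the real code the result would
// feed the AWS SDK's ClientConfiguration proxy protocol.
public class ProxyProtocolSketch {
    static final String PROXY_SSL_ENABLED = "fs.s3a.proxy.ssl.enabled";

    static String proxyProtocol(Map<String, String> conf) {
        boolean secured = Boolean.parseBoolean(conf.getOrDefault(PROXY_SSL_ENABLED, "false"));
        return secured ? "https" : "http";
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(proxyProtocol(conf));  // http (the default today)
        conf.put(PROXY_SSL_ENABLED, "true");
        System.out.println(proxyProtocol(conf));  // https
    }
}
```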
[jira] [Commented] (HADOOP-15983) Use jersey-json that is built to use jackson2
[ https://issues.apache.org/jira/browse/HADOOP-15983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621236#comment-17621236 ] ASF GitHub Bot commented on HADOOP-15983: - steveloughran commented on PR #5048: URL: https://github.com/apache/hadoop/pull/5048#issuecomment-1285856871 thanks; pushing through again for a 3.3.5 release in #5053 > Use jersey-json that is built to use jackson2 > - > > Key: HADOOP-15983 > URL: https://issues.apache.org/jira/browse/HADOOP-15983 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: PJ Fanning >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 3h 50m > Remaining Estimate: 0h > > moves to a fork of jersey 1 which removes the jackson 1 dependency. > when cherrypicking this, HADOOP-18219 MUST also be included -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] bteke commented on a diff in pull request #5052: YARN-11356. Upgrade DataTables to 1.11.5 to fix CVEs
bteke commented on code in PR #5052: URL: https://github.com/apache/hadoop/pull/5052#discussion_r1000875531 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.11.5/css/jquery.dataTables.css: ## @@ -1,21 +1,3 @@ -/** Review Comment: We shouldn't remove it, however I'm not sure about the second part of the sentence, what do you mean by "if the JS pair of this file this"? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15983) Use jersey-json that is built to use jackson2
[ https://issues.apache.org/jira/browse/HADOOP-15983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621233#comment-17621233 ] ASF GitHub Bot commented on HADOOP-15983: - steveloughran opened a new pull request, #5053: URL: https://github.com/apache/hadoop/pull/5053 #3988 on branch-3.3.5 Moves from com.sun.jersey 1.19 to the artifact com.github.pjfanning:jersey-json:1.20 This allows jackson 1 to be removed from the classpath. Contains * HADOOP-16908. Prune Jackson 1 from the codebase and restrict its usage for future * HADOOP-18219. Fix shaded client test failure These are needed for the HADOOP-15983 changes to build. Contributed by PJ Fanning. ### Description of PR ### How was this patch tested? ### For code changes: - [X] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [X] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [X] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? > Use jersey-json that is built to use jackson2 > - > > Key: HADOOP-15983 > URL: https://issues.apache.org/jira/browse/HADOOP-15983 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: PJ Fanning >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 3h 50m > Remaining Estimate: 0h > > moves to a fork of jersey 1 which removes the jackson 1 dependency. > when cherrypicking this, HADOOP-18219 MUST also be included -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15983) Use jersey-json that is built to use jackson2
[ https://issues.apache.org/jira/browse/HADOOP-15983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621230#comment-17621230 ] ASF GitHub Bot commented on HADOOP-15983: - steveloughran merged PR #5048: URL: https://github.com/apache/hadoop/pull/5048 > Use jersey-json that is built to use jackson2 > - > > Key: HADOOP-15983 > URL: https://issues.apache.org/jira/browse/HADOOP-15983 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: PJ Fanning >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 3h 50m > Remaining Estimate: 0h > > moves to a fork of jersey 1 which removes the jackson 1 dependency. > when cherrypicking this, HADOOP-18219 MUST also be included -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] goiri merged pull request #5005: YARN-11342. [Federation] Refactor getNewApplication, submitApplication Use FederationActionRetry.
goiri merged PR #5005: URL: https://github.com/apache/hadoop/pull/5005 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] goiri commented on a diff in pull request #5030: YARN-11345. [Federation] Refactoring Yarn Router's Application Web Page.
goiri commented on code in PR #5030: URL: https://github.com/apache/hadoop/pull/5030#discussion_r1000850995 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/AppsBlock.java: ## @@ -81,53 +154,57 @@ protected void render(Block html) { // Render the applications StringBuilder appsTableData = new StringBuilder("[\n"); -for (AppInfo app : apps.getApps()) { - try { - -String percent = String.format("%.1f", app.getProgress() * 100.0F); -String trackingURL = -app.getTrackingUrl() == null ? "#" : app.getTrackingUrl(); -// AppID numerical value parsed by parseHadoopID in yarn.dt.plugins.js -appsTableData.append("[\"") -.append("") -.append(app.getAppId()).append("\",\"") -.append(escape(app.getUser())).append("\",\"") -.append(escape(app.getName())).append("\",\"") -.append(escape(app.getApplicationType())).append("\",\"") -.append(escape(app.getQueue())).append("\",\"") -.append(String.valueOf(app.getPriority())).append("\",\"") -.append(app.getStartTime()).append("\",\"") -.append(app.getFinishTime()).append("\",\"") -.append(app.getState()).append("\",\"") -.append(app.getFinalStatus()).append("\",\"") -// Progress bar -.append(" ").append(" ") -// History link -.append("\",\"") -.append("History").append(""); -appsTableData.append("\"],\n"); - - } catch (Exception e) { -LOG.info( -"Cannot add application {}: {}", app.getAppId(), e.getMessage()); + +if (appsInfo != null && CollectionUtils.isNotEmpty(appsInfo.getApps())) { + for (AppInfo app : appsInfo.getApps()) { +try { + + String percent = String.format("%.1f", app.getProgress() * 100.0F); + String trackingURL = + app.getTrackingUrl() == null ? 
"#" : app.getTrackingUrl(); + + // AppID numerical value parsed by parseHadoopID in yarn.dt.plugins.js + appsTableData.append("[\"") + .append("") + .append(app.getAppId()).append("\",\"") + .append(escape(app.getUser())).append("\",\"") + .append(escape(app.getName())).append("\",\"") + .append(escape(app.getApplicationType())).append("\",\"") + .append(escape(app.getQueue())).append("\",\"") + .append(app.getPriority()).append("\",\"") + .append(app.getStartTime()).append("\",\"") + .append(app.getFinishTime()).append("\",\"") + .append(app.getState()).append("\",\"") + .append(app.getFinalStatus()).append("\",\"") + // Progress bar + .append(" ").append(" ") + // History link + .append("\",\"") + .append("History").append(""); + appsTableData.append("\"],\n"); + +} catch (Exception e) { + LOG.info("Cannot add application {}: {}", app.getAppId(), e.getMessage()); +} + } + + // The purpose of this part of the code is to remove redundant commas. Review Comment: We know how many apps there will be, we should only add it if not the last: ``` if (appsInfo != null) { Collection apps = appsInfo.getApps(); if (CollectionUtils.isNotEmpty(apps)) { int numApps = apps.size(); for (AppInfo app: apps) { ... if (i < numApps - 1) { appsTableData.append(","); } } } } ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
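The reviewer's point — append the separator only between elements rather than trimming a trailing comma afterwards — is exactly what `java.util.StringJoiner` provides. A standalone illustration (the row strings here are simplified stand-ins for the rendered app rows):

```java
import java.util.List;
import java.util.StringJoiner;

public class AppsRowJoiner {
    /**
     * Join pre-rendered JSON array rows with ",\n" between them,
     * wrapped in "[\n" ... "\n]", with no trailing comma.
     */
    static String joinRows(List<String> rows) {
        StringJoiner joined = new StringJoiner(",\n", "[\n", "\n]");
        for (String row : rows) {
            joined.add(row);
        }
        return joined.toString();
    }

    public static void main(String[] args) {
        System.out.println(joinRows(List.of("[\"app_1\"]", "[\"app_2\"]")));
    }
}
```

Compared with the index-counting version sketched in the review, `StringJoiner` handles the empty and single-element cases without extra branches.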
[jira] [Commented] (HADOOP-15983) Use jersey-json that is built to use jackson2
[ https://issues.apache.org/jira/browse/HADOOP-15983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621204#comment-17621204 ] ASF GitHub Bot commented on HADOOP-15983: - hadoop-yetus commented on PR #5048: URL: https://github.com/apache/hadoop/pull/5048#issuecomment-1285788496 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 43s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +0 :ok: | shelldocs | 0m 1s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
| _ branch-3.3 Compile Tests _ | | +0 :ok: | mvndep | 15m 49s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 24m 17s | | branch-3.3 passed | | +1 :green_heart: | compile | 18m 22s | | branch-3.3 passed | | +1 :green_heart: | checkstyle | 3m 0s | | branch-3.3 passed | | +1 :green_heart: | mvnsite | 21m 37s | | branch-3.3 passed | | +1 :green_heart: | javadoc | 7m 24s | | branch-3.3 passed | | +0 :ok: | spotbugs | 0m 28s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +0 :ok: | spotbugs | 0m 26s | | branch/hadoop-client-modules/hadoop-client no spotbugs output file (spotbugsXml.xml) | | +0 :ok: | spotbugs | 0m 26s | | branch/hadoop-client-modules/hadoop-client-minicluster no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 57m 28s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 47s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 34m 50s | | the patch passed | | +1 :green_heart: | compile | 17m 39s | | the patch passed | | -1 :x: | javac | 17m 39s | [/results-compile-javac-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5048/2/artifact/out/results-compile-javac-root.txt) | root generated 2 new + 1856 unchanged - 0 fixed = 1858 total (was 1856) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 2m 54s | | root: The patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) | | +1 :green_heart: | mvnsite | 19m 57s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. 
| | +1 :green_heart: | javadoc | 6m 58s | | the patch passed | | +0 :ok: | spotbugs | 0m 28s | | hadoop-project has no data from spotbugs | | +0 :ok: | spotbugs | 0m 34s | | hadoop-client-modules/hadoop-client has no data from spotbugs | | +0 :ok: | spotbugs | 0m 37s | | hadoop-client-modules/hadoop-client-minicluster has no data from spotbugs | | +1 :green_heart: | shadedclient | 56m 25s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 107m 7s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5048/2/artifact/out/patch-unit-root.txt) | root in the patch failed. | | +1 :green_heart: | asflicense | 1m 15s | | The patch does not generate ASF License warnings. | | | | 432m 34s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor | | | hadoop.hdfs.TestDFSInputStream | | | hadoop.hdfs.TestReplication | | | hadoop.hdfs.TestErasureCodingMultipleRacks | | | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy | | | hadoop.hdfs.TestReadStripedFileWithDecoding | | | hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5048/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5048 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs
[GitHub] [hadoop] hadoop-yetus commented on pull request #5048: HADOOP-15983. Use jersey-json that is built to use jackson2 (branch 3.3)
hadoop-yetus commented on PR #5048: URL: https://github.com/apache/hadoop/pull/5048#issuecomment-1285788496 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 43s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +0 :ok: | shelldocs | 0m 1s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ branch-3.3 Compile Tests _ | | +0 :ok: | mvndep | 15m 49s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 24m 17s | | branch-3.3 passed | | +1 :green_heart: | compile | 18m 22s | | branch-3.3 passed | | +1 :green_heart: | checkstyle | 3m 0s | | branch-3.3 passed | | +1 :green_heart: | mvnsite | 21m 37s | | branch-3.3 passed | | +1 :green_heart: | javadoc | 7m 24s | | branch-3.3 passed | | +0 :ok: | spotbugs | 0m 28s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +0 :ok: | spotbugs | 0m 26s | | branch/hadoop-client-modules/hadoop-client no spotbugs output file (spotbugsXml.xml) | | +0 :ok: | spotbugs | 0m 26s | | branch/hadoop-client-modules/hadoop-client-minicluster no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 57m 28s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 47s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 34m 50s | | the patch passed | | +1 :green_heart: | compile | 17m 39s | | the patch passed | | -1 :x: | javac | 17m 39s | [/results-compile-javac-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5048/2/artifact/out/results-compile-javac-root.txt) | root generated 2 new + 1856 unchanged - 0 fixed = 1858 total (was 1856) | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 2m 54s | | root: The patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) | | +1 :green_heart: | mvnsite | 19m 57s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | javadoc | 6m 58s | | the patch passed | | +0 :ok: | spotbugs | 0m 28s | | hadoop-project has no data from spotbugs | | +0 :ok: | spotbugs | 0m 34s | | hadoop-client-modules/hadoop-client has no data from spotbugs | | +0 :ok: | spotbugs | 0m 37s | | hadoop-client-modules/hadoop-client-minicluster has no data from spotbugs | | +1 :green_heart: | shadedclient | 56m 25s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 107m 7s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5048/2/artifact/out/patch-unit-root.txt) | root in the patch failed. | | +1 :green_heart: | asflicense | 1m 15s | | The patch does not generate ASF License warnings. 
| | | | 432m 34s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor | | | hadoop.hdfs.TestDFSInputStream | | | hadoop.hdfs.TestReplication | | | hadoop.hdfs.TestErasureCodingMultipleRacks | | | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy | | | hadoop.hdfs.TestReadStripedFileWithDecoding | | | hadoop.hdfs.TestReadStripedFileWithDecodingDeletedData | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5048/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5048 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle shellcheck shelldocs | | uname | Linux a90e2c60fcb3 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | bra
[jira] [Commented] (HADOOP-18197) Update protobuf 3.7.1 to a version without CVE-2021-22569
[ https://issues.apache.org/jira/browse/HADOOP-18197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621198#comment-17621198 ] ASF GitHub Bot commented on HADOOP-18197: - hadoop-yetus commented on PR #4418: URL: https://github.com/apache/hadoop/pull/4418#issuecomment-1285780046 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 46m 14s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | shellcheck | 0m 0s | | Shellcheck was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +0 :ok: | hadolint | 0m 0s | | hadolint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 45s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 29m 26s | | trunk passed | | +1 :green_heart: | compile | 22m 17s | | trunk passed | | +1 :green_heart: | mvnsite | 20m 45s | | trunk passed | | +1 :green_heart: | javadoc | 7m 57s | | trunk passed | | +1 :green_heart: | shadedclient | 30m 33s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 34s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 1m 11s | [/patch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4418/3/artifact/out/patch-mvninstall-root.txt) | root in the patch failed. 
| | -1 :x: | compile | 0m 59s | [/patch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4418/3/artifact/out/patch-compile-root.txt) | root in the patch failed. | | -1 :x: | javac | 0m 59s | [/patch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4418/3/artifact/out/patch-compile-root.txt) | root in the patch failed. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -1 :x: | mvnsite | 0m 48s | [/patch-mvnsite-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4418/3/artifact/out/patch-mvnsite-root.txt) | root in the patch failed. | | +1 :green_heart: | xmllint | 0m 0s | | No new issues. | | -1 :x: | javadoc | 7m 33s | [/results-javadoc-javadoc-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4418/3/artifact/out/results-javadoc-javadoc-root.txt) | root generated 534 new + 2269 unchanged - 0 fixed = 2803 total (was 2269) | | -1 :x: | shadedclient | 9m 53s | | patch has errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 7m 42s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4418/3/artifact/out/patch-unit-root.txt) | root in the patch failed. | | +1 :green_heart: | asflicense | 1m 1s | | The patch does not generate ASF License warnings. 
| | | | 189m 56s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4418/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4418 | | Optional Tests | dupname asflicense codespell detsecrets shellcheck shelldocs hadolint mvnsite unit compile javac javadoc mvninstall shadedclient xmllint | | uname | Linux eeeb6886f515 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 4f05bf48185e1cb3edce862286a3fc01b41ea451 | | Default Java | Red Hat, Inc.-1.8.0_345-b01 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4418/3/testReport/ | | Max. process+thread count | 530 (vs. ulimit of 5500) | | modules | C: hadoop-project . U: . | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4418/3/console | | versions | git=2.9.5 maven=3.6.3 xmllint=20901 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #4418: HADOOP-18197. Upgrade protobuf to 3.21.7 (through upgraded hadoop-shaded-protobuf jar)
hadoop-yetus commented on PR #4418: URL: https://github.com/apache/hadoop/pull/4418#issuecomment-1285780046 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 46m 14s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | shellcheck | 0m 0s | | Shellcheck was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +0 :ok: | hadolint | 0m 0s | | hadolint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 45s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 29m 26s | | trunk passed | | +1 :green_heart: | compile | 22m 17s | | trunk passed | | +1 :green_heart: | mvnsite | 20m 45s | | trunk passed | | +1 :green_heart: | javadoc | 7m 57s | | trunk passed | | +1 :green_heart: | shadedclient | 30m 33s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 34s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 1m 11s | [/patch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4418/3/artifact/out/patch-mvninstall-root.txt) | root in the patch failed. | | -1 :x: | compile | 0m 59s | [/patch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4418/3/artifact/out/patch-compile-root.txt) | root in the patch failed. 
| | -1 :x: | javac | 0m 59s | [/patch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4418/3/artifact/out/patch-compile-root.txt) | root in the patch failed. | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -1 :x: | mvnsite | 0m 48s | [/patch-mvnsite-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4418/3/artifact/out/patch-mvnsite-root.txt) | root in the patch failed. | | +1 :green_heart: | xmllint | 0m 0s | | No new issues. | | -1 :x: | javadoc | 7m 33s | [/results-javadoc-javadoc-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4418/3/artifact/out/results-javadoc-javadoc-root.txt) | root generated 534 new + 2269 unchanged - 0 fixed = 2803 total (was 2269) | | -1 :x: | shadedclient | 9m 53s | | patch has errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 7m 42s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4418/3/artifact/out/patch-unit-root.txt) | root in the patch failed. | | +1 :green_heart: | asflicense | 1m 1s | | The patch does not generate ASF License warnings. | | | | 189m 56s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4418/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4418 | | Optional Tests | dupname asflicense codespell detsecrets shellcheck shelldocs hadolint mvnsite unit compile javac javadoc mvninstall shadedclient xmllint | | uname | Linux eeeb6886f515 4.15.0-191-generic #202-Ubuntu SMP Thu Aug 4 01:49:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 4f05bf48185e1cb3edce862286a3fc01b41ea451 | | Default Java | Red Hat, Inc.-1.8.0_345-b01 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4418/3/testReport/ | | Max. 
process+thread count | 530 (vs. ulimit of 5500) | | modules | C: hadoop-project . U: . | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4418/3/console | | versions | git=2.9.5 maven=3.6.3 xmllint=20901 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #4929: YARN-11229. [Federation] Add checkUserAccessToQueue REST APIs for Router.
hadoop-yetus commented on PR #4929: URL: https://github.com/apache/hadoop/pull/4929#issuecomment-1285674115 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 37s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 47s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 25m 54s | | trunk passed | | +1 :green_heart: | compile | 4m 2s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 3m 30s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 25s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 53s | | trunk passed | | +1 :green_heart: | javadoc | 1m 42s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 29s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 27s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 50s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 21m 15s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 31s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 27s | | the patch passed | | +1 :green_heart: | compile | 4m 12s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | -1 :x: | javac | 4m 12s | [/results-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4929/11/artifact/out/results-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 1 new + 437 unchanged - 1 fixed = 438 total (was 438) | | +1 :green_heart: | compile | 3m 29s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 3m 29s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 9s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4929/11/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 1 new + 0 unchanged - 3 fixed = 1 total (was 3) | | +1 :green_heart: | mvnsite | 1m 35s | | the patch passed | | +1 :green_heart: | javadoc | 1m 11s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 6s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 6s | | the patch passed | | +1 :green_heart: | shadedclient | 21m 33s | | patch has no errors when building and testing our client artifacts. 
| _ Other Tests _ | | +1 :green_heart: | unit | 99m 29s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | unit | 5m 15s | | hadoop-yarn-server-router in the patch passed. | | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. | | | | 227m 45s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4929/11/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4929 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux baef3536a1fa 4.15.0-191
[GitHub] [hadoop] hadoop-yetus commented on pull request #4929: YARN-11229. [Federation] Add checkUserAccessToQueue REST APIs for Router.
hadoop-yetus commented on PR #4929: URL: https://github.com/apache/hadoop/pull/4929#issuecomment-1285672095 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 40s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 48s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 25m 38s | | trunk passed | | +1 :green_heart: | compile | 3m 59s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 3m 26s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 22s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 53s | | trunk passed | | +1 :green_heart: | javadoc | 1m 49s | | trunk passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 33s | | trunk passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 16s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 16s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 20m 40s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 25s | | the patch passed | | +1 :green_heart: | compile | 3m 53s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | -1 :x: | javac | 3m 53s | [/results-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4929/12/artifact/out/results-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04.txt) | hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server-jdkUbuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 generated 1 new + 437 unchanged - 1 fixed = 438 total (was 438) | | +1 :green_heart: | compile | 3m 20s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 3m 20s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 8s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4929/12/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 1 new + 0 unchanged - 3 fixed = 1 total (was 3) | | +1 :green_heart: | mvnsite | 1m 35s | | the patch passed | | +1 :green_heart: | javadoc | 1m 18s | | the patch passed with JDK Ubuntu-11.0.16+8-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 8s | | the patch passed with JDK Private Build-1.8.0_342-8u342-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 1s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 15s | | patch has no errors when building and testing our client artifacts. 
| _ Other Tests _ | | +1 :green_heart: | unit | 99m 2s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | unit | 5m 5s | | hadoop-yarn-server-router in the patch passed. | | +1 :green_heart: | asflicense | 0m 47s | | The patch does not generate ASF License warnings. | | | | 224m 55s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4929/12/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4929 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 4002234e1b50 4.15.0-191
[jira] [Updated] (HADOOP-18136) Verify FileUtils.unTar() handling of missing .tar files: Fixes CVE-2022-25168
[ https://issues.apache.org/jira/browse/HADOOP-18136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-18136: Description: add a test to verify FileUtils.unTar() of a non .gz fails meaningfully if file isn't present; fix if not. test both the unix and windows paths. This patch contains the fix (and tests to verify it) for CVE-2022-25168 [mitre CVE|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25168] hadoop branches without YARN-2185 are at risk in yarn downloads; those with the patch in are not h2. Announcement {code} Severity: important Versions affected: 2.0.0 to 2.10.1, 3.0.0-alpha to 3.2.3, 3.3.0 to 3.3.2 Description: Apache Hadoop's FileUtil.unTar(File, File) API does not escape the input file name before being passed to the shell. An attacker can inject arbitrary commands. This is only used in Hadoop 3.3 InMemoryAliasMap.completeBootstrapTransfer, which is only ever run by a local user. It has been used in Hadoop 2.x for yarn localization, which does enable remote code execution. It is used in Apache Spark, from the SQL command ADD ARCHIVE. As the ADD ARCHIVE command adds new binaries to the classpath, being able to execute shell scripts does not confer new permissions to the caller. SPARK-38305. "Check existence of file before untarring/zipping", which is included in 3.3.0, 3.1.4, 3.2.2, prevents shell commands being executed, regardless of which version of the hadoop libraries are in use. Mitigation: Users should upgrade to Apache Hadoop 2.10.2, 3.2.4, 3.3.3 or upper (including HADOOP-18136). Credit: Apache Hadoop would like to thank Kostya Kortchinsky for reporting this issue {code} was: add a test to verify FileUtils.unTar() of a non .gz fails meaningfully if file isn't present; fix if not. test both the unix and windows paths. This patch contains the fix (and tests to verify it) for CVE-2022-25168 [mitre CVE|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25168] h2. 
Announcement {code} Severity: important Versions affected: 2.0.0 to 2.10.1, 3.0.0-alpha to 3.2.3, 3.3.0 to 3.3.2 Description: Apache Hadoop's FileUtil.unTar(File, File) API does not escape the input file name before being passed to the shell. An attacker can inject arbitrary commands. This is only used in Hadoop 3.3 InMemoryAliasMap.completeBootstrapTransfer, which is only ever run by a local user. It has been used in Hadoop 2.x for yarn localization, which does enable remote code execution. It is used in Apache Spark, from the SQL command ADD ARCHIVE. As the ADD ARCHIVE command adds new binaries to the classpath, being able to execute shell scripts does not confer new permissions to the caller. SPARK-38305. "Check existence of file before untarring/zipping", which is included in 3.3.0, 3.1.4, 3.2.2, prevents shell commands being executed, regardless of which version of the hadoop libraries are in use. Mitigation: Users should upgrade to Apache Hadoop 2.10.2, 3.2.4, 3.3.3 or upper (including HADOOP-18136). Credit: Apache Hadoop would like to thank Kostya Kortchinsky for reporting this issue {code} > Verify FileUtils.unTar() handling of missing .tar files: Fixes CVE-2022-25168 > - > > Key: HADOOP-18136 > URL: https://issues.apache.org/jira/browse/HADOOP-18136 > Project: Hadoop Common > Issue Type: Improvement > Components: test, util >Affects Versions: 3.1.4, 2.10.1, 3.3.1, 3.2.3 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Fix For: 2.10.2, 3.2.4, 3.3.3 > > > add a test to verify FileUtils.unTar() of a non .gz fails meaningfully if > file isn't present; fix if not. > test both the unix and windows paths. > This patch contains the fix (and tests to verify it) for CVE-2022-25168 > [mitre CVE|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25168] > hadoop branches without YARN-2185 are at risk in yarn downloads; those with > the patch in are not > h2. 
Announcement > {code} > Severity: important > Versions affected: > 2.0.0 to 2.10.1, 3.0.0-alpha to 3.2.3, 3.3.0 to 3.3.2 > Description: > Apache Hadoop's FileUtil.unTar(File, File) API does not escape the > input file name before being passed to the shell. An attacker can > inject arbitrary commands. > This is only used in Hadoop 3.3 > InMemoryAliasMap.completeBootstrapTransfer, which is only ever run by > a local user. > It has been used in Hadoop 2.x for yarn localization, which does > enable remote code execution. > It is used in Apache Spark, from the SQL command ADD ARCHIVE. As the > ADD ARCHIVE command adds new binaries to the classpath, being able to > execute shell scripts does not confer new permissions to the caller. > SPARK-38305. "Check existence of file before untarring/zipping", which > is
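The injection described in the announcement above comes down to how the file name reaches the shell. A minimal sketch (hypothetical `UntarGuard` class, not Hadoop's actual `FileUtil` code) contrasting the vulnerable pattern with building the command as an argument vector, plus a SPARK-38305-style existence check before untarring:

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;

public class UntarGuard {
    // Vulnerable pattern (do not use): interpolating the name into a single
    // shell string means a name like "x.tar; rm -rf ~" runs a second command.
    static String[] unsafeCommand(File tarFile) {
        return new String[] { "bash", "-c", "tar -xf " + tarFile.getPath() };
    }

    // Safer sketch: fail fast on a missing file (the check SPARK-38305 adds
    // before untarring) and pass the name as a discrete argument, so the
    // shell never interprets its contents.
    static String[] safeCommand(File tarFile) throws IOException {
        if (!tarFile.exists()) {
            throw new FileNotFoundException("tar file not found: " + tarFile);
        }
        return new String[] { "tar", "-xf", tarFile.getAbsolutePath() };
    }
}
```

An argument array handed to `ProcessBuilder` is passed to `exec` directly, with no shell in between, which is why the second form does not need any escaping of the file name.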
[GitHub] [hadoop] tasanuma commented on pull request #5050: HDFS-16809. EC striped block is not sufficient when doing in maintenance.
tasanuma commented on PR #5050: URL: https://github.com/apache/hadoop/pull/5050#issuecomment-1285472477 @dingshun3016 Thanks for reporting the issue and submitting the PR. Is it possible to add a unit test? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15915) Report problems w/ local S3A buffer directory meaningfully
[ https://issues.apache.org/jira/browse/HADOOP-15915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17621073#comment-17621073 ] Zbigniew Kostrzewa commented on HADOOP-15915: - I've recently stumbled upon this with {{3.2.2}}. For me the problem was that I did not change {{hadoop.tmp.dir}}, so the {{s3ablock-0001-}} files were created in the {{/tmp/hadoop-/s3a}} directory. At the same time, on CentOS 7 in my case, there is a systemd service, {{systemd-tmpfiles-clean.service}}, which runs once a day and cleans {{/tmp}} of files and directories older than 10 days. However, once the Node Manager has cached the fact that {{/tmp/hadoop-/s3a}} exists, it does not re-check it and does not re-create the directory if it no longer exists. I believe the code responsible for this is: {code:java} /** This method gets called everytime before any read/write to make sure * that any change to localDirs is reflected immediately. */ private Context confChanged(Configuration conf) throws IOException { ... if (!newLocalDirs.equals(ctx.savedLocalDirs)) { {code} and when the directory is missing, log aggregation fails with this {{DiskChecker}} error. > Report problems w/ local S3A buffer directory meaningfully > -- > > Key: HADOOP-15915 > URL: https://issues.apache.org/jira/browse/HADOOP-15915 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.1.1 >Reporter: Steve Loughran >Priority: Major > > When there's a problem working with the temp directory used for block output > and the staging committers the actual path (and indeed config option) aren't > printed. > Improvements: tell the user which directory isn't writeable -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
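The re-check the comment above argues for can be sketched as follows (hypothetical `BufferDirGuard` helper, not the actual `LocalDirAllocator` code): validate the cached directory on every access and re-create it if an external cleaner has removed it.

```java
import java.io.File;
import java.io.IOException;

public class BufferDirGuard {
    // Re-validate a cached local buffer directory before each use, so that
    // an external cleaner (e.g. systemd-tmpfiles pruning /tmp) deleting it
    // does not leave later writes failing with a DiskChecker-style error.
    static File ensureDir(File dir) throws IOException {
        // mkdirs() may race with another thread, so accept "already exists".
        if (!dir.exists() && !dir.mkdirs() && !dir.isDirectory()) {
            throw new IOException("cannot re-create local buffer dir: " + dir);
        }
        if (!dir.canWrite()) {
            throw new IOException("local buffer dir is not writable: " + dir);
        }
        return dir;
    }
}
```

Checking on every access trades a cheap `stat` per read/write for robustness against anything else on the host deleting the directory, which is exactly the failure mode the systemd tmpfiles cleaner triggers here.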