[jira] [Commented] (YARN-11642) Fix Flaky Test TestTimelineAuthFilterForV2#testPutTimelineEntities
[ https://issues.apache.org/jira/browse/YARN-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803762#comment-17803762 ] ASF GitHub Bot commented on YARN-11642: --- slfan1989 commented on PR #6417: URL: https://github.com/apache/hadoop/pull/6417#issuecomment-1879572817 @ayushtkn Can you help review this PR? Thank you very much! > Fix Flaky Test TestTimelineAuthFilterForV2#testPutTimelineEntities > -- > > Key: YARN-11642 > URL: https://issues.apache.org/jira/browse/YARN-11642 > Project: Hadoop YARN > Issue Type: Improvement > Components: timelineservice >Affects Versions: 3.5.0 >Reporter: Shilun Fan >Assignee: Shilun Fan >Priority: Major > Labels: pull-request-available > > Our current unit tests are all executed in parallel. > TestTimelineAuthFilterForV2#testPutTimelineEntities will report an error > during execution: > {code:java} > [main] collector.PerNodeTimelineCollectorsAuxService > (StringUtils.java:startupShutdownMessage(755)) - failed to register any UNIX > signal loggers: > java.lang.IllegalStateException: Can't re-install the signal handlers. > {code} > We can solve this problem by changing static initialization to new Object. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
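The fix described in the issue above (replacing static initialization with a freshly constructed object) can be sketched in isolation. This is a hedged illustration with hypothetical class names, not the actual Hadoop patch: it models the one-shot, JVM-global nature of UNIX signal-handler registration and shows why per-instance, best-effort registration avoids the IllegalStateException when parallel test classes share one JVM.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical model of a JVM-global resource that, like UNIX signal
// handlers, may only be installed once per JVM.
class GlobalSignalHandlers {
    private static final AtomicBoolean installed = new AtomicBoolean(false);

    static void install() {
        if (!installed.compareAndSet(false, true)) {
            throw new IllegalStateException("Can't re-install the signal handlers.");
        }
    }
}

// Sketch of the fix: instead of registering from a static initializer
// (which runs at class-load time and collides when several test classes
// run in the same JVM), each test constructs a fresh object that
// registers best-effort and tolerates "already installed".
class PerTestAuxService {
    private final boolean ownsHandlers;

    PerTestAuxService() {
        boolean ok;
        try {
            GlobalSignalHandlers.install();
            ok = true;
        } catch (IllegalStateException alreadyInstalled) {
            ok = false; // another test in this JVM got there first; fine
        }
        this.ownsHandlers = ok;
    }

    boolean ownsHandlers() { return ownsHandlers; }
}
```

With this shape, a second test class constructing the service in the same JVM no longer fails at class-load time; it simply observes that the handlers are already installed.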
[jira] [Commented] (YARN-11634) Speed-up TestTimelineClient
[ https://issues.apache.org/jira/browse/YARN-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803761#comment-17803761 ] ASF GitHub Bot commented on YARN-11634: --- hadoop-yetus commented on PR #6419: URL: https://github.com/apache/hadoop/pull/6419#issuecomment-1879571378 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 20s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 38s | | trunk passed | | +1 :green_heart: | compile | 0m 24s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 20s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 23s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 28s | | trunk passed | | +1 :green_heart: | javadoc | 0m 37s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 31s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | -1 :x: | spotbugs | 1m 0s | [/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6419/1/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-warnings.html) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in trunk has 1 extant spotbugs warnings. 
| | +1 :green_heart: | shadedclient | 19m 37s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 23s | | the patch passed | | +1 :green_heart: | compile | 0m 21s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 21s | | the patch passed | | +1 :green_heart: | compile | 0m 19s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 19s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 16s | | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: The patch generated 0 new + 24 unchanged - 2 fixed = 24 total (was 26) | | +1 :green_heart: | mvnsite | 0m 20s | | the patch passed | | +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 27s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 1s | | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) | | +1 :green_heart: | shadedclient | 19m 37s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 4m 30s | | hadoop-yarn-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 24s | | The patch does not generate ASF License warnings. 
| | | | 84m 44s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6419/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6419 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 4bf2cc097043 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / b4193ad58b4d46cf59d74721a08ea51fd25d997a | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6419/1/test
[jira] [Commented] (YARN-11642) Fix Flaky Test TestTimelineAuthFilterForV2#testPutTimelineEntities
[ https://issues.apache.org/jira/browse/YARN-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803760#comment-17803760 ] ASF GitHub Bot commented on YARN-11642: --- hadoop-yetus commented on PR #6417: URL: https://github.com/apache/hadoop/pull/6417#issuecomment-1879569732 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 49s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 48m 22s | | trunk passed | | +1 :green_heart: | compile | 0m 27s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 25s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 25s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 29s | | trunk passed | | +1 :green_heart: | javadoc | 0m 28s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 22s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 33s | | branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 37m 14s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 19s | | the patch passed | | +1 :green_heart: | compile | 0m 19s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 19s | | the patch passed | | +1 :green_heart: | compile | 0m 17s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 17s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 14s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 19s | | the patch passed | | +1 :green_heart: | javadoc | 0m 15s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 14s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 19s | | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests has no data from spotbugs | | +1 :green_heart: | shadedclient | 37m 0s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 3m 17s | | hadoop-yarn-server-tests in the patch passed. | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. 
| | | | 137m 25s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6417/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6417 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux b5c2b648e353 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 8f052ae6a9fd176248fe982425f7644791e797be | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6417/1/testReport/ | | Max. process+thread count | 623 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR
[jira] [Commented] (YARN-11634) Speed-up TestTimelineClient
[ https://issues.apache.org/jira/browse/YARN-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803755#comment-17803755 ] ASF GitHub Bot commented on YARN-11634: --- slfan1989 commented on PR #6419: URL: https://github.com/apache/hadoop/pull/6419#issuecomment-1879552862 @brumi1024 @K0K0V0K In #6371, we introduced a spotbugs warning. I tried to fix the code; can you help review this PR? Thank you very much! [ReportUrl](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6411/1/artifact/out/branch-spotbugs-hadoop-yarn-project-warnings.html) > Speed-up TestTimelineClient > --- > > Key: YARN-11634 > URL: https://issues.apache.org/jira/browse/YARN-11634 > Project: Hadoop YARN > Issue Type: Improvement > Components: yarn >Reporter: Bence Kosztolnik >Assignee: Bence Kosztolnik >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > > The TimelineConnector.class has a hardcoded 1-minute connection timeout, > which makes the TestTimelineClient a long-running test (~15:30 min). > Decreasing the timeout to 10ms will speed up the test run (~56 sec). -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
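The speed-up quoted in the issue description rests on a simple pattern: a hardcoded production-scale timeout made overridable so a test exercising connection failures does not wait out the full minute per attempt. The sketch below uses hypothetical names (it is not the actual TimelineConnector code) to show that pattern under the issue's stated values of 1 minute and 10 ms.

```java
// Hypothetical sketch: a connector whose connection timeout defaults to
// production scale (1 minute) but can be lowered by tests (e.g. to 10 ms),
// turning a ~15-minute timeout-bound test run into seconds.
class SketchConnector {
    static final int DEFAULT_CONNECT_TIMEOUT_MS = 60_000; // 1 minute

    private int connectTimeoutMs = DEFAULT_CONNECT_TIMEOUT_MS;

    // Package-private hook for tests; production code keeps the default.
    void setConnectTimeoutMs(int ms) {
        if (ms <= 0) {
            throw new IllegalArgumentException("timeout must be positive: " + ms);
        }
        this.connectTimeoutMs = ms;
    }

    int getConnectTimeoutMs() {
        return connectTimeoutMs;
    }
}
```

A test would then call the setter before triggering a connection attempt, while production callers never touch it and keep the 1-minute default.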
[jira] [Commented] (YARN-11634) Speed-up TestTimelineClient
[ https://issues.apache.org/jira/browse/YARN-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803754#comment-17803754 ] ASF GitHub Bot commented on YARN-11634: --- slfan1989 opened a new pull request, #6419: URL: https://github.com/apache/hadoop/pull/6419 ### Description of PR JIRA: YARN-11634. [Addendum] Speed-up TestTimelineClient. ### How was this patch tested? ### For code changes: - [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? > Speed-up TestTimelineClient > --- > > Key: YARN-11634 > URL: https://issues.apache.org/jira/browse/YARN-11634 > Project: Hadoop YARN > Issue Type: Improvement > Components: yarn >Reporter: Bence Kosztolnik >Assignee: Bence Kosztolnik >Priority: Minor > Labels: pull-request-available > Fix For: 3.4.0 > > > The TimelineConnector.class has a hardcoded 1-minute connection timeout, > which makes the TestTimelineClient a long-running test (~15:30 min). > Decreasing the timeout to 10ms will speed up the test run (~56 sec). -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Commented] (YARN-11631) [GPG] Add GPGWebServices
[ https://issues.apache.org/jira/browse/YARN-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803745#comment-17803745 ] ASF GitHub Bot commented on YARN-11631: --- hadoop-yetus commented on PR #6354: URL: https://github.com/apache/hadoop/pull/6354#issuecomment-1879544980 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 33s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 41m 40s | | trunk passed | | +1 :green_heart: | compile | 0m 25s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 25s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 25s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 30s | | trunk passed | | +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 45s | | trunk passed | | +1 :green_heart: | shadedclient | 32m 3s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 19s | | the patch passed | | +1 :green_heart: | compile | 0m 18s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 18s | | the patch passed | | +1 :green_heart: | compile | 0m 17s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 17s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 14s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-globalpolicygenerator.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6354/8/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-globalpolicygenerator.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator: The patch generated 7 new + 0 unchanged - 0 fixed = 7 total (was 0) | | +1 :green_heart: | mvnsite | 0m 19s | | the patch passed | | +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 18s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 44s | | the patch passed | | +1 :green_heart: | shadedclient | 33m 19s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 59s | | hadoop-yarn-server-globalpolicygenerator in the patch passed. | | +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. 
| | | | 119m 40s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6354/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6354 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle | | uname | Linux 2174a85c0b1d 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 737ce5c8e495cf78e25e26aa854a82e3330ee692 | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/
[jira] [Commented] (YARN-11642) Fix Flaky Test TestTimelineAuthFilterForV2#testPutTimelineEntities
[ https://issues.apache.org/jira/browse/YARN-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803741#comment-17803741 ] ASF GitHub Bot commented on YARN-11642: --- slfan1989 opened a new pull request, #6417: URL: https://github.com/apache/hadoop/pull/6417 ### Description of PR JIRA: Fix Flaky Test TestTimelineAuthFilterForV2#testPutTimelineEntities. ### How was this patch tested? ### For code changes: - [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? > Fix Flaky Test TestTimelineAuthFilterForV2#testPutTimelineEntities > -- > > Key: YARN-11642 > URL: https://issues.apache.org/jira/browse/YARN-11642 > Project: Hadoop YARN > Issue Type: Improvement > Components: timelineservice >Affects Versions: 3.5.0 >Reporter: Shilun Fan >Assignee: Shilun Fan >Priority: Major > > Our current unit tests are all executed in parallel. > TestTimelineAuthFilterForV2#testPutTimelineEntities will report an error > during execution: > {code:java} > [main] collector.PerNodeTimelineCollectorsAuxService > (StringUtils.java:startupShutdownMessage(755)) - failed to register any UNIX > signal loggers: > java.lang.IllegalStateException: Can't re-install the signal handlers. > {code} > We can solve this problem by changing static initialization to new Object. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-11642) Fix Flaky Test TestTimelineAuthFilterForV2#testPutTimelineEntities
[ https://issues.apache.org/jira/browse/YARN-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated YARN-11642: -- Labels: pull-request-available (was: ) > Fix Flaky Test TestTimelineAuthFilterForV2#testPutTimelineEntities > -- > > Key: YARN-11642 > URL: https://issues.apache.org/jira/browse/YARN-11642 > Project: Hadoop YARN > Issue Type: Improvement > Components: timelineservice >Affects Versions: 3.5.0 >Reporter: Shilun Fan >Assignee: Shilun Fan >Priority: Major > Labels: pull-request-available > > Our current unit tests are all executed in parallel. > TestTimelineAuthFilterForV2#testPutTimelineEntities will report an error > during execution: > {code:java} > [main] collector.PerNodeTimelineCollectorsAuxService > (StringUtils.java:startupShutdownMessage(755)) - failed to register any UNIX > signal loggers: > java.lang.IllegalStateException: Can't re-install the signal handlers. > {code} > We can solve this problem by changing static initialization to new Object. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-11642) Fix Flaky Test TestTimelineAuthFilterForV2#testPutTimelineEntities
[ https://issues.apache.org/jira/browse/YARN-11642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shilun Fan updated YARN-11642: -- Summary: Fix Flaky Test TestTimelineAuthFilterForV2#testPutTimelineEntities (was: Fix Flask Test TestTimelineAuthFilterForV2#testPutTimelineEntities) > Fix Flaky Test TestTimelineAuthFilterForV2#testPutTimelineEntities > -- > > Key: YARN-11642 > URL: https://issues.apache.org/jira/browse/YARN-11642 > Project: Hadoop YARN > Issue Type: Improvement > Components: timelineservice >Affects Versions: 3.5.0 >Reporter: Shilun Fan >Assignee: Shilun Fan >Priority: Major > > Our current unit tests are all executed in parallel. > TestTimelineAuthFilterForV2#testPutTimelineEntities will report an error > during execution: > {code:java} > [main] collector.PerNodeTimelineCollectorsAuxService > (StringUtils.java:startupShutdownMessage(755)) - failed to register any UNIX > signal loggers: > java.lang.IllegalStateException: Can't re-install the signal handlers. > {code} > We can solve this problem by changing static initialization to new Object. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Created] (YARN-11642) Fix Flask Test TestTimelineAuthFilterForV2#testPutTimelineEntities
Shilun Fan created YARN-11642: - Summary: Fix Flask Test TestTimelineAuthFilterForV2#testPutTimelineEntities Key: YARN-11642 URL: https://issues.apache.org/jira/browse/YARN-11642 Project: Hadoop YARN Issue Type: Improvement Components: timelineservice Affects Versions: 3.5.0 Reporter: Shilun Fan Assignee: Shilun Fan Our current unit tests are all executed in parallel. TestTimelineAuthFilterForV2#testPutTimelineEntities will report an error during execution: {code:java} [main] collector.PerNodeTimelineCollectorsAuxService (StringUtils.java:startupShutdownMessage(755)) - failed to register any UNIX signal loggers: java.lang.IllegalStateException: Can't re-install the signal handlers. {code} We can solve this problem by changing static initialization to new Object. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
[jira] [Updated] (YARN-11641) Can't update a queue hierarchy in absolute mode when the configured capacities are zero
[ https://issues.apache.org/jira/browse/YARN-11641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Domok updated YARN-11641: --- Attachment: hierarchy.png > Can't update a queue hierarchy in absolute mode when the configured > capacities are zero > --- > > Key: YARN-11641 > URL: https://issues.apache.org/jira/browse/YARN-11641 > Project: Hadoop YARN > Issue Type: Bug > Components: capacityscheduler >Affects Versions: 3.4.0 >Reporter: Tamas Domok >Assignee: Tamas Domok >Priority: Major > Attachments: hierarchy.png > > > h2. Error symptoms > It is not possible to modify a queue hierarchy in absolute mode when the > parent or every child queue of the parent has 0 min resource configured. > {noformat} > 2024-01-05 15:38:59,016 INFO > org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager: > Initialized queue: root.a.c > 2024-01-05 15:38:59,016 ERROR > org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices: Exception > thrown when modifying configuration. > java.io.IOException: Failed to re-init queues : Parent=root.a: When absolute > minResource is used, we must make sure both parent and child all use absolute > minResource > {noformat} > h2. 
Reproduction > capacity-scheduler.xml > {code:xml} > > > > yarn.scheduler.capacity.root.queues > default,a > > > yarn.scheduler.capacity.root.capacity > [memory=40960, vcores=16] > > > yarn.scheduler.capacity.root.default.capacity > [memory=1024, vcores=1] > > > yarn.scheduler.capacity.root.default.maximum-capacity > [memory=1024, vcores=1] > > > yarn.scheduler.capacity.root.a.capacity > [memory=0, vcores=0] > > > yarn.scheduler.capacity.root.a.maximum-capacity > [memory=39936, vcores=15] > > > yarn.scheduler.capacity.root.a.queues > b,c > > > yarn.scheduler.capacity.root.a.b.capacity > [memory=0, vcores=0] > > > yarn.scheduler.capacity.root.a.b.maximum-capacity > [memory=39936, vcores=15] > > > yarn.scheduler.capacity.root.a.c.capacity > [memory=0, vcores=0] > > > yarn.scheduler.capacity.root.a.c.maximum-capacity > [memory=39936, vcores=15] > > > {code} > updatequeue.xml > {code:xml} > > > > root.a > > > capacity > [memory=1024,vcores=1] > > > maximum-capacity > [memory=39936,vcores=15] > > > > > {code} > {code} > $ curl -X PUT -H 'Content-Type: application/xml' -d @updatequeue.xml > http://localhost:8088/ws/v1/cluster/scheduler-conf\?user.name\=yarn > Failed to re-init queues : Parent=root.a: When absolute minResource is used, > we must make sure both parent and child all use absolute minResource > {code} > h2. 
Root cause > setChildQueues is called during reinit, where: > {code:java} > void setChildQueues(Collection childQueues) throws IOException { > writeLock.lock(); > try { > boolean isLegacyQueueMode = > queueContext.getConfiguration().isLegacyQueueMode(); > if (isLegacyQueueMode) { > QueueCapacityType childrenCapacityType = > getCapacityConfigurationTypeForQueues(childQueues); > QueueCapacityType parentCapacityType = > getCapacityConfigurationTypeForQueues(ImmutableList.of(this)); > if (childrenCapacityType == QueueCapacityType.ABSOLUTE_RESOURCE > || parentCapacityType == QueueCapacityType.ABSOLUTE_RESOURCE) { > // We don't allow any mixed absolute + {weight, percentage} between > // children and parent > if (childrenCapacityType != parentCapacityType && > !this.getQueuePath() > .equals(CapacitySchedulerConfiguration.ROOT)) { > throw new IOException("Parent=" + this.getQueuePath() > + ": When absolute minResource is used, we must make sure > both " > + "parent and child all use absolute minResource"); > } > {code} > The parent or childrenCapacityType will be considered as PERCENTAGE, because > getCapacityConfigurationTypeForQueues fails to detect the absolute mode, here: > {code:java} > if > (!queue.getQueueResourceQuotas().getConfiguredMinResource(nodeLabel) > .equals(Resources.none())) { > absoluteMinResSet = true; > {code} > (It only happens in legacy queue mode.) > h2. Possible fixes > Possible fix in AbstractParentQueue.getCapacityConfigurationTypeForQueues > using the capacityVector: > {code:java} > for (CSQueue queue : queues) { > for (String nodeLabel : queueCapacities.getExistingNodeLabels()) { > Set > definedCapacityTypes = > > queue.getConfiguredCapacityVector(nodeLabel).getDefinedC
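The root cause quoted above hinges on one equality test: a queue explicitly configured with [memory=0, vcores=0] in absolute mode has a configured minimum that compares equal to Resources.none(), so the legacy check concludes absolute mode was never configured and falls back to PERCENTAGE. The following minimal, self-contained model (hypothetical types, not the Hadoop classes) reproduces just that misclassification:

```java
// Minimal model of the misdetection: an absolute minimum of
// [memory=0, vcores=0] is indistinguishable from "no absolute minimum"
// when the check is a plain equality against the "none" resource.
class MiniResource {
    final long memory;
    final int vcores;

    MiniResource(long memory, int vcores) {
        this.memory = memory;
        this.vcores = vcores;
    }

    static final MiniResource NONE = new MiniResource(0, 0);

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof MiniResource)) return false;
        MiniResource r = (MiniResource) o;
        return r.memory == memory && r.vcores == vcores;
    }

    @Override
    public int hashCode() {
        return (int) (31 * memory + vcores);
    }
}

class LegacyCapacityCheck {
    // Mirrors the quoted condition: "absolute min resource is set" only
    // when the configured minimum differs from none -- false for zeros.
    static boolean absoluteMinResSet(MiniResource configuredMin) {
        return !configuredMin.equals(MiniResource.NONE);
    }
}
```

This is why the proposed fix consults the configured capacity *vector* (which records that the value was written in absolute syntax) rather than comparing the resolved resource value against none.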
[jira] [Updated] (YARN-11641) Can't update a queue hierarchy in absolute mode when the configured capacities are zero
[ https://issues.apache.org/jira/browse/YARN-11641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tamas Domok updated YARN-11641: --- Description: h2. Error symptoms It is not possible to modify a queue hierarchy in absolute mode when the parent or every child queue of the parent has 0 min resource configured. {noformat} 2024-01-05 15:38:59,016 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager: Initialized queue: root.a.c 2024-01-05 15:38:59,016 ERROR org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices: Exception thrown when modifying configuration. java.io.IOException: Failed to re-init queues : Parent=root.a: When absolute minResource is used, we must make sure both parent and child all use absolute minResource {noformat} h2. Reproduction capacity-scheduler.xml {code:xml} yarn.scheduler.capacity.root.queues default,a yarn.scheduler.capacity.root.capacity [memory=40960, vcores=16] yarn.scheduler.capacity.root.default.capacity [memory=1024, vcores=1] yarn.scheduler.capacity.root.default.maximum-capacity [memory=1024, vcores=1] yarn.scheduler.capacity.root.a.capacity [memory=0, vcores=0] yarn.scheduler.capacity.root.a.maximum-capacity [memory=39936, vcores=15] yarn.scheduler.capacity.root.a.queues b,c yarn.scheduler.capacity.root.a.b.capacity [memory=0, vcores=0] yarn.scheduler.capacity.root.a.b.maximum-capacity [memory=39936, vcores=15] yarn.scheduler.capacity.root.a.c.capacity [memory=0, vcores=0] yarn.scheduler.capacity.root.a.c.maximum-capacity [memory=39936, vcores=15] {code} !hierarchy.png! 
updatequeue.xml {code:xml} root.a capacity [memory=1024,vcores=1] maximum-capacity [memory=39936,vcores=15] {code} {code} $ curl -X PUT -H 'Content-Type: application/xml' -d @updatequeue.xml http://localhost:8088/ws/v1/cluster/scheduler-conf\?user.name\=yarn Failed to re-init queues : Parent=root.a: When absolute minResource is used, we must make sure both parent and child all use absolute minResource {code} h2. Root cause setChildQueues is called during reinit, where: {code:java} void setChildQueues(Collection childQueues) throws IOException { writeLock.lock(); try { boolean isLegacyQueueMode = queueContext.getConfiguration().isLegacyQueueMode(); if (isLegacyQueueMode) { QueueCapacityType childrenCapacityType = getCapacityConfigurationTypeForQueues(childQueues); QueueCapacityType parentCapacityType = getCapacityConfigurationTypeForQueues(ImmutableList.of(this)); if (childrenCapacityType == QueueCapacityType.ABSOLUTE_RESOURCE || parentCapacityType == QueueCapacityType.ABSOLUTE_RESOURCE) { // We don't allow any mixed absolute + {weight, percentage} between // children and parent if (childrenCapacityType != parentCapacityType && !this.getQueuePath() .equals(CapacitySchedulerConfiguration.ROOT)) { throw new IOException("Parent=" + this.getQueuePath() + ": When absolute minResource is used, we must make sure both " + "parent and child all use absolute minResource"); } {code} The parent or childrenCapacityType will be considered as PERCENTAGE, because getCapacityConfigurationTypeForQueues fails to detect the absolute mode, here: {code:java} if (!queue.getQueueResourceQuotas().getConfiguredMinResource(nodeLabel) .equals(Resources.none())) { absoluteMinResSet = true; {code} (It only happens in legacy queue mode.) h2. 
Possible fixes Possible fix in AbstractParentQueue.getCapacityConfigurationTypeForQueues using the capacityVector: {code:java} for (CSQueue queue : queues) { for (String nodeLabel : queueCapacities.getExistingNodeLabels()) { Set definedCapacityTypes = queue.getConfiguredCapacityVector(nodeLabel).getDefinedCapacityTypes(); if (definedCapacityTypes.size() == 1) { QueueCapacityVector.ResourceUnitCapacityType next = definedCapacityTypes.iterator().next(); if (Objects.requireNonNull(next) == PERCENTAGE) { percentageIsSet = true; diagMsg.append("{Queue=").append(queue.getQueuePath()).append(", label=").append(nodeLabel) .append(" uses percentage mode}. "); } else if (next == QueueCapacityVector.ResourceUnitCapacityType.ABSOLUTE) { absoluteMinResSet = true; diagMsg.append("{Queue=").append(queue.getQueuePath()).append(", label=").append(nodeLabel) .append(" uses absolute mode}. "); } else if (next == QueueCapacityVector.ResourceUnitCapacityType.WEIGHT) { weightIsSet = tru
[jira] [Updated] (YARN-11641) Can't update a queue hierarchy in absolute mode when the configured capacities are zero
[ https://issues.apache.org/jira/browse/YARN-11641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tamas Domok updated YARN-11641:
-------------------------------
    Description: 
h2. Error symptoms

It is not possible to modify a queue hierarchy in absolute mode when the parent, or every child queue of the parent, has 0 min resource configured.

{noformat}
2024-01-05 15:38:59,016 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager: Initialized queue: root.a.c
2024-01-05 15:38:59,016 ERROR org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices: Exception thrown when modifying configuration.
java.io.IOException: Failed to re-init queues : Parent=root.a: When absolute minResource is used, we must make sure both parent and child all use absolute minResource
{noformat}

h2. Reproduction

capacity-scheduler.xml
{code:xml}
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>default,a</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.capacity</name>
  <value>[memory=40960, vcores=16]</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>[memory=1024, vcores=1]</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
  <value>[memory=1024, vcores=1]</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.capacity</name>
  <value>[memory=0, vcores=0]</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.maximum-capacity</name>
  <value>[memory=39936, vcores=15]</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.queues</name>
  <value>b,c</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.b.capacity</name>
  <value>[memory=0, vcores=0]</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.b.maximum-capacity</name>
  <value>[memory=39936, vcores=15]</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.c.capacity</name>
  <value>[memory=0, vcores=0]</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.c.maximum-capacity</name>
  <value>[memory=39936, vcores=15]</value>
</property>
{code}

updatequeue.xml
{code:xml}
<sched-conf>
  <update-queue>
    <queue-name>root.a</queue-name>
    <params>
      <entry>
        <key>capacity</key>
        <value>[memory=1024,vcores=1]</value>
      </entry>
      <entry>
        <key>maximum-capacity</key>
        <value>[memory=39936,vcores=15]</value>
      </entry>
    </params>
  </update-queue>
</sched-conf>
{code}

{code}
$ curl -X PUT -H 'Content-Type: application/xml' -d @updatequeue.xml http://localhost:8088/ws/v1/cluster/scheduler-conf\?user.name\=yarn
Failed to re-init queues : Parent=root.a: When absolute minResource is used, we must make sure both parent and child all use absolute minResource
{code}

h2. Root cause

setChildQueues is called during reinit, where:
{code:java}
void setChildQueues(Collection<CSQueue> childQueues) throws IOException {
  writeLock.lock();
  try {
    boolean isLegacyQueueMode = queueContext.getConfiguration().isLegacyQueueMode();
    if (isLegacyQueueMode) {
      QueueCapacityType childrenCapacityType =
          getCapacityConfigurationTypeForQueues(childQueues);
      QueueCapacityType parentCapacityType =
          getCapacityConfigurationTypeForQueues(ImmutableList.of(this));

      if (childrenCapacityType == QueueCapacityType.ABSOLUTE_RESOURCE
          || parentCapacityType == QueueCapacityType.ABSOLUTE_RESOURCE) {
        // We don't allow any mixed absolute + {weight, percentage} between
        // children and parent
        if (childrenCapacityType != parentCapacityType && !this.getQueuePath()
            .equals(CapacitySchedulerConfiguration.ROOT)) {
          throw new IOException("Parent=" + this.getQueuePath()
              + ": When absolute minResource is used, we must make sure both "
              + "parent and child all use absolute minResource");
        }
{code}

The parent's or the children's capacity type will be considered PERCENTAGE, because getCapacityConfigurationTypeForQueues fails to detect the absolute mode here:
{code:java}
if (!queue.getQueueResourceQuotas().getConfiguredMinResource(nodeLabel)
    .equals(Resources.none())) {
  absoluteMinResSet = true;
{code}
(It only happens in legacy queue mode.)

h2. Possible fixes

A possible fix in AbstractParentQueue.getCapacityConfigurationTypeForQueues using the capacityVector:
{code:java}
for (CSQueue queue : queues) {
  for (String nodeLabel : queueCapacities.getExistingNodeLabels()) {
    Set<QueueCapacityVector.ResourceUnitCapacityType> definedCapacityTypes =
        queue.getConfiguredCapacityVector(nodeLabel).getDefinedCapacityTypes();
    if (definedCapacityTypes.size() == 1) {
      QueueCapacityVector.ResourceUnitCapacityType next =
          definedCapacityTypes.iterator().next();
      if (Objects.requireNonNull(next) == PERCENTAGE) {
        percentageIsSet = true;
        diagMsg.append("{Queue=").append(queue.getQueuePath())
            .append(", label=").append(nodeLabel)
            .append(" uses percentage mode}. ");
      } else if (next == QueueCapacityVector.ResourceUnitCapacityType.ABSOLUTE) {
        absoluteMinResSet = true;
        diagMsg.append("{Queue=").append(queue.getQueuePath())
            .append(", label=").append(nodeLabel)
            .append(" uses absolute mode}. ");
      } else if (next == QueueCapacityVector.ResourceUnitCapacityType.WEIGHT) {
        weightIsSet = true;
        diag
{code}
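The root cause hinges on one subtlety: in absolute mode, a queue configured with [memory=0,vcores=0] has a configured min resource that is value-equal to Resources.none(), so the equality check quoted above cannot distinguish "absolute mode with zero resources" from "no absolute resource configured at all". A minimal, self-contained model of that check (the MinResource record below is an illustrative stand-in, not Hadoop's actual Resource API):

```java
import java.util.Objects;

public class ZeroAbsoluteDetection {

  // Stand-in for Hadoop's Resource; nested records are implicitly static.
  record MinResource(long memory, long vcores) { }

  // Stand-in for Resources.none(): an all-zero resource.
  static final MinResource NONE = new MinResource(0, 0);

  // Mirrors the buggy condition:
  //   !configuredMinResource.equals(Resources.none())
  static boolean detectedAsAbsolute(MinResource configuredMin) {
    return !Objects.equals(configuredMin, NONE);
  }

  public static void main(String[] args) {
    // [memory=1024,vcores=1] is detected as absolute mode...
    System.out.println(detectedAsAbsolute(new MinResource(1024, 1))); // true
    // ...but [memory=0,vcores=0] is value-equal to "none", so the queue
    // falls through and ends up classified as PERCENTAGE instead.
    System.out.println(detectedAsAbsolute(new MinResource(0, 0)));    // false
  }
}
```

This is why the reinit fails only for hierarchies where the parent or all children have zero min resources: any non-zero value makes the equality check succeed.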
[jira] [Created] (YARN-11641) Can't update a queue hierarchy in absolute mode when the configured capacities are zero
Tamas Domok created YARN-11641:
----------------------------------

             Summary: Can't update a queue hierarchy in absolute mode when the configured capacities are zero
                 Key: YARN-11641
                 URL: https://issues.apache.org/jira/browse/YARN-11641
             Project: Hadoop YARN
          Issue Type: Bug
          Components: capacityscheduler
    Affects Versions: 3.4.0
            Reporter: Tamas Domok
            Assignee: Tamas Domok

h2. Error symptoms

It is not possible to modify a queue hierarchy in absolute mode when the parent, or every child queue of the parent, has 0 min resource configured.

{noformat}
2024-01-05 15:38:59,016 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerQueueManager: Initialized queue: root.a.c
2024-01-05 15:38:59,016 ERROR org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices: Exception thrown when modifying configuration.
java.io.IOException: Failed to re-init queues : Parent=root.a: When absolute minResource is used, we must make sure both parent and child all use absolute minResource
{noformat}

h2. Reproduction

capacity-scheduler.xml
{code:xml}
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>default,a</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.capacity</name>
  <value>[memory=40960, vcores=16]</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>[memory=1024, vcores=1]</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
  <value>[memory=1024, vcores=1]</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.capacity</name>
  <value>[memory=0, vcores=0]</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.maximum-capacity</name>
  <value>[memory=39936, vcores=15]</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.queues</name>
  <value>b,c</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.b.capacity</name>
  <value>[memory=0, vcores=0]</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.b.maximum-capacity</name>
  <value>[memory=39936, vcores=15]</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.c.capacity</name>
  <value>[memory=0, vcores=0]</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.c.maximum-capacity</name>
  <value>[memory=39936, vcores=15]</value>
</property>
{code}

updatequeue.xml
{code:xml}
<sched-conf>
  <update-queue>
    <queue-name>root.a</queue-name>
    <params>
      <entry>
        <key>capacity</key>
        <value>[memory=1024,vcores=1]</value>
      </entry>
      <entry>
        <key>maximum-capacity</key>
        <value>[memory=39936,vcores=15]</value>
      </entry>
    </params>
  </update-queue>
</sched-conf>
{code}

{code}
$ curl -X PUT -H 'Content-Type: application/xml' -d @updatequeue.xml http://localhost:8088/ws/v1/cluster/scheduler-conf\?user.name\=yarn
Failed to re-init queues : Parent=root.a: When absolute minResource is used, we must make sure both parent and child all use absolute minResource
{code}

h2. Root cause

setChildQueues is called during reinit, where:
{code:java}
void setChildQueues(Collection<CSQueue> childQueues) throws IOException {
  writeLock.lock();
  try {
    boolean isLegacyQueueMode = queueContext.getConfiguration().isLegacyQueueMode();
    if (isLegacyQueueMode) {
      QueueCapacityType childrenCapacityType =
          getCapacityConfigurationTypeForQueues(childQueues);
      QueueCapacityType parentCapacityType =
          getCapacityConfigurationTypeForQueues(ImmutableList.of(this));

      if (childrenCapacityType == QueueCapacityType.ABSOLUTE_RESOURCE
          || parentCapacityType == QueueCapacityType.ABSOLUTE_RESOURCE) {
        // We don't allow any mixed absolute + {weight, percentage} between
        // children and parent
        if (childrenCapacityType != parentCapacityType && !this.getQueuePath()
            .equals(CapacitySchedulerConfiguration.ROOT)) {
          throw new IOException("Parent=" + this.getQueuePath()
              + ": When absolute minResource is used, we must make sure both "
              + "parent and child all use absolute minResource");
        }
{code}

The parent's or the children's capacity type will be considered PERCENTAGE, because getCapacityConfigurationTypeForQueues fails to detect the absolute mode here:
{code:java}
if (!queue.getQueueResourceQuotas().getConfiguredMinResource(nodeLabel)
    .equals(Resources.none())) {
  absoluteMinResSet = true;
{code}

h2. Possible fixes

A possible fix in AbstractParentQueue.getCapacityConfigurationTypeForQueues using the capacityVector:
{code:java}
for (CSQueue queue : queues) {
  for (String nodeLabel : queueCapacities.getExistingNodeLabels()) {
    Set<QueueCapacityVector.ResourceUnitCapacityType> definedCapacityTypes =
        queue.getConfiguredCapacityVector(nodeLabel).getDefinedCapacityTypes();
    if (definedCapacityTypes.size() == 1) {
      QueueCapacityVector.ResourceUnitCapacityType next =
          definedCapacityTypes.iterator().next();
      if (Objects.requireNonNull(next) == PERCENTAGE) {
        percentageIsSet = true;
        diagMsg.append("{Queue=").append(queue.getQueuePath())
            .append(", label=").append(nodeLabel)
            .append(" uses percentage mode}. ");
      } else if (next == QueueCapacityVector.ResourceUnitCapacityType.ABSOLUTE) {
        absoluteMinResSet = true;
        diagMsg.append("{Queue=").append(queue.getQueuePath()).append(",
{code}
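The capacity-vector approach in the proposed fix avoids the zero-value pitfall because the vector records how each capacity was configured, independently of its value, so a zero absolute capacity still reports ABSOLUTE. A simplified stand-alone illustration of that idea (the parse rules and names below are stand-ins, not the real QueueCapacityVector API): bracketed capacities are absolute, a trailing `w` marks a weight, and a plain number is a percentage.

```java
import java.util.EnumSet;
import java.util.Set;

public class CapacityTypeFromVector {

  enum CapacityType { PERCENTAGE, ABSOLUTE, WEIGHT }

  // Classify by syntax, not by value: unlike the Resources.none()
  // comparison, "[memory=0, vcores=0]" still classifies as ABSOLUTE.
  static Set<CapacityType> definedTypes(String configuredCapacity) {
    String c = configuredCapacity.trim();
    if (c.startsWith("[") && c.endsWith("]")) {
      return EnumSet.of(CapacityType.ABSOLUTE);
    }
    if (c.endsWith("w")) {
      return EnumSet.of(CapacityType.WEIGHT);
    }
    return EnumSet.of(CapacityType.PERCENTAGE);
  }

  public static void main(String[] args) {
    System.out.println(definedTypes("[memory=0, vcores=0]")); // [ABSOLUTE]
    System.out.println(definedTypes("50"));                   // [PERCENTAGE]
    System.out.println(definedTypes("3w"));                   // [WEIGHT]
  }
}
```

Classifying from the configured syntax is exactly what makes the fix robust for the zero-capacity hierarchy in the reproduction: root.a and its children keep their ABSOLUTE classification even though their min resources are all zero.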
[jira] [Commented] (YARN-11622) ResourceManager asynchronous switch from Standy to Active exception
[ https://issues.apache.org/jira/browse/YARN-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803526#comment-17803526 ]

ASF GitHub Bot commented on YARN-11622:
---------------------------------------

hadoop-yetus commented on PR #6352:
URL: https://github.com/apache/hadoop/pull/6352#issuecomment-1878567308

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 21s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ branch-3.3 Compile Tests _ |
| +1 :green_heart: | mvninstall | 66m 37s | | branch-3.3 passed |
| +1 :green_heart: | compile | 0m 33s | | branch-3.3 passed |
| +1 :green_heart: | checkstyle | 0m 27s | | branch-3.3 passed |
| +1 :green_heart: | mvnsite | 0m 37s | | branch-3.3 passed |
| +1 :green_heart: | javadoc | 2m 20s | | branch-3.3 passed |
| +1 :green_heart: | spotbugs | 1m 12s | | branch-3.3 passed |
| +1 :green_heart: | shadedclient | 22m 16s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 31s | | the patch passed |
| +1 :green_heart: | compile | 0m 27s | | the patch passed |
| +1 :green_heart: | javac | 0m 27s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 20s | | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 66 unchanged - 1 fixed = 66 total (was 67) |
| +1 :green_heart: | mvnsite | 0m 30s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 21s | | the patch passed |
| -1 :x: | spotbugs | 1m 14s | [/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6352/9/artifact/out/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 :green_heart: | shadedclient | 21m 34s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 75m 34s | | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 :green_heart: | asflicense | 0m 23s | | The patch does not generate ASF License warnings. |
| | | 196m 13s | | |

| Reason | Tests |
|-------:|:------|
| SpotBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| | Exceptional return value of java.util.concurrent.ExecutorService.submit(Callable) ignored in org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.handleTransitionToStandByInNewThread() At ResourceManager.java:ignored in org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.handleTransitionToStandByInNewThread() At ResourceManager.java:[line 1131] |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6352/9/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6352 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 8243ad94cb2d 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-3.3 / b1202a8f8f6e6d94a0319dfa54264a0a31e3825a |
| Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~18.04-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6352/9/testReport/ |
| Max. process+thread count | 939 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
| Console output | https://ci-hadoop.ap