Re: [PR] HDFS-17317. DebugAdmin metaOut not need multiple close [hadoop]
xuzifu666 commented on PR #6402: URL: https://github.com/apache/hadoop/pull/6402#issuecomment-1879586686 @ayushtkn Hi, how about the re-run CI result? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HDFS-17302. RBF: ProportionRouterRpcFairnessPolicyController-Sharing and isolation. [hadoop]
hadoop-yetus commented on PR #6380: URL: https://github.com/apache/hadoop/pull/6380#issuecomment-1879579502 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 49s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 46m 33s | | trunk passed | | +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 29s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 40s | | trunk passed | | +1 :green_heart: | javadoc | 0m 42s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 30s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 22s | | trunk passed | | +1 :green_heart: | shadedclient | 38m 14s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 38m 34s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 32s | | the patch passed | | +1 :green_heart: | compile | 0m 32s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 32s | | the patch passed | | +1 :green_heart: | compile | 0m 29s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 29s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 18s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 31s | | the patch passed | | +1 :green_heart: | javadoc | 0m 28s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 21s | | the patch passed | | +1 :green_heart: | shadedclient | 38m 36s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 22m 57s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. 
| | | | 161m 34s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6380/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6380 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 58649d70f692 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 64fb454c2b447cbda6d0e134edd4b18cebaa29cf | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6380/3/testReport/ | | Max. process+thread count | 2376 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6380/3/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
Re: [PR] YARN-11642. Fix Flaky Test TestTimelineAuthFilterForV2#testPutTimelineEntities. [hadoop]
slfan1989 commented on PR #6417: URL: https://github.com/apache/hadoop/pull/6417#issuecomment-1879572817 @ayushtkn Can you help review this PR? Thank you very much!
Re: [PR] YARN-11634. [Addendum] Speed-up TestTimelineClient. [hadoop]
hadoop-yetus commented on PR #6419: URL: https://github.com/apache/hadoop/pull/6419#issuecomment-1879571378 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 20s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 38s | | trunk passed | | +1 :green_heart: | compile | 0m 24s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 20s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 23s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 28s | | trunk passed | | +1 :green_heart: | javadoc | 0m 37s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 31s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | -1 :x: | spotbugs | 1m 0s | [/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6419/1/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-warnings.html) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in trunk has 1 extant spotbugs warnings. | | +1 :green_heart: | shadedclient | 19m 37s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 23s | | the patch passed | | +1 :green_heart: | compile | 0m 21s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 21s | | the patch passed | | +1 :green_heart: | compile | 0m 19s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 19s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 16s | | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: The patch generated 0 new + 24 unchanged - 2 fixed = 24 total (was 26) | | +1 :green_heart: | mvnsite | 0m 20s | | the patch passed | | +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 27s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 1m 1s | | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) | | +1 :green_heart: | shadedclient | 19m 37s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 4m 30s | | hadoop-yarn-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 24s | | The patch does not generate ASF License warnings. 
| | | | 84m 44s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6419/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6419 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 4bf2cc097043 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / b4193ad58b4d46cf59d74721a08ea51fd25d997a | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6419/1/testReport/ | | Max. process+thread count | 558 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common | | Console output | https://ci-hadoop.apache.org/job/had
Re: [PR] YARN-11642. Fix Flaky Test TestTimelineAuthFilterForV2#testPutTimelineEntities. [hadoop]
hadoop-yetus commented on PR #6417: URL: https://github.com/apache/hadoop/pull/6417#issuecomment-1879569732 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 49s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 48m 22s | | trunk passed | | +1 :green_heart: | compile | 0m 27s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 25s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 25s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 29s | | trunk passed | | +1 :green_heart: | javadoc | 0m 28s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 22s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 33s | | branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 37m 14s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 19s | | the patch passed | | +1 :green_heart: | compile | 0m 19s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 19s | | the patch passed | | +1 :green_heart: | compile | 0m 17s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 17s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 14s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 19s | | the patch passed | | +1 :green_heart: | javadoc | 0m 15s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 14s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 19s | | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests has no data from spotbugs | | +1 :green_heart: | shadedclient | 37m 0s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 3m 17s | | hadoop-yarn-server-tests in the patch passed. | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. 
| | | | 137m 25s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6417/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6417 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux b5c2b648e353 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 8f052ae6a9fd176248fe982425f7644791e797be | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6417/1/testReport/ | | Max. process+thread count | 623 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6417/1/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
Re: [PR] Preparing for 3.5.0 development [hadoop]
slfan1989 commented on PR #6411: URL: https://github.com/apache/hadoop/pull/6411#issuecomment-1879561986 The SpotBugs report shows 2 warnings: 1. The first one is caused by YARN-11634 (#6371), and I have submitted a PR to fix it. ``` Code Warning MS org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT isn't final but should be [Bug type MS_SHOULD_BE_FINAL (click for details)](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6411/1/artifact/out/branch-spotbugs-root-warnings.html#MS_SHOULD_BE_FINAL) In class org.apache.hadoop.yarn.client.api.impl.TimelineConnector Field org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT At TimelineConnector.java:[line 82] ```
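The warning quoted above is the standard SpotBugs MS_SHOULD_BE_FINAL pattern: a mutable public static field can be reassigned by any class. A minimal, hypothetical sketch of the kind of fix involved (the field name matches the report, but the value and surrounding class are invented for illustration):

```java
public class TimelineConnectorSketch {
    // Before (flagged by SpotBugs as MS_SHOULD_BE_FINAL):
    //   public static int DEFAULT_SOCKET_TIMEOUT = 60 * 1000;
    // After: declaring the field final removes the warning, since the
    // constant can no longer be reassigned from outside the class.
    public static final int DEFAULT_SOCKET_TIMEOUT = 60 * 1000; // illustrative value

    public static void main(String[] args) {
        System.out.println(DEFAULT_SOCKET_TIMEOUT); // prints 60000
    }
}
```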
Re: [PR] HDFS-17326. Fix NameNode Spotbug. [hadoop]
hadoop-yetus commented on PR #6420: URL: https://github.com/apache/hadoop/pull/6420#issuecomment-1879561355 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 0s | | Docker mode activated. | | -1 :x: | patch | 0m 14s | | https://github.com/apache/hadoop/pull/6420 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hadoop/pull/6420 | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6420/1/console | | versions | git=2.34.1 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
[PR] HDFS-17326. Fix NameNode Spotbugs. [hadoop]
slfan1989 opened a new pull request, #6420: URL: https://github.com/apache/hadoop/pull/6420 ### Description of PR JIRA: HDFS-17326. Fix NameNode Spotbugs. While preparing for 3.5.0 development, the report showed that there was a SpotBugs warning in NameNode. [SpotBugs Report](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6411/1/artifact/out/branch-spotbugs-root-warnings.html) ``` Code Warning DLS Dead store to sharedDirs in org.apache.hadoop.hdfs.server.namenode.NameNode.format(Configuration, boolean, boolean) [Bug type DLS_DEAD_LOCAL_STORE (click for details)](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6411/1/artifact/out/branch-spotbugs-root-warnings.html#DLS_DEAD_LOCAL_STORE) In class org.apache.hadoop.hdfs.server.namenode.NameNode In method org.apache.hadoop.hdfs.server.namenode.NameNode.format(Configuration, boolean, boolean) Local variable named sharedDirs At NameNode.java:[line 1383] ``` ### How was this patch tested? ### For code changes: - [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
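For context, DLS_DEAD_LOCAL_STORE flags a local variable that is assigned but never read afterwards, so the store is dead code. A hedged sketch of the pattern and its usual fix (the method shape and names below are illustrative, not the actual NameNode code):

```java
import java.util.Arrays;
import java.util.List;

public class DeadStoreSketch {
    // Hypothetical shape of a DLS_DEAD_LOCAL_STORE warning like the one
    // reported for NameNode.format(...): a local was written, never read.
    static boolean format(boolean force) {
        // Before (flagged): the result of the call was stored but never used:
        //   List<String> sharedDirs = loadDirs();
        // After: keep the call only for its side effects and drop the
        // dead local entirely.
        loadDirs(); // hypothetical call retained without the unused store
        return !force;
    }

    static List<String> loadDirs() {
        return Arrays.asList("/shared/edits"); // illustrative value
    }

    public static void main(String[] args) {
        System.out.println(format(false)); // prints true
    }
}
```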
[jira] [Commented] (HADOOP-19029) Migrate abstract permission tests to AssertJ
[ https://issues.apache.org/jira/browse/HADOOP-19029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803756#comment-17803756 ] ASF GitHub Bot commented on HADOOP-19029: - slfan1989 commented on PR #6418: URL: https://github.com/apache/hadoop/pull/6418#issuecomment-1879554146 @huangzhaobo99 Thank you for your interest in upgrading the unit tests, but we cannot modify them one by one like this. I plan to start a discussion in due course about upgrading the unit tests module by module. If you would like to participate in this upgrade, we can contribute to it later. > Migrate abstract permission tests to AssertJ > > > Key: HADOOP-19029 > URL: https://issues.apache.org/jira/browse/HADOOP-19029 > Project: Hadoop Common > Issue Type: Improvement > Reporter: huangzhaobo > Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-19029. Migrate abstract permission tests to AssertJ [hadoop]
slfan1989 commented on PR #6418: URL: https://github.com/apache/hadoop/pull/6418#issuecomment-1879554146 @huangzhaobo99 Thank you for your interest in upgrading the unit tests, but we cannot modify them one by one like this. I plan to start a discussion in due course about upgrading the unit tests module by module. If you would like to participate in this upgrade, we can contribute to it later.
Re: [PR] YARN-11634. [Addendum] Speed-up TestTimelineClient. [hadoop]
slfan1989 commented on PR #6419: URL: https://github.com/apache/hadoop/pull/6419#issuecomment-1879552862 @brumi1024 @K0K0V0K In #6371, we introduced a SpotBugs warning. I have tried to fix the code; can you help review this PR? Thank you very much! [ReportUrl](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6411/1/artifact/out/branch-spotbugs-hadoop-yarn-project-warnings.html)
[PR] YARN-11634. [Addendum] Speed-up TestTimelineClient. [hadoop]
slfan1989 opened a new pull request, #6419: URL: https://github.com/apache/hadoop/pull/6419 ### Description of PR JIRA: YARN-11634. [Addendum] Speed-up TestTimelineClient. ### How was this patch tested? ### For code changes: - [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[jira] [Commented] (HADOOP-19029) Migrate abstract permission tests to AssertJ
[ https://issues.apache.org/jira/browse/HADOOP-19029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803753#comment-17803753 ] ASF GitHub Bot commented on HADOOP-19029: - huangzhaobo99 opened a new pull request, #6418: URL: https://github.com/apache/hadoop/pull/6418 ### Description of PR JIRA: https://issues.apache.org/jira/browse/HADOOP-19029 ### How was this patch tested? > Migrate abstract permission tests to AssertJ > > > Key: HADOOP-19029 > URL: https://issues.apache.org/jira/browse/HADOOP-19029 > Project: Hadoop Common > Issue Type: Improvement > Reporter: huangzhaobo > Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HADOOP-19029) Migrate abstract permission tests to AssertJ
[ https://issues.apache.org/jira/browse/HADOOP-19029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-19029: Labels: pull-request-available (was: ) > Migrate abstract permission tests to AssertJ > > > Key: HADOOP-19029 > URL: https://issues.apache.org/jira/browse/HADOOP-19029 > Project: Hadoop Common > Issue Type: Improvement > Reporter: huangzhaobo > Priority: Major > Labels: pull-request-available > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[PR] HADOOP-19029. Migrate abstract permission tests to AssertJ [hadoop]
huangzhaobo99 opened a new pull request, #6418: URL: https://github.com/apache/hadoop/pull/6418 ### Description of PR JIRA: https://issues.apache.org/jira/browse/HADOOP-19029 ### How was this patch tested?
Re: [PR] Preparing for 3.5.0 development [hadoop]
hadoop-yetus commented on PR #6411: URL: https://github.com/apache/hadoop/pull/6411#issuecomment-1879545671 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 35s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 2s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 1s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 30m 16s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 32m 12s | | trunk passed | | +1 :green_heart: | compile | 16m 29s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 15m 26s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 4m 16s | | trunk passed | | +1 :green_heart: | mvnsite | 18m 48s | | trunk passed | | +1 :green_heart: | javadoc | 8m 31s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 7m 34s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 20s | | branch/hadoop-build-tools no spotbugs output file (spotbugsXml.xml) | | +0 :ok: | spotbugs | 0m 19s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +0 :ok: | spotbugs | 0m 19s | | branch/hadoop-project-dist no spotbugs output file (spotbugsXml.xml) | | +0 :ok: | spotbugs | 0m 19s | | branch/hadoop-assemblies no spotbugs output file (spotbugsXml.xml) | | -1 :x: | spotbugs | 3m 8s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6411/1/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html) | hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant spotbugs warnings. | | +0 :ok: | spotbugs | 0m 21s | | branch/hadoop-hdfs-project/hadoop-hdfs-native-client no spotbugs output file (spotbugsXml.xml) | | -1 :x: | spotbugs | 10m 53s | [/branch-spotbugs-hadoop-yarn-project_hadoop-yarn-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6411/1/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn-warnings.html) | hadoop-yarn-project/hadoop-yarn in trunk has 1 extant spotbugs warnings. | | -1 :x: | spotbugs | 1m 34s | [/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6411/1/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-warnings.html) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in trunk has 1 extant spotbugs warnings. | | +0 :ok: | spotbugs | 0m 27s | | branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests no spotbugs output file (spotbugsXml.xml) | | +0 :ok: | spotbugs | 0m 25s | | branch/hadoop-minicluster no spotbugs output file (spotbugsXml.xml) | | -1 :x: | spotbugs | 6m 27s | [/branch-spotbugs-hadoop-hdfs-project-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6411/1/artifact/out/branch-spotbugs-hadoop-hdfs-project-warnings.html) | hadoop-hdfs-project in trunk has 1 extant spotbugs warnings. 
| | +0 :ok: | spotbugs | 0m 24s | | branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-docker no spotbugs output file (spotbugsXml.xml) | | +0 :ok: | spotbugs | 0m 25s | | branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site no spotbugs output file (spotbugsXml.xml) | | +0 :ok: | spotbugs | 0m 25s | | branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry no spotbugs output file (spotbugsXml.xml) | | +0 :ok: | spotbugs | 0m 24s | | branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui no spotbugs output file (spotbugsXml.xml) | | -1 :x: | spotbugs | 10m 32s | [/branch-spotbugs-hadoop-yarn-project-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6411/1/artifact/out/branch-spotbugs-hadoop-yarn-project-warnings.html) | hadoop-yarn-project in trunk has 1 extant spotbugs warnings. | | +0 :ok: | spotbugs | 0m 26s | | branch/hadoop-client-modules/hadoop-client no spotbugs output file (spotb
Re: [PR] HDFS-17302. RBF: ProportionRouterRpcFairnessPolicyController-Sharing and isolation. [hadoop]
KeeProMise commented on code in PR #6380: URL: https://github.com/apache/hadoop/pull/6380#discussion_r1443614558 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/fairness/TestRouterHandlersFairness.java: ## @@ -308,4 +375,28 @@ private void innerCalls(URI address, int numOps, boolean isConcurrent, overloadException.get(); } } + + private static Map<String, Integer> expectedHandlerPerNs(String str) { +Map<String, Integer> handlersPerNsMap = new HashMap<>(); +if (str != null) { + String[] tmpStrs = str.split(", "); + for (String tmpStr : tmpStrs) { +String[] handlersPerNs = tmpStr.split(":"); +handlersPerNsMap.put(handlersPerNs[0], Integer.valueOf(handlersPerNs[1])); + } +} +return handlersPerNsMap; + } + + private static Map<String, String> setConfiguration(String str) { +Map<String, String> conf = new HashMap<>(); +if (str != null) { Review Comment: Thank you for your advice, done. ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/fairness/TestRouterHandlersFairness.java: ## @@ -308,4 +375,28 @@ private void innerCalls(URI address, int numOps, boolean isConcurrent, overloadException.get(); } } + + private static Map<String, Integer> expectedHandlerPerNs(String str) { +Map<String, Integer> handlersPerNsMap = new HashMap<>(); +if (str != null) { + String[] tmpStrs = str.split(", "); + for (String tmpStr : tmpStrs) { +String[] handlersPerNs = tmpStr.split(":"); +handlersPerNsMap.put(handlersPerNs[0], Integer.valueOf(handlersPerNs[1])); + } +} +return handlersPerNsMap; + } + + private static Map<String, String> setConfiguration(String str) { +Map<String, String> conf = new HashMap<>(); +if (str != null) { + String[] tmpStrs = str.split(", "); + for (String tmpStr : tmpStrs) { +String[] configKV = tmpStr.split("="); +conf.put(configKV[0], configKV[1]); Review Comment: done.
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
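The two helpers quoted in the review above parse compact per-nameservice spec strings, e.g. "ns0:2, ns1:3" for expected handler counts. A standalone sketch of that parsing convention (the class name here is illustrative, not part of the patch):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the spec-string convention used by the quoted test helpers:
// "ns0:2, ns1:3" -> {ns0=2, ns1=3}. Entries are separated by comma+space,
// key and value are separated by ':'.
public class HandlerSpecParser {
    public static Map<String, Integer> expectedHandlerPerNs(String str) {
        Map<String, Integer> handlersPerNs = new HashMap<>();
        if (str != null) {
            for (String pair : str.split(", ")) {
                String[] kv = pair.split(":");
                handlersPerNs.put(kv[0], Integer.valueOf(kv[1]));
            }
        }
        return handlersPerNs;
    }

    public static void main(String[] args) {
        // Prints the parsed map; iteration order of HashMap is unspecified.
        System.out.println(expectedHandlerPerNs("ns0:2, ns1:3"));
    }
}
```

A null spec yields an empty map, matching the null guard in the helpers under review.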
Re: [PR] HDFS-17302. RBF: ProportionRouterRpcFairnessPolicyController-Sharing and isolation. [hadoop]
KeeProMise commented on code in PR #6380: URL: https://github.com/apache/hadoop/pull/6380#discussion_r1443614602 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/fairness/ProportionRouterRpcFairnessPolicyController.java: ## @@ -0,0 +1,101 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.hdfs.server.federation.fairness; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hdfs.server.federation.router.FederationUtil; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import java.util.Set; + +import static org.apache.hadoop.hdfs.server.federation.fairness.RouterRpcFairnessConstants.CONCURRENT_NS; +import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_FAIR_HANDLER_PROPORTION_DEFAULT; +import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_FAIR_HANDLER_PROPORTION_KEY_PREFIX; +import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HANDLER_COUNT_DEFAULT; +import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HANDLER_COUNT_KEY; + +/** + * Proportion fairness policy extending {@link AbstractRouterRpcFairnessPolicyController} + * and fetching proportion of handlers from configuration for all available name services, + * based on the proportion and the total number of handlers, calculate the handlers of all ns. + * The handlers count will not change for this controller. + */ +public class ProportionRouterRpcFairnessPolicyController extends +AbstractRouterRpcFairnessPolicyController{ + + private static final Logger LOG = + LoggerFactory.getLogger(ProportionRouterRpcFairnessPolicyController.class); + // For unregistered ns, the default ns is used, + // so the configuration can be simplified if the handler ratio of all ns is 1, + // and transparent expansion of new ns can be supported. + private static final String DEFAULT_NS = "default_ns"; + + public ProportionRouterRpcFairnessPolicyController(Configuration conf){ +init(conf); + } + + @Override + public void init(Configuration conf) { +super.init(conf); +// Total handlers configured to process all incoming Rpc. 
+int handlerCount = conf.getInt(DFS_ROUTER_HANDLER_COUNT_KEY, DFS_ROUTER_HANDLER_COUNT_DEFAULT); + +LOG.info("Handlers available for fairness assignment {} ", handlerCount); + +// Get all name services configured +Set<String> allConfiguredNS = FederationUtil.getAllConfiguredNS(conf); + +// Insert the concurrent nameservice into the set to process together +allConfiguredNS.add(CONCURRENT_NS); + +// Insert the default nameservice into the set to process together +allConfiguredNS.add(DEFAULT_NS); +for (String nsId : allConfiguredNS) { + double dedicatedHandlerProportion = conf.getDouble( + DFS_ROUTER_FAIR_HANDLER_PROPORTION_KEY_PREFIX + nsId, +DFS_ROUTER_FAIR_HANDLER_PROPORTION_DEFAULT); + int dedicatedHandlers = (int) (dedicatedHandlerProportion * handlerCount); + LOG.info("Dedicated handlers {} for ns {} ", dedicatedHandlers, nsId); + // Each NS should have at least one handler assigned. + if (dedicatedHandlers <= 0) { +dedicatedHandlers = 1; + } + insertNameServiceWithPermits(nsId, dedicatedHandlers); + LOG.info("Assigned {} handlers to nsId {} ", dedicatedHandlers, nsId); +} + } + + @Override + public boolean acquirePermit(String nsId) { +if (contains(nsId)) { + return super.acquirePermit(nsId); +}else { Review Comment: done.
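The allocation rule in the controller quoted above reduces to one arithmetic step per nameservice; a hedged standalone sketch (class and method names are illustrative):

```java
// Illustrative sketch of the allocation in the quoted controller:
// each nameservice receives floor(proportion * totalHandlers), floored
// at 1 so every ns keeps at least one dedicated handler.
public class ProportionAllocation {
    public static int dedicatedHandlers(double proportion, int handlerCount) {
        int dedicated = (int) (proportion * handlerCount);
        if (dedicated <= 0) {
            dedicated = 1; // each NS should have at least one handler assigned
        }
        return dedicated;
    }

    public static void main(String[] args) {
        System.out.println(dedicatedHandlers(0.2, 10));  // 2
        System.out.println(dedicatedHandlers(0.01, 10)); // floored up to 1
    }
}
```

Note that the proportions are read per-ns; the quoted code shows no normalization step, so nothing forces them to sum to 1 across nameservices.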
Re: [PR] YARN-11631. [GPG] Add GPGWebServices. [hadoop]
hadoop-yetus commented on PR #6354: URL: https://github.com/apache/hadoop/pull/6354#issuecomment-1879544980 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 33s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 41m 40s | | trunk passed | | +1 :green_heart: | compile | 0m 25s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 25s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 25s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 30s | | trunk passed | | +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 45s | | trunk passed | | +1 :green_heart: | shadedclient | 32m 3s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 19s | | the patch passed | | +1 :green_heart: | compile | 0m 18s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 18s | | the patch passed | | +1 :green_heart: | compile | 0m 17s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 17s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 14s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-globalpolicygenerator.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6354/8/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-globalpolicygenerator.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator: The patch generated 7 new + 0 unchanged - 0 fixed = 7 total (was 0) | | +1 :green_heart: | mvnsite | 0m 19s | | the patch passed | | +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 18s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 44s | | the patch passed | | +1 :green_heart: | shadedclient | 33m 19s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 59s | | hadoop-yarn-server-globalpolicygenerator in the patch passed. | | +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. 
| | | | 119m 40s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6354/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6354 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle | | uname | Linux 2174a85c0b1d 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 737ce5c8e495cf78e25e26aa854a82e3330ee692 | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6354/8/testReport/ | | Max. process+thread count | 555 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator U: hadoop-yarn-project/hadoop-y
[PR] YARN-11642. Fix Flaky Test TestTimelineAuthFilterForV2#testPutTimelineEntities. [hadoop]
slfan1989 opened a new pull request, #6417: URL: https://github.com/apache/hadoop/pull/6417 ### Description of PR JIRA: Fix Flaky Test TestTimelineAuthFilterForV2#testPutTimelineEntities. ### How was this patch tested? ### For code changes: - [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
[jira] [Commented] (HADOOP-19024) change to bouncy castle jdk1.8 jars
[ https://issues.apache.org/jira/browse/HADOOP-19024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803740#comment-17803740 ] ASF GitHub Bot commented on HADOOP-19024: - slfan1989 commented on PR #6410: URL: https://github.com/apache/hadoop/pull/6410#issuecomment-1879513161 @pjfanning Thank you for your contribution! LGTM. > change to bouncy castle jdk1.8 jars > --- > > Key: HADOOP-19024 > URL: https://issues.apache.org/jira/browse/HADOOP-19024 > Project: Hadoop Common > Issue Type: Task >Reporter: PJ Fanning >Priority: Major > Labels: pull-request-available > > They have stopped patching the JDK 1.5 jars that Hadoop uses (see > https://issues.apache.org/jira/browse/HADOOP-18540). > The new artifacts have similar names - but the names are like bcprov-jdk18on > as opposed to bcprov-jdk15on. > CVE-2023-33201 is an example of a security issue that seems only to be fixed > in the JDK 1.8 artifacts (ie no JDK 1.5 jar has the fix). > https://www.bouncycastle.org/releasenotes.html#r1rv77 latest current release > but the CVE was fixed in 1.74. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-19024. Use bouncycastle jdk18 1.77 [hadoop]
slfan1989 commented on PR #6410: URL: https://github.com/apache/hadoop/pull/6410#issuecomment-1879513161 @pjfanning Thank you for your contribution! LGTM.
[jira] [Created] (HADOOP-19029) Migrate abstract permission tests to AssertJ
huangzhaobo created HADOOP-19029: Summary: Migrate abstract permission tests to AssertJ Key: HADOOP-19029 URL: https://issues.apache.org/jira/browse/HADOOP-19029 Project: Hadoop Common Issue Type: Improvement Reporter: huangzhaobo
[jira] [Commented] (HADOOP-19025) Migrate abstract contract tests to AssertJ
[ https://issues.apache.org/jira/browse/HADOOP-19025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803739#comment-17803739 ] huangzhaobo commented on HADOOP-19025: -- Thanks [~adoroszlai], I happen to be learning about ACL, and I will update the `hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/permission` module. > Migrate abstract contract tests to AssertJ > -- > > Key: HADOOP-19025 > URL: https://issues.apache.org/jira/browse/HADOOP-19025 > Project: Hadoop Common > Issue Type: Improvement > Components: test >Reporter: Attila Doroszlai >Assignee: Attila Doroszlai >Priority: Major > Labels: pull-request-available > > Replace assertions from JUnit4 with equivalent functionality from AssertJ, to > make {{ContractTestUtils}} independent of JUnit version.
[jira] [Commented] (HADOOP-19025) Migrate abstract contract tests to AssertJ
[ https://issues.apache.org/jira/browse/HADOOP-19025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803714#comment-17803714 ] ASF GitHub Bot commented on HADOOP-19025: - adoroszlai commented on PR #6415: URL: https://github.com/apache/hadoop/pull/6415#issuecomment-1879318637 Test failures in Jenkins run are unrelated:
```
[ERROR] Errors:
[ERROR] org.apache.hadoop.security.TestRaceWhenRelogin.test(org.apache.hadoop.security.TestRaceWhenRelogin)
[ERROR] Run 1: TestRaceWhenRelogin.setUp:84 » Krb Failed to load or create keytab /home/jenki...
[ERROR] Run 2: TestRaceWhenRelogin.setUp:90 » KerberosAuth failure to login: for principal: c...
[ERROR] Run 3: TestRaceWhenRelogin.setUp:84 » IllegalArgument The value contains non ASCII ch...
```
> Migrate abstract contract tests to AssertJ > -- > > Key: HADOOP-19025 > URL: https://issues.apache.org/jira/browse/HADOOP-19025 > Project: Hadoop Common > Issue Type: Improvement > Components: test >Reporter: Attila Doroszlai >Assignee: Attila Doroszlai >Priority: Major > Labels: pull-request-available > > Replace assertions from JUnit4 with equivalent functionality from AssertJ, to > make {{ContractTestUtils}} independent of JUnit version.
Re: [PR] HADOOP-19025. Migrate contract tests in hadoop-common to AssertJ [hadoop]
adoroszlai commented on PR #6415: URL: https://github.com/apache/hadoop/pull/6415#issuecomment-1879318637 Test failures in Jenkins run are unrelated:
```
[ERROR] Errors:
[ERROR] org.apache.hadoop.security.TestRaceWhenRelogin.test(org.apache.hadoop.security.TestRaceWhenRelogin)
[ERROR] Run 1: TestRaceWhenRelogin.setUp:84 » Krb Failed to load or create keytab /home/jenki...
[ERROR] Run 2: TestRaceWhenRelogin.setUp:90 » KerberosAuth failure to login: for principal: c...
[ERROR] Run 3: TestRaceWhenRelogin.setUp:84 » IllegalArgument The value contains non ASCII ch...
```
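The HADOOP-19025 migration discussed above swaps JUnit4 assertions for AssertJ's fluent style. To keep the sketch below runnable without the assertj-core jar, a minimal stand-in for `assertThat` is defined here; real migrated code would use `org.assertj.core.api.Assertions.assertThat` instead:

```java
// Minimal stand-in illustrating the fluent AssertJ style the migration
// adopts. This is NOT the AssertJ API, only a self-contained sketch of it.
public class AssertJStyleSketch {
    public static StringAssert assertThat(String actual) {
        return new StringAssert(actual);
    }

    public static class StringAssert {
        private final String actual;
        StringAssert(String actual) { this.actual = actual; }
        public StringAssert isEqualTo(String expected) {
            if (!expected.equals(actual)) {
                throw new AssertionError("expected <" + expected + "> but was <" + actual + ">");
            }
            return this; // returning this is what makes the style chainable
        }
    }

    public static void main(String[] args) {
        // JUnit4 style: assertEquals("rw-r--r--", perm);
        // AssertJ style, independent of the JUnit version on the classpath:
        assertThat("rw-r--r--").isEqualTo("rw-r--r--");
    }
}
```

The point of the migration is exactly this decoupling: test utilities that assert via the fluent API no longer pin a JUnit major version.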
[jira] [Commented] (HADOOP-18883) Expect-100 JDK bug resolution: prevent multiple server calls
[ https://issues.apache.org/jira/browse/HADOOP-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803712#comment-17803712 ] ASF GitHub Bot commented on HADOOP-18883: - mukund-thakur commented on code in PR #6022: URL: https://github.com/apache/hadoop/pull/6022#discussion_r1443402179 ## hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java: ## @@ -340,8 +344,11 @@ public void sendRequest(byte[] buffer, int offset, int length) throws IOExceptio If expect header is not enabled, we throw back the exception. */ String expectHeader = getConnProperty(EXPECT); -if (expectHeader != null && expectHeader.equals(HUNDRED_CONTINUE)) { +if (expectHeader != null && expectHeader.equals(HUNDRED_CONTINUE) +&& e instanceof ProtocolException +&& EXPECT_100_JDK_ERROR.equals(e.getMessage())) { Review Comment: I guess the question by @snvijaya is Do we want to prevent later API calls that trigger connections irrespective of any failures? If yes then why? > Expect-100 JDK bug resolution: prevent multiple server calls > > > Key: HADOOP-18883 > URL: https://issues.apache.org/jira/browse/HADOOP-18883 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Pranav Saxena >Assignee: Pranav Saxena >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > > This is inline to JDK bug: [https://bugs.openjdk.org/browse/JDK-8314978]. > > With the current implementation of HttpURLConnection if server rejects the > “Expect 100-continue” then there will be ‘java.net.ProtocolException’ will be > thrown from 'expect100Continue()' method. > After the exception thrown, If we call any other method on the same instance > (ex getHeaderField(), or getHeaderFields()). They will internally call > getOuputStream() which invokes writeRequests(), which make the actual server > call. > In the AbfsHttpOperation, after sendRequest() we call processResponse() > method from AbfsRestOperation. 
Even if the conn.getOutputStream() fails due > to expect-100 error, we consume the exception and let the code go ahead. So, > we can have getHeaderField() / getHeaderFields() / getHeaderFieldLong() which > will be triggered after getOutputStream is failed. These invocation will lead > to server calls.
Re: [PR] HADOOP-18883. [ABFS]: Expect-100 JDK bug resolution: prevent multiple server calls [hadoop]
mukund-thakur commented on code in PR #6022: URL: https://github.com/apache/hadoop/pull/6022#discussion_r1443402179 ## hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java: ## @@ -340,8 +344,11 @@ public void sendRequest(byte[] buffer, int offset, int length) throws IOExceptio If expect header is not enabled, we throw back the exception. */ String expectHeader = getConnProperty(EXPECT); -if (expectHeader != null && expectHeader.equals(HUNDRED_CONTINUE)) { +if (expectHeader != null && expectHeader.equals(HUNDRED_CONTINUE) +&& e instanceof ProtocolException +&& EXPECT_100_JDK_ERROR.equals(e.getMessage())) { Review Comment: I guess the question by @snvijaya is Do we want to prevent later API calls that trigger connections irrespective of any failures? If yes then why?
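The narrowed condition under review keys off both the exception type and its message. A hedged standalone sketch of that check (the message string below is an assumption for illustration; the patch refers to it via the `EXPECT_100_JDK_ERROR` constant):

```java
import java.io.IOException;
import java.net.ProtocolException;

// Sketch of the guard discussed above: only treat a getOutputStream()
// failure as the Expect:100-continue JDK bug when the header was set AND
// the exception is the specific ProtocolException the JDK throws.
public class Expect100Check {
    // Assumed value for illustration only; the real constant lives in the patch.
    public static final String EXPECT_100_JDK_ERROR = "Server rejected operation";
    public static final String HUNDRED_CONTINUE = "100-continue";

    public static boolean isExpect100Failure(String expectHeader, IOException e) {
        return HUNDRED_CONTINUE.equals(expectHeader)
            && e instanceof ProtocolException
            && EXPECT_100_JDK_ERROR.equals(e.getMessage());
    }

    public static void main(String[] args) {
        System.out.println(isExpect100Failure(HUNDRED_CONTINUE,
            new ProtocolException("Server rejected operation"))); // true
        System.out.println(isExpect100Failure(HUNDRED_CONTINUE,
            new IOException("connection reset"))); // false: not the JDK bug
    }
}
```

Any other IOException falls through and is thrown back to the caller, which is the behavior the reviewers are probing.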
Re: [PR] HDFS-17315. Optimize the namenode format code logic. [hadoop]
hadoop-yetus commented on PR #6400: URL: https://github.com/apache/hadoop/pull/6400#issuecomment-1879299355 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 34s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 45m 41s | | trunk passed | | +1 :green_heart: | compile | 1m 28s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 13s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 27s | | trunk passed | | +1 :green_heart: | javadoc | 1m 9s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 36s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | -1 :x: | spotbugs | 3m 36s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6400/4/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html) | hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant spotbugs warnings. | | +1 :green_heart: | shadedclient | 41m 4s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 16s | | the patch passed | | +1 :green_heart: | compile | 1m 17s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 1m 17s | | the patch passed | | +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 1m 7s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 10s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 23s | | the patch passed | | +1 :green_heart: | javadoc | 0m 58s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 25s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 3m 25s | | hadoop-hdfs-project/hadoop-hdfs generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1) | | +1 :green_heart: | shadedclient | 40m 13s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 223m 25s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6400/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 43s | | The patch does not generate ASF License warnings. 
| | | | 375m 23s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStream | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6400/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6400 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux c6acfa5e1d09 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 738b6f31bf85b37c539b9414d1a3bdfa62bd0ecd | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6400/4/testReport/ | | Max. process+thread
[jira] [Commented] (HADOOP-18883) Expect-100 JDK bug resolution: prevent multiple server calls
[ https://issues.apache.org/jira/browse/HADOOP-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803710#comment-17803710 ] ASF GitHub Bot commented on HADOOP-18883: - mukund-thakur commented on code in PR #6022: URL: https://github.com/apache/hadoop/pull/6022#discussion_r1443395651 ## hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java: ## @@ -324,14 +328,26 @@ public void sendRequest(byte[] buffer, int offset, int length) throws IOExceptio */ outputStream = getConnOutputStream(); } catch (IOException e) { -/* If getOutputStream fails with an exception and expect header - is enabled, we return back without throwing an exception to - the caller. The caller is responsible for setting the correct status code. - If expect header is not enabled, we throw back the exception. +connectionDisconnectedOnError = true; Review Comment: setting this field here and using in processResponse() means that we won't be processing response for any IOException. But isn't the intent to not process only in case of JDK error? So shouldn't this go inside the if (EXPECT_100_JDK_ERROR.equals(e.getMessage()...) check? > Expect-100 JDK bug resolution: prevent multiple server calls > > > Key: HADOOP-18883 > URL: https://issues.apache.org/jira/browse/HADOOP-18883 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Pranav Saxena >Assignee: Pranav Saxena >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > > This is inline to JDK bug: [https://bugs.openjdk.org/browse/JDK-8314978]. > > With the current implementation of HttpURLConnection if server rejects the > “Expect 100-continue” then there will be ‘java.net.ProtocolException’ will be > thrown from 'expect100Continue()' method. > After the exception thrown, If we call any other method on the same instance > (ex getHeaderField(), or getHeaderFields()). 
They will internally call > getOuputStream() which invokes writeRequests(), which make the actual server > call. > In the AbfsHttpOperation, after sendRequest() we call processResponse() > method from AbfsRestOperation. Even if the conn.getOutputStream() fails due > to expect-100 error, we consume the exception and let the code go ahead. So, > we can have getHeaderField() / getHeaderFields() / getHeaderFieldLong() which > will be triggered after getOutputStream is failed. These invocation will lead > to server calls.
Re: [PR] HADOOP-18883. [ABFS]: Expect-100 JDK bug resolution: prevent multiple server calls [hadoop]
mukund-thakur commented on code in PR #6022: URL: https://github.com/apache/hadoop/pull/6022#discussion_r1443395651 ## hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsHttpOperation.java: ## @@ -324,14 +328,26 @@ public void sendRequest(byte[] buffer, int offset, int length) throws IOExceptio */ outputStream = getConnOutputStream(); } catch (IOException e) { -/* If getOutputStream fails with an exception and expect header - is enabled, we return back without throwing an exception to - the caller. The caller is responsible for setting the correct status code. - If expect header is not enabled, we throw back the exception. +connectionDisconnectedOnError = true; Review Comment: setting this field here and using in processResponse() means that we won't be processing response for any IOException. But isn't the intent to not process only in case of JDK error? So shouldn't this go inside the if (EXPECT_100_JDK_ERROR.equals(e.getMessage()...) check?
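The reviewer's concern above is about where the flag is set relative to the error check. A minimal sketch (field and method names assumed, not the exact ones in AbfsHttpOperation) of the intended flow: remember a failed connection in the send path, then skip response processing so no header accessor re-triggers the request:

```java
// Hedged sketch of the flag-based flow under discussion; the real code
// sets the flag inside sendRequest()'s catch block and consults it in
// processResponse().
public class ConnectionGuardSketch {
    private boolean connectionDisconnectedOnError = false;

    public void sendRequest(boolean outputStreamFailed) {
        if (outputStreamFailed) {
            // Remember that getOutputStream() failed so later stages
            // never touch the half-broken connection again.
            connectionDisconnectedOnError = true;
        }
    }

    public String processResponse() {
        if (connectionDisconnectedOnError) {
            // Calling getHeaderField()/getHeaderFields() here would make a
            // second server call under the JDK Expect-100 bug, so bail out.
            return "skipped";
        }
        return "processed";
    }
}
```

The review question is precisely whether this skip should apply to every IOException or only to the specific JDK Expect-100 failure.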
[jira] [Commented] (HADOOP-13147) Constructors must not call overrideable methods
[ https://issues.apache.org/jira/browse/HADOOP-13147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803708#comment-17803708 ] ASF GitHub Bot commented on HADOOP-13147:

hadoop-yetus commented on PR #6408: URL: https://github.com/apache/hadoop/pull/6408#issuecomment-1879269064

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 21s | | Docker mode activated. |

_ Prechecks _

| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |

_ trunk Compile Tests _

| +1 :green_heart: | mvninstall | 32m 7s | | trunk passed |
| +1 :green_heart: | compile | 8m 22s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 7m 29s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 40s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 53s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 44s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 25s | | trunk passed |
| +1 :green_heart: | shadedclient | 20m 13s | | branch has no errors when building and testing our client artifacts. |

_ Patch Compile Tests _

| +1 :green_heart: | mvninstall | 0m 30s | | the patch passed |
| +1 :green_heart: | compile | 7m 56s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 7m 56s | | the patch passed |
| +1 :green_heart: | compile | 7m 34s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 7m 34s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 36s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 50s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 38s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 31s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 1m 26s | | the patch passed |
| +1 :green_heart: | shadedclient | 20m 27s | | patch has no errors when building and testing our client artifacts. |

_ Other Tests _

| +1 :green_heart: | unit | 16m 35s | | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 31s | | The patch does not generate ASF License warnings. |
| | | 132m 4s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6408/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6408 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux 7c7d603ffa37 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 140642b9752f0f702e76a61a1eee6a478eaf33ee |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6408/3/testReport/ |
| Max. process+thread count | 3153 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6408/3/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.

> Constructors must not call overrideable methods
>
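The pitfall named in the issue title above ("Constructors must not call overrideable methods") is a classic Java hazard: a constructor that invokes an overridable method dispatches to the subclass override before the subclass's own fields are initialized. The sketch below is illustrative only, not code from PR #6408; class and field names are invented.

```java
// Illustrative only -- not from PR #6408. Shows why a constructor should not
// call a method a subclass can override: the override runs while the subclass
// is still uninitialized. The fix is to make init() private, final, or static.
public class ConstructorPitfall {
    public static class Base {
        Base() {
            init();  // dangerous: virtual call dispatches to the subclass override
        }
        void init() { }  // overridable
    }

    public static class Derived extends Base {
        private String name = "configured";  // initialized AFTER Base() returns
        public String seen;

        @Override
        void init() {
            seen = name;  // runs from Base(), so 'name' is still null here
        }
    }

    public static void main(String[] args) {
        Derived d = new Derived();
        // d.seen is null, not "configured": the override observed a
        // half-constructed object.
        System.out.println("seen = " + d.seen);
    }
}
```

A static-analysis rule (e.g. in SpotBugs) flags this pattern precisely because the failure is silent at compile time and surfaces only as a surprising null at runtime.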
Re: [PR] HADOOP-13147 - Constructors must not call overrideable methods [hadoop]
[jira] [Commented] (HADOOP-19025) Migrate abstract contract tests to AssertJ
[ https://issues.apache.org/jira/browse/HADOOP-19025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803705#comment-17803705 ] ASF GitHub Bot commented on HADOOP-19025:

hadoop-yetus commented on PR #6415: URL: https://github.com/apache/hadoop/pull/6415#issuecomment-1879251962

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 48s | | Docker mode activated. |

_ Prechecks _

| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 18 new or modified test files. |

_ trunk Compile Tests _

| +1 :green_heart: | mvninstall | 48m 32s | | trunk passed |
| +1 :green_heart: | compile | 18m 9s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 16m 35s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 15s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 40s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 12s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 49s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 2m 32s | | trunk passed |
| +1 :green_heart: | shadedclient | 39m 29s | | branch has no errors when building and testing our client artifacts. |

_ Patch Compile Tests _

| +1 :green_heart: | mvninstall | 0m 56s | | the patch passed |
| +1 :green_heart: | compile | 17m 20s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 17m 20s | | the patch passed |
| +1 :green_heart: | compile | 17m 0s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 17m 0s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 13s | [/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6415/1/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt) | hadoop-common-project/hadoop-common: The patch generated 26 new + 35 unchanged - 4 fixed = 61 total (was 39) |
| +1 :green_heart: | mvnsite | 1m 34s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 7s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 47s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 2m 41s | | the patch passed |
| +1 :green_heart: | shadedclient | 40m 2s | | patch has no errors when building and testing our client artifacts. |

_ Other Tests _

| -1 :x: | unit | 19m 20s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6415/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 1m 0s | | The patch does not generate ASF License warnings. |
| | | 237m 1s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6415/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6415 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux fe9ab0e5d11c 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 1f4e29df348c81b6518c3bd2f377f92ce881c399 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | http
Re: [PR] HADOOP-19025. Migrate contract tests in hadoop-common to AssertJ [hadoop]
[jira] [Commented] (HADOOP-19014) use jsr311-compat jar to allow us to use Jackson 2.14.3
[ https://issues.apache.org/jira/browse/HADOOP-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803704#comment-17803704 ] ASF GitHub Bot commented on HADOOP-19014:

pjfanning opened a new pull request, #6416: URL: https://github.com/apache/hadoop/pull/6416

### Description of PR

Alternative to #6370. jersey-json 1.21.0 has a transitive dependency on the jsr311-compat jar.

### How was this patch tested?

### For code changes:

- [x] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
- [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [x] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?

> use jsr311-compat jar to allow us to use Jackson 2.14.3
> ---
>
> Key: HADOOP-19014
> URL: https://issues.apache.org/jira/browse/HADOOP-19014
> Project: Hadoop Common
> Issue Type: Task
> Components: common
> Reporter: PJ Fanning
> Priority: Major
> Labels: pull-request-available
>
> An alternative to HADOOP-18619
> See https://github.com/pjfanning/jsr311-compat

-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[PR] HADOOP-19014. Upgrade to Jackson 2.14.3. [hadoop]
[jira] [Commented] (HADOOP-18975) AWS SDK v2: extend support for FIPS endpoints
[ https://issues.apache.org/jira/browse/HADOOP-18975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803690#comment-17803690 ] ASF GitHub Bot commented on HADOOP-18975:

hadoop-yetus commented on PR #6277: URL: https://github.com/apache/hadoop/pull/6277#issuecomment-1879179572

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 22s | | Docker mode activated. |

_ Prechecks _

| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. |

_ trunk Compile Tests _

| +1 :green_heart: | mvninstall | 31m 58s | | trunk passed |
| +1 :green_heart: | compile | 0m 25s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 0m 20s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 20s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 25s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 17s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 24s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 0m 43s | | trunk passed |
| +1 :green_heart: | shadedclient | 19m 30s | | branch has no errors when building and testing our client artifacts. |

_ Patch Compile Tests _

| +1 :green_heart: | mvninstall | 0m 17s | | the patch passed |
| +1 :green_heart: | compile | 0m 20s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 0m 20s | | the patch passed |
| +1 :green_heart: | compile | 0m 16s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 0m 16s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 11s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 21s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 10s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 0m 16s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 0m 43s | | the patch passed |
| +1 :green_heart: | shadedclient | 19m 47s | | patch has no errors when building and testing our client artifacts. |

_ Other Tests _

| +1 :green_heart: | unit | 2m 8s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 24s | | The patch does not generate ASF License warnings. |
| | | 81m 41s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6277/10/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6277 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint xmllint |
| uname | Linux 2d0c80614a57 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 2045468f0294bfe686e2890af45dcafc56749696 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6277/10/testReport/ |
| Max. process+thread count | 553 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6277/10/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https
Re: [PR] HADOOP-18975. AWS SDK v2: extend support for FIPS endpoints [hadoop]
[jira] [Updated] (HADOOP-19025) Migrate abstract contract tests to AssertJ
[ https://issues.apache.org/jira/browse/HADOOP-19025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Attila Doroszlai updated HADOOP-19025: -- Summary: Migrate abstract contract tests to AssertJ (was: Migrate ContractTestUtils to AssertJ) > Migrate abstract contract tests to AssertJ > -- > > Key: HADOOP-19025 > URL: https://issues.apache.org/jira/browse/HADOOP-19025 > Project: Hadoop Common > Issue Type: Improvement > Components: test >Reporter: Attila Doroszlai >Assignee: Attila Doroszlai >Priority: Major > Labels: pull-request-available > > Replace assertions from JUnit4 with equivalent functionality from AssertJ, to > make {{ContractTestUtils}} independent of JUnit version.
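The migration described above (JUnit4 assertions replaced by AssertJ equivalents, so the contract test utilities no longer depend on a particular JUnit version) follows a mechanical before/after pattern. The helper below is a generic sketch of that pattern, not code taken from the HADOOP-19025 patches; the method and message are invented.

```java
// Generic sketch of a JUnit4 -> AssertJ assertion migration; hypothetical
// helper, not from PR #6415.
import static org.assertj.core.api.Assertions.assertThat;

public class AssertJMigrationSketch {
    public static void checkLength(byte[] data, int expected) {
        // JUnit4 style being replaced:
        //   assertEquals("wrong length", expected, data.length);
        // AssertJ equivalent; works regardless of the JUnit version on the
        // classpath, because AssertJ throws plain java.lang.AssertionError:
        assertThat(data.length)
            .as("wrong length")
            .isEqualTo(expected);
    }
}
```

The `.as(...)` description plays the role of JUnit4's message-first argument, which is a common source of swapped-argument bugs during such migrations.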
[jira] [Created] (HADOOP-19028) Bind abstract contract tests into JUnit5 lifecycle
Attila Doroszlai created HADOOP-19028:
-

             Summary: Bind abstract contract tests into JUnit5 lifecycle
                 Key: HADOOP-19028
                 URL: https://issues.apache.org/jira/browse/HADOOP-19028
             Project: Hadoop Common
          Issue Type: Improvement
          Components: test
            Reporter: Attila Doroszlai
            Assignee: Attila Doroszlai

I plan to add JUnit5 lifecycle annotations while keeping the existing JUnit4 ones, too. This would allow downstream contract tests to be implemented in / migrated to JUnit5 gradually, without breaking other implementations.
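The dual-annotation approach described above can be sketched as follows. This is an assumption about the intent of HADOOP-19028, not its actual diff; the class name is hypothetical. Both frameworks ignore the other's annotations, so a single setup method can serve subclasses running under either engine.

```java
// Sketch of keeping JUnit4 and JUnit5 lifecycle annotations side by side
// (assumed approach for HADOOP-19028; hypothetical class name).
import org.junit.Before;                   // JUnit 4 lifecycle
import org.junit.jupiter.api.BeforeEach;   // JUnit 5 (Jupiter) lifecycle

public class DualLifecycleContractBase {
    protected boolean contractReady;

    // JUnit 4 runners honor @Before; the Jupiter engine honors @BeforeEach.
    // Each engine silently ignores the other's annotation, so subclasses can
    // migrate to JUnit 5 gradually without breaking JUnit 4 implementations.
    @Before
    @BeforeEach
    public void setup() throws Exception {
        contractReady = true;  // stand-in for creating the contract filesystem
    }
}
```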
[jira] [Commented] (HADOOP-19025) Migrate ContractTestUtils to AssertJ
[ https://issues.apache.org/jira/browse/HADOOP-19025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803662#comment-17803662 ] Attila Doroszlai commented on HADOOP-19025: --- [~huangzhaobo99], the first PR updates classes in {{hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract}}. There are many subclasses in other modules with lots of assertions, I think you are welcome to create a ticket for any of those. > Migrate ContractTestUtils to AssertJ > > > Key: HADOOP-19025 > URL: https://issues.apache.org/jira/browse/HADOOP-19025 > Project: Hadoop Common > Issue Type: Improvement > Components: test >Reporter: Attila Doroszlai >Assignee: Attila Doroszlai >Priority: Major > Labels: pull-request-available > > Replace assertions from JUnit4 with equivalent functionality from AssertJ, to > make {{ContractTestUtils}} independent of JUnit version.
[jira] [Commented] (HADOOP-19027) S3A: S3AInputStream doesn't recover from HTTP exceptions
[ https://issues.apache.org/jira/browse/HADOOP-19027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803659#comment-17803659 ] Steve Loughran commented on HADOOP-19027:
-
The retry logic in S3AInputStream only seems to retry on the first GET; later ones it treats as fatal and assumes this is a file version issue. This is from https://github.com/apache/hadoop/pull/794

{code}
// With S3Guard, the metadatastore gave us metadata for the file in
// open(), so we use a slightly different retry policy, but only on initial
// open. After that, an exception generally means the file has changed
// and there is no point retrying anymore.
Invoker invoker = context.getReadInvoker();
invoker.maybeRetry(streamStatistics.openOperations == 0,
    "lazySeek", pathStr, true,
    ...
{code}

# We want to retry on failures, except for version change events.
# We need to map stream problems (closed, no response) to retryable.

> S3A: S3AInputStream doesn't recover from HTTP exceptions
>
> Key: HADOOP-19027
> URL: https://issues.apache.org/jira/browse/HADOOP-19027
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
>
> S3AInputStream doesn't seem to recover from Http exceptions raised through HttpClient or through OpenSSL.
> * review the recovery code to make sure it is retrying enough, it looks suspiciously like it doesn't
> * detect the relevant openssl, shaded httpclient and unshaded httpclient exceptions, map to a standard one and treat as comms error in our retry policy
>
> This is not the same as the load balancer/proxy returning 443/444 which we map to AWSNoResponseException. We can't reuse that as it expects to be created from an {{software.amazon.awssdk.awscore.exception.AwsServiceException}} exception with the relevant fields...changing it could potentially be incompatible.
-- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
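The second bullet in the comment above — map "no response" and "stream closed" failures to a single retryable comms error — can be sketched as follows. The class name, the `isRetryableCommsError` helper, and the string heuristics are all illustrative assumptions for this digest, not the actual S3A retry policy code:

```java
import java.io.IOException;

/**
 * Sketch: walk an exception's cause chain and decide whether it looks like a
 * recoverable connection drop (retryable) rather than a fatal error.
 * Names and matching heuristics are illustrative, not the S3A implementation.
 */
public class CommsErrorClassifier {

    /** True if any cause in the chain looks like a dropped/closed connection. */
    public static boolean isRetryableCommsError(Throwable t) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            String name = cur.getClass().getName();
            String msg = cur.getMessage() == null ? "" : cur.getMessage();
            if (name.endsWith("NoHttpResponseException")      // shaded/unshaded httpclient
                    || msg.contains("failed to respond")      // "The target server failed to respond"
                    || msg.contains("Stream is closed")) {    // WFOPENSSL0035 from OpenSSL wrapper
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        IOException dropped = new IOException(
            "Unable to execute HTTP request: The target server failed to respond");
        System.out.println(isRetryableCommsError(dropped));                     // true
        System.out.println(isRetryableCommsError(new IOException("HTTP 404"))); // false
    }
}
```

Matching on class name strings rather than class literals is deliberate here: the shaded httpclient exception lives under `software.amazon.awssdk.thirdparty.org.apache.http`, so a direct `instanceof` against the unshaded type would not catch it.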
[jira] [Commented] (HADOOP-19027) S3A: S3AInputStream doesn't recover from HTTP exceptions
[ https://issues.apache.org/jira/browse/HADOOP-19027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803654#comment-17803654 ] Steve Loughran commented on HADOOP-19027: - full stack trace {code} java.lang.RuntimeException: software.amazon.awssdk.thirdparty.org.apache.http.NoHttpResponseException: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: The target server failed to respond: The target server failed to respond at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:351) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:280) at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:84) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:70) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:70) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:40) at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125) at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69) at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Caused by: 
org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.RuntimeException: software.amazon.awssdk.thirdparty.org.apache.http.NoHttpResponseException: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: The target server failed to respond: The target server failed to respond at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:80) at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:437) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:297) ... 16 more Caused by: java.io.IOException: java.lang.RuntimeException: software.amazon.awssdk.thirdparty.org.apache.http.NoHttpResponseException: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: The target server failed to respond: The target server failed to respond at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121) at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77) at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:381) at org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:82) at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:119) at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:59) at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:151) at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116) at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68) ... 
18 more Caused by: java.lang.RuntimeException: software.amazon.awssdk.thirdparty.org.apache.http.NoHttpResponseException: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: The target server failed to respond: The target server failed to respond at org.apache.iceberg.mr.hive.vector.HiveBatchIterator.advance(HiveBatchIterator.java:129) at org.apache.iceberg.mr.hive.vector.HiveBatchIterator.hasNext(HiveBatchIterator.java:137) at org.apache.iceberg.mr.hive.vector.HiveDeleteFilter$1$1.hasNext(HiveDeleteFilter.java:98) at org.apache.iceberg.mr.mapreduce.IcebergInputFormat$IcebergRecordReader.nextKeyValue(IcebergInputFormat.java:299) at org.apache.iceberg.mr.hive.vector.HiveIcebergVectorizedRecordReader.next(HiveI
[jira] [Commented] (HADOOP-18830) S3A: Cut S3 Select
[ https://issues.apache.org/jira/browse/HADOOP-18830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803652#comment-17803652 ] ASF GitHub Bot commented on HADOOP-18830: - hadoop-yetus commented on PR #6144: URL: https://github.com/apache/hadoop/pull/6144#issuecomment-1879075825 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 53s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 13 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 38s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 30m 43s | | trunk passed | | +1 :green_heart: | compile | 16m 11s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 14m 51s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 4m 13s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 33s | | trunk passed | | +1 :green_heart: | javadoc | 1m 15s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 22s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 42s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 33m 7s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 31s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 0m 43s | | the patch passed | | +1 :green_heart: | compile | 15m 38s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 15m 38s | | the patch passed | | +1 :green_heart: | compile | 14m 47s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 14m 47s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 4m 2s | | root: The patch generated 0 new + 6 unchanged - 8 fixed = 6 total (was 14) | | +1 :green_heart: | mvnsite | 1m 29s | | the patch passed | | +1 :green_heart: | javadoc | 1m 10s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 21s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 36s | | hadoop-project has no data from spotbugs | | +1 :green_heart: | shadedclient | 32m 59s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 32s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 2m 55s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 59s | | The patch does not generate ASF License warnings. 
| | | | 206m 3s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6144/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6144 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle markdownlint | | uname | Linux e7bd09825574 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / a254800c511f9a3cd3f78e399e6bb834c69132fc | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-mult
Re: [PR] HADOOP-18830. Cut S3 Select [hadoop]
hadoop-yetus commented on PR #6144: URL: https://github.com/apache/hadoop/pull/6144#issuecomment-1879075825 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 53s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 13 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 38s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 30m 43s | | trunk passed | | +1 :green_heart: | compile | 16m 11s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 14m 51s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 4m 13s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 33s | | trunk passed | | +1 :green_heart: | javadoc | 1m 15s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 22s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 42s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 33m 7s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 31s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 0m 43s | | the patch passed | | +1 :green_heart: | compile | 15m 38s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 15m 38s | | the patch passed | | +1 :green_heart: | compile | 14m 47s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 14m 47s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 4m 2s | | root: The patch generated 0 new + 6 unchanged - 8 fixed = 6 total (was 14) | | +1 :green_heart: | mvnsite | 1m 29s | | the patch passed | | +1 :green_heart: | javadoc | 1m 10s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 1m 21s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +0 :ok: | spotbugs | 0m 36s | | hadoop-project has no data from spotbugs | | +1 :green_heart: | shadedclient | 32m 59s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 32s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 2m 55s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 59s | | The patch does not generate ASF License warnings. 
| | | | 206m 3s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6144/8/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6144 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle markdownlint | | uname | Linux e7bd09825574 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / a254800c511f9a3cd3f78e399e6bb834c69132fc | | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6144/8/testReport/ | | Max. process+thread count | 750 (vs. ulimit of 5500) | | modules | C: hadoop-project hadoop-tools/hadoop-aws U: . | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6144/8/console |
[jira] [Commented] (HADOOP-19027) S3A: S3AInputStream doesn't recover from HTTP exceptions
[ https://issues.apache.org/jira/browse/HADOOP-19027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803649#comment-17803649 ] Steve Loughran commented on HADOOP-19027: - openssl, "stream is closed" {code} Error while running task ( failure ) : attempt_1703842027450_0084_4_02_04_0:java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.RuntimeException: java.io.IOException: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: WFOPENSSL0035 Stream is closed: WFOPENSSL0035 Stream is closed at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:351) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:280) at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:84) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:70) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:70) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:40) at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125) at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69) at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:750) Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.RuntimeException: java.io.IOException: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: WFOPENSSL0035 Stream is closed: WFOPENSSL0035 Stream is closed at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:80) at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:437) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:297) ... 16 more Caused by: java.io.IOException: java.lang.RuntimeException: java.io.IOException: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: WFOPENSSL0035 Stream is closed: WFOPENSSL0035 Stream is closed at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderNextException(HiveIOExceptionHandlerChain.java:121) at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderNextException(HiveIOExceptionHandlerUtil.java:77) at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:381) at org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:82) at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:119) at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:59) at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:151) at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116) at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:68) ... 
18 more Caused by: java.lang.RuntimeException: java.io.IOException: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: WFOPENSSL0035 Stream is closed: WFOPENSSL0035 Stream is closed at org.apache.iceberg.mr.hive.vector.HiveBatchIterator.advance(HiveBatchIterator.java:129) {code} and httpclient software.amazon.awssdk.thirdparty.org.apache.http.NoHttpResponseException with "The target server failed to respond" {code} Error while running task ( failure ) : attempt_1704316855291_0218_13_02_04_0:java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: java.lang.RuntimeException: software.amazon.awssdk.thirdparty.org.apache.http.NoHttpResponseException: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: The target server failed to respond: The target server failed to respond at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initi
[jira] [Created] (HADOOP-19027) S3A: S3AInputStream doesn't recover from HTTP exceptions
Steve Loughran created HADOOP-19027: --- Summary: S3A: S3AInputStream doesn't recover from HTTP exceptions Key: HADOOP-19027 URL: https://issues.apache.org/jira/browse/HADOOP-19027 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: 3.4.0 Reporter: Steve Loughran Assignee: Steve Loughran S3AInputStream doesn't seem to recover from Http exceptions raised through HttpClient or through OpenSSL. * review the recovery code to make sure it is retrying enough, it looks suspiciously like it doesn't * detect the relevant openssl, shaded httpclient and unshaded httpclient exceptions, map to a standard one and treat as comms error in our retry policy This is not the same as the load balancer/proxy returning 443/444 which we map to AWSNoResponseException. We can't reuse that as it expects to be created from an {{software.amazon.awssdk.awscore.exception.AwsServiceException}} exception with the relevant fields...changing it could potentially be incompatible.
[jira] [Commented] (HADOOP-19025) Migrate ContractTestUtils to AssertJ
[ https://issues.apache.org/jira/browse/HADOOP-19025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803644#comment-17803644 ] huangzhaobo99 commented on HADOOP-19025: [~adoroszlai] Can this task be split into multiple tickets? If possible, would you mind assigning me some? > Migrate ContractTestUtils to AssertJ > > > Key: HADOOP-19025 > URL: https://issues.apache.org/jira/browse/HADOOP-19025 > Project: Hadoop Common > Issue Type: Improvement > Components: test >Reporter: Attila Doroszlai >Assignee: Attila Doroszlai >Priority: Major > Labels: pull-request-available > > Replace assertions from JUnit4 with equivalent functionality from AssertJ, to > make {{ContractTestUtils}} independent of JUnit version.
Re: [PR] HDFS-17324. RBF: Router should not return nameservices that not enable observer r… [hadoop]
goiri commented on code in PR #6412: URL: https://github.com/apache/hadoop/pull/6412#discussion_r1443127143 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java: ## @@ -136,9 +136,9 @@ public class RouterRpcClient { /** Field separator of CallerContext. */ private final String contextFieldSeparator; /** Observer read enabled. Default for all nameservices. */ - private final boolean observerReadEnabledDefault; Review Comment: Making these things static is not very good practice. ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStateIdContext.java: ## @@ -86,7 +86,7 @@ public void setResponseHeaderState(RpcResponseHeaderProto.Builder headerBuilder) } RouterFederatedStateProto.Builder builder = RouterFederatedStateProto.newBuilder(); namespaceIdMap.forEach((k, v) -> { - if (v.get() != Long.MIN_VALUE) { + if ((v.get() != Long.MIN_VALUE) && RouterRpcClient.isNamespaceObserverReadEligible(k)) { Review Comment: Are you making all the other stuff static because of this? I don't think this is clean.
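The review objection above is to moving per-router state into static fields so another class can reach it. A minimal sketch of the instance-based alternative — passing the setting through a constructor instead of a static accessor — is below; the class and method names are illustrative, not the actual `RouterRpcClient` API:

```java
/**
 * Sketch: keep configuration as instance state injected via the constructor,
 * rather than static fields read through a static accessor.
 * Names are illustrative; this is not the real RouterRpcClient code.
 */
public class ObserverReadConfig {
    // Instance field: each router/client instance in the same JVM keeps its
    // own value, so tests and multi-router setups don't interfere.
    private final boolean observerReadEnabledDefault;

    public ObserverReadConfig(boolean observerReadEnabledDefault) {
        this.observerReadEnabledDefault = observerReadEnabledDefault;
    }

    public boolean isObserverReadEligible(String nameservice) {
        // Real logic would also consult per-nameservice overrides;
        // here we just return the injected default.
        return observerReadEnabledDefault;
    }
}
```

A collaborator such as the state-id context would then receive an `ObserverReadConfig` reference at construction time instead of calling a static method, which keeps the dependency explicit.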
[jira] [Updated] (HADOOP-19025) Migrate ContractTestUtils to AssertJ
[ https://issues.apache.org/jira/browse/HADOOP-19025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-19025: Labels: pull-request-available (was: ) > Migrate ContractTestUtils to AssertJ > > > Key: HADOOP-19025 > URL: https://issues.apache.org/jira/browse/HADOOP-19025 > Project: Hadoop Common > Issue Type: Improvement > Components: test >Reporter: Attila Doroszlai >Assignee: Attila Doroszlai >Priority: Major > Labels: pull-request-available > > Replace assertions from JUnit4 with equivalent functionality from AssertJ, to > make {{ContractTestUtils}} independent of JUnit version.
[jira] [Commented] (HADOOP-19025) Migrate ContractTestUtils to AssertJ
[ https://issues.apache.org/jira/browse/HADOOP-19025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803631#comment-17803631 ] ASF GitHub Bot commented on HADOOP-19025: - adoroszlai opened a new pull request, #6415: URL: https://github.com/apache/hadoop/pull/6415 ## What changes were proposed in this pull request? Replace assertions in `ContractTestUtils` and abstract contract tests with `assertThat` from AssertJ, to reduce dependency on JUnit4. Kept `extends Assert` for compatibility, but I'd like to get rid of that in the long run. https://issues.apache.org/jira/browse/HADOOP-19025 ## How was this patch tested? ``` mvn -DskipShade -am -pl :hadoop-hdfs -Dtest='TestLocalFSContract*,TestHDFSContract*,TestRawLocal*' clean test ``` > Migrate ContractTestUtils to AssertJ > > > Key: HADOOP-19025 > URL: https://issues.apache.org/jira/browse/HADOOP-19025 > Project: Hadoop Common > Issue Type: Improvement > Components: test >Reporter: Attila Doroszlai >Assignee: Attila Doroszlai >Priority: Major > > Replace assertions from JUnit4 with equivalent functionality from AssertJ, to > make {{ContractTestUtils}} independent of JUnit version.
[PR] HADOOP-19025. Migrate contract tests in hadoop-common to AssertJ [hadoop]
adoroszlai opened a new pull request, #6415: URL: https://github.com/apache/hadoop/pull/6415 ## What changes were proposed in this pull request? Replace assertions in `ContractTestUtils` and abstract contract tests with `assertThat` from AssertJ, to reduce dependency on JUnit4. Kept `extends Assert` for compatibility, but I'd like to get rid of that in the long run. https://issues.apache.org/jira/browse/HADOOP-19025 ## How was this patch tested? ``` mvn -DskipShade -am -pl :hadoop-hdfs -Dtest='TestLocalFSContract*,TestHDFSContract*,TestRawLocal*' clean test ```
Re: [PR] HDFS-17302. RBF: ProportionRouterRpcFairnessPolicyController-Sharing and isolation. [hadoop]
goiri commented on code in PR #6380: URL: https://github.com/apache/hadoop/pull/6380#discussion_r1443121381 ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/fairness/TestRouterHandlersFairness.java: ## @@ -308,4 +375,28 @@ private void innerCalls(URI address, int numOps, boolean isConcurrent, overloadException.get(); } } + + private static Map<String, Integer> expectedHandlerPerNs(String str) { +Map<String, Integer> handlersPerNsMap = new HashMap<>(); +if (str != null) { + String[] tmpStrs = str.split(", "); + for(String tmpStr : tmpStrs) { +String[] handlersPerNs = tmpStr.split(":"); +handlersPerNsMap.put(handlersPerNs[0], Integer.valueOf(handlersPerNs[1])); + } +} +return handlersPerNsMap; + } + + private static Map<String, String> setConfiguration(String str) { +Map<String, String> conf = new HashMap<>(); +if (str != null) { + String[] tmpStrs = str.split(", "); + for(String tmpStr : tmpStrs) { +String[] configKV = tmpStr.split("="); +conf.put(configKV[0], configKV[1]); Review Comment: Extract and probably check the length. 
## hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/fairness/TestRouterHandlersFairness.java: ## @@ -308,4 +375,28 @@ private void innerCalls(URI address, int numOps, boolean isConcurrent, overloadException.get(); } } + + private static Map<String, Integer> expectedHandlerPerNs(String str) { +Map<String, Integer> handlersPerNsMap = new HashMap<>(); +if (str != null) { + String[] tmpStrs = str.split(", "); + for(String tmpStr : tmpStrs) { +String[] handlersPerNs = tmpStr.split(":"); +handlersPerNsMap.put(handlersPerNs[0], Integer.valueOf(handlersPerNs[1])); + } +} +return handlersPerNsMap; + } + + private static Map<String, String> setConfiguration(String str) { +Map<String, String> conf = new HashMap<>(); +if (str != null) { Review Comment: Early exit: ``` if (str == null) { return conf; } ``` ## hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/fairness/ProportionRouterRpcFairnessPolicyController.java: ## @@ -0,0 +1,101 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.hdfs.server.federation.fairness; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hdfs.server.federation.router.FederationUtil; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; +import java.util.Set; + +import static org.apache.hadoop.hdfs.server.federation.fairness.RouterRpcFairnessConstants.CONCURRENT_NS; +import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_FAIR_HANDLER_PROPORTION_DEFAULT; +import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_FAIR_HANDLER_PROPORTION_KEY_PREFIX; +import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HANDLER_COUNT_DEFAULT; +import static org.apache.hadoop.hdfs.server.federation.router.RBFConfigKeys.DFS_ROUTER_HANDLER_COUNT_KEY; + +/** + * Proportion fairness policy extending {@link AbstractRouterRpcFairnessPolicyController} + * and fetching proportion of handlers from configuration for all available name services, + * based on the proportion and the total number of handlers, calculate the handlers of all ns. + * The handlers count will not change for this controller. + */ +public class ProportionRouterRpcFairnessPolicyController extends +AbstractRouterRpcFairnessPolicyController{ + + private static final Logger LOG = + LoggerFactory.getLogger(ProportionRouterRpcFairnessPolicyController.class); + // For unregistered ns, the default ns is used, + // so the configuration can be simplified if the handler ratio of all ns is 1, + // and transparent expansion of new ns can be supported. + private static final String DEFAULT_NS = "default_ns"; + + public ProportionRouterRpcFairnessPolicyController(Configuration conf){ +init(conf); + } + + @Override + public void init(Configuration conf) { +super.init(conf); +// Total handlers configured to process all incoming Rpc. +int hand
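The two review suggestions above — early exit on `null` and checking the token length before indexing into the split result — can be combined in one helper. This is a sketch of the reviewer's suggestion; `NsConfigParser` and `parseHandlersPerNs` are illustrative names, not code from the PR:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of the review feedback on the "ns:handlers" parsing helper:
 * early-exit on null input, and validate the split length before indexing.
 * Names are illustrative, not the actual test code in the PR.
 */
public class NsConfigParser {

    /** Parse "ns0:2, ns1:3" into {ns0=2, ns1=3}. */
    public static Map<String, Integer> parseHandlersPerNs(String str) {
        Map<String, Integer> handlersPerNs = new HashMap<>();
        if (str == null) {          // early exit, as suggested in the review
            return handlersPerNs;
        }
        for (String entry : str.split(", ")) {
            String[] kv = entry.split(":");
            if (kv.length != 2) {   // check the length before indexing
                throw new IllegalArgumentException(
                    "Expected <ns>:<handlers>, got: " + entry);
            }
            handlersPerNs.put(kv[0], Integer.valueOf(kv[1]));
        }
        return handlersPerNs;
    }
}
```

Validating `kv.length` turns a malformed test fixture string into a clear `IllegalArgumentException` instead of an `ArrayIndexOutOfBoundsException` deep inside the loop.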
Re: [PR] HDFS-17315. Optimize the namenode format code logic. [hadoop]
huangzhaobo99 commented on PR #6400: URL: https://github.com/apache/hadoop/pull/6400#issuecomment-1878972507 > Thanks for fixing the spotbugs warning and refactoring the code, @huangzhaobo99! @tasanuma Thanks for your merge to trunk!
Re: [PR] HDFS-17315. Optimize the namenode format code logic. [hadoop]
tasanuma commented on PR #6400: URL: https://github.com/apache/hadoop/pull/6400#issuecomment-1878968475 Thanks for fixing the spotbugs warning and refactoring the code, @huangzhaobo99!
Re: [PR] HDFS-17315. Optimize the namenode format code logic. [hadoop]
tasanuma merged PR #6400: URL: https://github.com/apache/hadoop/pull/6400
Re: [PR] HDFS-17315. Optimize the namenode format code logic. [hadoop]
tasanuma commented on PR #6400: URL: https://github.com/apache/hadoop/pull/6400#issuecomment-1878966616 The CI results would be the same as https://github.com/apache/hadoop/pull/6400#issuecomment-1873682217. The spotbugs warnings should disappear after this PR is merged. I'm merging it.
Re: [PR] HDFS-17300. [SBN READ] Observer should throw ObserverRetryOnActiveException if stateid is always delayed with Active Namenode for a configured time [hadoop]
hadoop-yetus commented on PR #6414: URL: https://github.com/apache/hadoop/pull/6414#issuecomment-1878913920

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 21s | | Docker mode activated. |
|||| _ Prechecks _ ||
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ ||
| +0 :ok: | mvndep | 14m 2s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 20m 14s | | trunk passed |
| +1 :green_heart: | compile | 8m 40s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 7m 45s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | checkstyle | 2m 3s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 40s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 18s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 33s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| -1 :x: | spotbugs | 1m 45s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6414/1/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html) | hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant spotbugs warnings. |
| +1 :green_heart: | shadedclient | 21m 1s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ ||
| +0 :ok: | mvndep | 0m 19s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 4s | | the patch passed |
| +1 :green_heart: | compile | 8m 9s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 8m 9s | | the patch passed |
| +1 :green_heart: | compile | 7m 37s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 7m 37s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 2m 2s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 44s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 14s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 1m 43s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | spotbugs | 3m 22s | | the patch passed |
| +1 :green_heart: | shadedclient | 21m 31s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ ||
| -1 :x: | unit | 15m 58s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6414/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch passed. |
| -1 :x: | unit | 186m 47s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6414/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 41s | | The patch does not generate ASF License warnings. |
| | | | 335m 31s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.conf.TestCommonConfigurationFields |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6414/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6414 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
| uname | Linux 388e1a351c90 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / c589b62974209b5fbf08716838757f56b9882dda |
| Default Jav
[jira] [Commented] (HADOOP-18975) AWS SDK v2: extend support for FIPS endpoints
[ https://issues.apache.org/jira/browse/HADOOP-18975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803588#comment-17803588 ] ASF GitHub Bot commented on HADOOP-18975: - steveloughran commented on code in PR #6277: URL: https://github.com/apache/hadoop/pull/6277#discussion_r1443045900 ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java: ## @@ -165,6 +175,8 @@ private , ClientT> Build .pathStyleAccessEnabled(parameters.isPathStyleAccess()) .build(); +builder.fipsEnabled(parameters.isFipsEnabled()); Review Comment: ahh > AWS SDK v2: extend support for FIPS endpoints > -- > > Key: HADOOP-18975 > URL: https://issues.apache.org/jira/browse/HADOOP-18975 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > > v1 SDK supported FIPS just by changing the endpoint. > Now we have a new builder setting to use. > * add new fs.s3a.endpoint.fips option > * pass it down > * test -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
Re: [PR] HADOOP-18975. AWS SDK v2: extend support for FIPS endpoints [hadoop]
steveloughran commented on code in PR #6277: URL: https://github.com/apache/hadoop/pull/6277#discussion_r1443045900 ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java: ## @@ -165,6 +175,8 @@ private , ClientT> Build .pathStyleAccessEnabled(parameters.isPathStyleAccess()) .build(); +builder.fipsEnabled(parameters.isFipsEnabled()); Review Comment: ahh
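Per the HADOOP-18975 description quoted above, the change adds a new `fs.s3a.endpoint.fips` option that is passed down to the S3 client builder. A minimal configuration sketch — the boolean value shown is an assumption from the builder's `fipsEnabled(...)` call, not confirmed elsewhere in this thread:

```xml
<!-- core-site.xml (sketch): enable FIPS endpoints for S3A.
     Property name from the issue description; the boolean value
     and its default are assumptions. -->
<property>
  <name>fs.s3a.endpoint.fips</name>
  <value>true</value>
</property>
```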
Re: [PR] HDFS-17315. Optimize the namenode format code logic. [hadoop]
huangzhaobo99 commented on PR #6400: URL: https://github.com/apache/hadoop/pull/6400#issuecomment-1878864458 > @huangzhaobo99 Thanks for the PR. It seems that the spotbugs warning is not targeting the latest change. I think there might be an issue with the CI configuration. Let's ignore it for now. I believe the state of [759fb6e](https://github.com/apache/hadoop/commit/759fb6e63d346baaa513c0ec0a3445ff79db9cc2) is good. Could you revert it back to that state? Thank you for your reply. I have reverted it.
Re: [PR] HDFS-17315. Optimize the namenode format code logic. [hadoop]
tasanuma commented on PR #6400: URL: https://github.com/apache/hadoop/pull/6400#issuecomment-1878852193 @huangzhaobo99 Thanks for the PR. It seems that the spotbugs warning is not targeting the latest change. I think there might be an issue with the CI configuration. Let's ignore it for now. I believe the state of 759fb6e is good. Could you revert it back to that state?
[jira] [Commented] (HADOOP-18830) S3A: Cut S3 Select
[ https://issues.apache.org/jira/browse/HADOOP-18830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803573#comment-17803573 ] ASF GitHub Bot commented on HADOOP-18830: - steveloughran commented on PR #6144: URL: https://github.com/apache/hadoop/pull/6144#issuecomment-1878842405 rebased pr with retest. failures unrelated; the signing one has an active pr to fix, the committer one looks like my config is at fault (bucket overrides not being cut) ``` [ERROR] Failures: [ERROR] ITestS3ACommitterFactory.testEverything:115->testInvalidFileBinding:165 Expected a org.apache.hadoop.fs.s3a.commit.PathCommitException to be thrown, but got the result: : FileOutputCommitter{PathOutputCommitter{context=TaskAttemptContextImpl{JobContextImpl{jobId=job_202401050108_0001}; taskId=attempt_202401050108_0001_m_00_0, status=''}; org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter@61fa8914}; outputPath=s3a://stevel--usw2-az1--x-s3/fork-0001/test/testEverything, workPath=s3a://stevel--usw2-az1--x-s3/fork-0001/test/testEverything/_temporary/1/_temporary/attempt_202401050108_0001_m_00_0, algorithmVersion=1, skipCleanup=false, ignoreCleanupFailures=false} [ERROR] Errors: [ERROR] ITestCustomSigner.testCustomSignerAndInitializer:135->runStoreOperationsAndVerify:155->lambda$runStoreOperationsAndVerify$0:160 » AWSBadRequest [ERROR] ITestCustomSigner.testCustomSignerAndInitializer:135->runStoreOperationsAndVerify:155->lambda$runStoreOperationsAndVerify$0:160 » AWSBadRequest [INFO] ``` > S3A: Cut S3 Select > -- > > Key: HADOOP-18830 > URL: https://issues.apache.org/jira/browse/HADOOP-18830 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > > getting s3 select to work with the v2 sdk is tricky, we need to add extra > libraries to the classpath beyond just bundle.jar. 
we can do this but > * AFAIK nobody has ever done CSV predicate pushdown, as it breaks split logic > completely > * CSV is a bad format > * one-line JSON more structured but also way less efficient > ORC/Parquet benefit from vectored IO and work spanning the cluster. > accordingly, I'm wondering what to do about s3 select > # cut? > # downgrade to optional and document the extra classes on the classpath > Option #2 is straightforward and effectively the default. we can also declare > the feature deprecated. > {code} > [ERROR] > testReadLandsatRecordsNoMatch(org.apache.hadoop.fs.s3a.select.ITestS3SelectLandsat) > Time elapsed: 147.958 s <<< ERROR! > java.io.IOException: java.lang.NoClassDefFoundError: > software/amazon/eventstream/MessageDecoder > at > org.apache.hadoop.fs.s3a.select.SelectObjectContentHelper.select(SelectObjectContentHelper.java:75) > at > org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$select$10(WriteOperationHelper.java:660) > at > org.apache.hadoop.fs.store.audit.AuditingFunctions.lambda$withinAuditSpan$0(AuditingFunctions.java:62) > at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:122) > {code}
Re: [PR] HADOOP-18830. Cut S3 Select [hadoop]
steveloughran commented on PR #6144: URL: https://github.com/apache/hadoop/pull/6144#issuecomment-1878842405

rebased pr with retest. failures unrelated; the signing one has an active pr to fix, the committer one looks like my config is at fault (bucket overrides not being cut)

```
[ERROR] Failures:
[ERROR] ITestS3ACommitterFactory.testEverything:115->testInvalidFileBinding:165 Expected a org.apache.hadoop.fs.s3a.commit.PathCommitException to be thrown, but got the result: : FileOutputCommitter{PathOutputCommitter{context=TaskAttemptContextImpl{JobContextImpl{jobId=job_202401050108_0001}; taskId=attempt_202401050108_0001_m_00_0, status=''}; org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter@61fa8914}; outputPath=s3a://stevel--usw2-az1--x-s3/fork-0001/test/testEverything, workPath=s3a://stevel--usw2-az1--x-s3/fork-0001/test/testEverything/_temporary/1/_temporary/attempt_202401050108_0001_m_00_0, algorithmVersion=1, skipCleanup=false, ignoreCleanupFailures=false}
[ERROR] Errors:
[ERROR] ITestCustomSigner.testCustomSignerAndInitializer:135->runStoreOperationsAndVerify:155->lambda$runStoreOperationsAndVerify$0:160 » AWSBadRequest
[ERROR] ITestCustomSigner.testCustomSignerAndInitializer:135->runStoreOperationsAndVerify:155->lambda$runStoreOperationsAndVerify$0:160 » AWSBadRequest
[INFO]
```
[jira] [Commented] (HADOOP-19024) change to bouncy castle jdk1.8 jars
[ https://issues.apache.org/jira/browse/HADOOP-19024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803571#comment-17803571 ]

ASF GitHub Bot commented on HADOOP-19024:

hadoop-yetus commented on PR #6410: URL: https://github.com/apache/hadoop/pull/6410#issuecomment-1878836593

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 49s | | Docker mode activated. |
|||| _ Prechecks _ ||
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. |
| +0 :ok: | shelldocs | 0m 1s | | Shelldocs was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ ||
| +0 :ok: | mvndep | 14m 49s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 35m 34s | | trunk passed |
| +1 :green_heart: | compile | 18m 34s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 16m 43s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | mvnsite | 19m 44s | | trunk passed |
| +1 :green_heart: | javadoc | 8m 54s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 7m 38s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | shadedclient | 53m 29s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ ||
| +0 :ok: | mvndep | 0m 38s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 44m 34s | | the patch passed |
| +1 :green_heart: | compile | 17m 46s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 17m 46s | | the patch passed |
| +1 :green_heart: | compile | 16m 37s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 16m 37s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 15m 45s | | the patch passed |
| +1 :green_heart: | shellcheck | 0m 0s | | No new issues. |
| +1 :green_heart: | javadoc | 8m 52s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 7m 32s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | shadedclient | 54m 47s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ ||
| -1 :x: | unit | 781m 59s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6410/1/artifact/out/patch-unit-root.txt) | root in the patch passed. |
| +1 :green_heart: | asflicense | 1m 29s | | The patch does not generate ASF License warnings. |
| | | | 1098m 59s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6410/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6410 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint markdownlint shellcheck shelldocs |
| uname | Linux f3e66b6999b8 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / c5d946ceb3e225448ce9932002da794a0127 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private
Re: [PR] HADOOP-19024. Use bouncycastle jdk18 1.77 [hadoop]
hadoop-yetus commented on PR #6410: URL: https://github.com/apache/hadoop/pull/6410#issuecomment-1878836593

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|::|--:|:|::|:---:|
| +0 :ok: | reexec | 0m 49s | | Docker mode activated. |
|||| _ Prechecks _ ||
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 1s | | xmllint was not available. |
| +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. |
| +0 :ok: | shelldocs | 0m 1s | | Shelldocs was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ ||
| +0 :ok: | mvndep | 14m 49s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 35m 34s | | trunk passed |
| +1 :green_heart: | compile | 18m 34s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | compile | 16m 43s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | mvnsite | 19m 44s | | trunk passed |
| +1 :green_heart: | javadoc | 8m 54s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 7m 38s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | shadedclient | 53m 29s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ ||
| +0 :ok: | mvndep | 0m 38s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 44m 34s | | the patch passed |
| +1 :green_heart: | compile | 17m 46s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javac | 17m 46s | | the patch passed |
| +1 :green_heart: | compile | 16m 37s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | javac | 16m 37s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 15m 45s | | the patch passed |
| +1 :green_heart: | shellcheck | 0m 0s | | No new issues. |
| +1 :green_heart: | javadoc | 8m 52s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 |
| +1 :green_heart: | javadoc | 7m 32s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| +1 :green_heart: | shadedclient | 54m 47s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ ||
| -1 :x: | unit | 781m 59s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6410/1/artifact/out/patch-unit-root.txt) | root in the patch passed. |
| +1 :green_heart: | asflicense | 1m 29s | | The patch does not generate ASF License warnings. |
| | | | 1098m 59s | | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6410/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6410 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint markdownlint shellcheck shelldocs |
| uname | Linux f3e66b6999b8 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / c5d946ceb3e225448ce9932002da794a0127 |
| Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6410/1/testReport/ |
| Max. process+thread count | 2517 (vs. ulimit of 5500) |
| modules | C: hadoop-project hadoop-common-project/hado
[jira] [Commented] (HADOOP-18708) AWS SDK V2 - Implement CSE
[ https://issues.apache.org/jira/browse/HADOOP-18708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803562#comment-17803562 ] ASF GitHub Bot commented on HADOOP-18708: - steveloughran commented on code in PR #6164: URL: https://github.com/apache/hadoop/pull/6164#discussion_r1442966873 ## hadoop-project/pom.xml: ## @@ -188,6 +188,7 @@ 900 1.12.565 2.20.160 + 3.1.0 Review Comment: so this isn't in the big bundle? what does it depend on transitively? ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/TestErrorTranslation.java: ## @@ -153,4 +156,20 @@ public void testMultiObjectExceptionFilledIn() throws Throwable { .describedAs("retry policy of MultiObjectException") .isFalse(); } + + @Test + public void testEncryptionClientExceptionExtraction() throws Throwable { Review Comment: add a test for a false match: string contains the pattern looked for, but inner cause is some RTE, verify translation doesn't crash ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ErrorTranslation.java: ## @@ -106,6 +110,24 @@ public static IOException maybeExtractIOException(String path, Throwable thrown) } + /** + * Extracts the underlying exception from an S3EncryptionClientException. Review Comment: nit: wrong class named ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ErrorTranslation.java: ## @@ -106,6 +110,24 @@ public static IOException maybeExtractIOException(String path, Throwable thrown) } + /** + * Extracts the underlying exception from an S3EncryptionClientException. + * @param exception amazon exception raised + * @return extractedException + */ + public static SdkException maybeExtractSdkException(SdkException exception) { +SdkException extractedException = exception; +if (exception.toString().contains(ENCRYPTION_CLIENT_EXCEPTION)) { Review Comment: 1. should this be in the classname, or is it one of those things which gets passed down as strings? 2. 
also include check for cause being instanceof SdkException so class cast problems don't lose stack trace of any other problem > AWS SDK V2 - Implement CSE > -- > > Key: HADOOP-18708 > URL: https://issues.apache.org/jira/browse/HADOOP-18708 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Ahmar Suhail >Assignee: Ahmar Suhail >Priority: Major > Labels: pull-request-available > > S3 Encryption client for SDK V2 is now available, so add client side > encryption back in.
Re: [PR] HADOOP-18708. AWS SDK V2 - Implement CSE [hadoop]
steveloughran commented on code in PR #6164: URL: https://github.com/apache/hadoop/pull/6164#discussion_r1442966873 ## hadoop-project/pom.xml: ## @@ -188,6 +188,7 @@ 900 1.12.565 2.20.160 + 3.1.0 Review Comment: so this isn't in the big bundle? what does it depend on transitively? ## hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/TestErrorTranslation.java: ## @@ -153,4 +156,20 @@ public void testMultiObjectExceptionFilledIn() throws Throwable { .describedAs("retry policy of MultiObjectException") .isFalse(); } + + @Test + public void testEncryptionClientExceptionExtraction() throws Throwable { Review Comment: add a test for a false match: string contains the pattern looked for, but inner cause is some RTE, verify translation doesn't crash ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ErrorTranslation.java: ## @@ -106,6 +110,24 @@ public static IOException maybeExtractIOException(String path, Throwable thrown) } + /** + * Extracts the underlying exception from an S3EncryptionClientException. Review Comment: nit: wrong class named ## hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/ErrorTranslation.java: ## @@ -106,6 +110,24 @@ public static IOException maybeExtractIOException(String path, Throwable thrown) } + /** + * Extracts the underlying exception from an S3EncryptionClientException. + * @param exception amazon exception raised + * @return extractedException + */ + public static SdkException maybeExtractSdkException(SdkException exception) { +SdkException extractedException = exception; +if (exception.toString().contains(ENCRYPTION_CLIENT_EXCEPTION)) { Review Comment: 1. should this be in the classname, or is it one of those things which gets passed down as strings? 2. also include check for cause being instanceof SdkException so class cast problems don't lose stack trace of any other problem
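A hedged sketch of the reviewer's second point above — only unwrap when the cause really is an `SdkException`, so any other failure keeps its own stack trace. `SdkException` below is a local stand-in for `software.amazon.awssdk.core.exception.SdkException`, and the logic is illustrative rather than the final patch:

```java
// Sketch of the suggested guard for maybeExtractSdkException: the local
// SdkException class is a hypothetical stand-in for the AWS SDK v2 type.
public class UnwrapSketch {

  static class SdkException extends RuntimeException {
    SdkException(String message, Throwable cause) { super(message, cause); }
  }

  static final String ENCRYPTION_CLIENT_EXCEPTION = "S3EncryptionClientException";

  /**
   * Return the wrapped cause only when this looks like an encryption client
   * wrapper AND the cause is itself an SdkException; otherwise return the
   * exception unchanged (no ClassCastException, no lost stack trace).
   */
  static SdkException maybeExtractSdkException(SdkException exception) {
    if (exception.toString().contains(ENCRYPTION_CLIENT_EXCEPTION)
        && exception.getCause() instanceof SdkException) {
      return (SdkException) exception.getCause();
    }
    return exception;
  }

  public static void main(String[] args) {
    SdkException inner = new SdkException("access denied", null);
    SdkException wrapper =
        new SdkException("S3EncryptionClientException: decrypt failed", inner);
    // Matching message with an SdkException cause: unwrapped.
    System.out.println(maybeExtractSdkException(wrapper) == inner);
    // Matching message but a non-SdkException cause: returned unchanged.
    SdkException odd = new SdkException("S3EncryptionClientException: boom",
        new IllegalStateException());
    System.out.println(maybeExtractSdkException(odd) == odd);
  }
}
```

This is exactly the "false match" case the review asks a test for: the message contains the pattern but the cause is a plain RuntimeException, and translation must not crash.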
[jira] [Updated] (HADOOP-18886) S3A: AWS SDK V2 Migration: stabilization and S3Express
[ https://issues.apache.org/jira/browse/HADOOP-18886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-18886: Description: The final stabilisation changes to the V2 SDK MIgration; those moved off the HADOOP-18073 JIRA so we can close that. also adds support to Amazon S3 Express One Zone storage was:The final stabilisation changes to the V2 SDK MIgration; those moved off the HADOOP-18073 JIRA so we can close that. > S3A: AWS SDK V2 Migration: stabilization and S3Express > -- > > Key: HADOOP-18886 > URL: https://issues.apache.org/jira/browse/HADOOP-18886 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: ahmar#1 >Priority: Major > > The final stabilisation changes to the V2 SDK MIgration; those moved off the > HADOOP-18073 JIRA so we can close that. > also adds support to Amazon S3 Express One Zone storage
[jira] [Updated] (HADOOP-18135) Produce Windows binaries of Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-18135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gautham Banasandra updated HADOOP-18135: Description: We currently only provide Linux libraries and binaries. We need to provide the same for Windows. We need to port the [create-release script|https://github.com/apache/hadoop/blob/5f9932acc4fa2b36a3005e587637c53f2da1618d/dev-support/bin/create-release] to run on Windows and produce the Windows binaries. (was: We currently only provide Linux libraries and binaries. We need to provide the same for Windows.) > Produce Windows binaries of Hadoop > -- > > Key: HADOOP-18135 > URL: https://issues.apache.org/jira/browse/HADOOP-18135 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.4.0 > Environment: Windows 10 >Reporter: Gautham Banasandra >Assignee: Gautham Banasandra >Priority: Major > > We currently only provide Linux libraries and binaries. We need to provide > the same for Windows. We need to port the [create-release > script|https://github.com/apache/hadoop/blob/5f9932acc4fa2b36a3005e587637c53f2da1618d/dev-support/bin/create-release] > to run on Windows and produce the Windows binaries.
Re: [PR] HDFS-17324. RBF: Router should not return nameservices that not enable observer r… [hadoop]
hadoop-yetus commented on PR #6412: URL: https://github.com/apache/hadoop/pull/6412#issuecomment-1878570815 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 20s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 43m 59s | | trunk passed | | +1 :green_heart: | compile | 0m 26s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | compile | 0m 19s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 20s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 28s | | trunk passed | | +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 21s | | trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | spotbugs | 0m 52s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 9s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 21s | | the patch passed | | +1 :green_heart: | compile | 0m 19s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javac | 0m 19s | | the patch passed | | +1 :green_heart: | compile | 0m 18s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | +1 :green_heart: | javac | 0m 18s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 12s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6412/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) | | +1 :green_heart: | mvnsite | 0m 21s | | the patch passed | | +1 :green_heart: | javadoc | 0m 18s | | the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 | | +1 :green_heart: | javadoc | 0m 17s | | the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 | | -1 :x: | spotbugs | 0m 51s | [/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6412/2/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.html) | hadoop-hdfs-project/hadoop-hdfs-rbf generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | shadedclient | 22m 19s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 19m 13s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 23s | | The patch does not generate ASF License warnings. 
| | | | 117m 1s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs-rbf | | | Write to static field org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.observerReadEnabledDefault from instance method new org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient(Configuration, Router, ActiveNamenodeResolver, RouterRpcMonitor, RouterStateIdContext) At RouterRpcClient.java:from instance method new org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient(Configuration, Router, ActiveNamenodeResolver, RouterRpcMonitor, RouterStateIdContext) At RouterRpcClient.java:[line 224] | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6412/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6412 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux d495ec4b5c0a 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/
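[Editor's note] The SpotBugs finding in the report above flags a write to a static field from a constructor. The sketch below illustrates that bug class and one common fix with hypothetical names; it is not the actual RouterRpcClient code.

```java
// Minimal illustration of the "write to static field from instance method"
// SpotBugs pattern flagged above, plus a fix. All names are hypothetical.
public class StaticWriteDemo {

    // Flagged version: every constructor call silently rewrites shared state,
    // so the last instance constructed wins for all instances.
    static class Flagged {
        static boolean featureEnabledDefault;

        Flagged(boolean configured) {
            // SpotBugs warns here: an instance method writes a static field.
            featureEnabledDefault = configured;
        }
    }

    // Fixed version: per-instance configuration lives in an instance field.
    static class Fixed {
        private final boolean featureEnabled;

        Fixed(boolean configured) {
            this.featureEnabled = configured;
        }

        boolean isFeatureEnabled() {
            return featureEnabled;
        }
    }

    public static void main(String[] args) {
        new Flagged(true);
        new Flagged(false);   // clobbers the value set by the first instance
        System.out.println(Flagged.featureEnabledDefault);  // false

        Fixed a = new Fixed(true);
        new Fixed(false);     // does not affect 'a'
        System.out.println(a.isFeatureEnabled());           // true
    }
}
```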
Re: [PR] YARN-11622. Fix ResourceManager asynchronous switch from Standy to Active exception [hadoop]
hadoop-yetus commented on PR #6352: URL: https://github.com/apache/hadoop/pull/6352#issuecomment-1878567308 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 21s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ branch-3.3 Compile Tests _ | | +1 :green_heart: | mvninstall | 66m 37s | | branch-3.3 passed | | +1 :green_heart: | compile | 0m 33s | | branch-3.3 passed | | +1 :green_heart: | checkstyle | 0m 27s | | branch-3.3 passed | | +1 :green_heart: | mvnsite | 0m 37s | | branch-3.3 passed | | +1 :green_heart: | javadoc | 2m 20s | | branch-3.3 passed | | +1 :green_heart: | spotbugs | 1m 12s | | branch-3.3 passed | | +1 :green_heart: | shadedclient | 22m 16s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 31s | | the patch passed | | +1 :green_heart: | compile | 0m 27s | | the patch passed | | +1 :green_heart: | javac | 0m 27s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 0m 20s | | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager: The patch generated 0 new + 66 unchanged - 1 fixed = 66 total (was 67) | | +1 :green_heart: | mvnsite | 0m 30s | | the patch passed | | +1 :green_heart: | javadoc | 0m 21s | | the patch passed | | -1 :x: | spotbugs | 1m 14s | [/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6352/9/artifact/out/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.html) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | shadedclient | 21m 34s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 75m 34s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | asflicense | 0m 23s | | The patch does not generate ASF License warnings. 
| | | | 196m 13s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | | Exceptional return value of java.util.concurrent.ExecutorService.submit(Callable) ignored in org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.handleTransitionToStandByInNewThread() At ResourceManager.java:ignored in org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.handleTransitionToStandByInNewThread() At ResourceManager.java:[line 1131] | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6352/9/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/6352 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 8243ad94cb2d 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | branch-3.3 / b1202a8f8f6e6d94a0319dfa54264a0a31e3825a | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~18.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6352/9/testReport/ | | Max. process+thread count | 939 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6352/9/console | | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is an
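[Editor's note] The SpotBugs finding above ("Exceptional return value of ExecutorService.submit(Callable) ignored") fires because discarding the returned Future means any exception thrown by the task is silently lost. A hedged sketch of the usual fix, with an illustrative task body rather than the actual ResourceManager transition code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Keeping the Future returned by submit() lets the caller observe task
// failures: Future.get() rethrows them wrapped in ExecutionException.
public class SubmitDemo {

    public static String runTransition() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            // The flagged pattern would be:  pool.submit(task);  // Future discarded
            Future<String> result = pool.submit(() -> "standby");
            return result.get(5, TimeUnit.SECONDS);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runTransition()); // prints "standby"
    }
}
```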
[jira] [Commented] (HADOOP-19014) use jsr311-compat jar to allow us to use Jackson 2.14.3
[ https://issues.apache.org/jira/browse/HADOOP-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803525#comment-17803525 ] ASF GitHub Bot commented on HADOOP-19014: - pjfanning commented on code in PR #6370: URL: https://github.com/apache/hadoop/pull/6370#discussion_r1442799261 ## hadoop-project/pom.xml: ## @@ -921,6 +921,11 @@ + Review Comment: I'll add some comments and recheck the dependency scopes. > use jsr311-compat jar to allow us to use Jackson 2.14.3 > --- > > Key: HADOOP-19014 > URL: https://issues.apache.org/jira/browse/HADOOP-19014 > Project: Hadoop Common > Issue Type: Task > Components: common >Reporter: PJ Fanning >Priority: Major > Labels: pull-request-available > > An alternative to HADOOP-18619 > See https://github.com/pjfanning/jsr311-compat
Re: [PR] HADOOP-19014. Jackson 2.14.3 requiring jsr311-compat [hadoop]
pjfanning commented on code in PR #6370: URL: https://github.com/apache/hadoop/pull/6370#discussion_r1442799261 ## hadoop-project/pom.xml: ## @@ -921,6 +921,11 @@ + Review Comment: I'll add some comments and recheck the dependency scopes. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (HADOOP-19024) change to bouncy castle jdk1.8 jars
[ https://issues.apache.org/jira/browse/HADOOP-19024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803523#comment-17803523 ] ASF GitHub Bot commented on HADOOP-19024: - steveloughran commented on PR #6410: URL: https://github.com/apache/hadoop/pull/6410#issuecomment-1878553469 I agree that this is needed > change to bouncy castle jdk1.8 jars > --- > > Key: HADOOP-19024 > URL: https://issues.apache.org/jira/browse/HADOOP-19024 > Project: Hadoop Common > Issue Type: Task >Reporter: PJ Fanning >Priority: Major > Labels: pull-request-available > > They have stopped patching the JDK 1.5 jars that Hadoop uses (see > https://issues.apache.org/jira/browse/HADOOP-18540). > The new artifacts have similar names - but the names are like bcprov-jdk18on > as opposed to bcprov-jdk15on. > CVE-2023-33201 is an example of a security issue that seems only to be fixed > in the JDK 1.8 artifacts (ie no JDK 1.5 jar has the fix). > https://www.bouncycastle.org/releasenotes.html#r1rv77 latest current release > but the CVE was fixed in 1.74.
Re: [PR] HADOOP-19024. Use bouncycastle jdk18 1.77 [hadoop]
steveloughran commented on PR #6410: URL: https://github.com/apache/hadoop/pull/6410#issuecomment-1878553469 I agree that this is needed
Re: [PR] HADOOP-19014. Jackson 2.14.3 requiring jsr311-compat [hadoop]
steveloughran commented on PR #6370: URL: https://github.com/apache/hadoop/pull/6370#issuecomment-1878552124 I'm happy with this; I just need to understand whether there are any problems with it being optional. We will need to mark it in the release notes as incompatible, explain the issue and why the jar should be excluded if you do have the jax-rs stuff on the classpath, and note that findclass can always be used to work out where the class is from.
[jira] [Commented] (HADOOP-19023) ITestS3AConcurrentOps#testParallelRename intermittent timeout failure
[ https://issues.apache.org/jira/browse/HADOOP-19023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803521#comment-17803521 ] Steve Loughran commented on HADOOP-19023: - * make sure you've not got a site config with an aggressive timeout * do set version/component in the issue fields...it's not picked up from the parent > ITestS3AConcurrentOps#testParallelRename intermittent timeout failure > - > > Key: HADOOP-19023 > URL: https://issues.apache.org/jira/browse/HADOOP-19023 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Viraj Jasani >Priority: Major > > Need to configure higher timeout for the test. > > {code:java} > [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: > 256.281 s <<< FAILURE! - in > org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps > [ERROR] > testParallelRename(org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps) > Time elapsed: 72.565 s <<< ERROR! > org.apache.hadoop.fs.s3a.AWSApiCallTimeoutException: Writing Object on > fork-0005/test/testParallelRename-source0: > software.amazon.awssdk.core.exception.ApiCallTimeoutException: Client > execution did not complete before the specified timeout configuration: 15000 > millis > at > org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:215) > at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124) > at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:376) > at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468) > at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:372) > at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:347) > at > org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:214) > at > org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:532) > at > org.apache.hadoop.fs.s3a.S3ABlockOutputStream.lambda$putObject$0(S3ABlockOutputStream.java:620) > at > 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125) > at > org.apache.hadoop.thirdparty.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69) > at > org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78) > at > org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225) > at > org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:750) > Caused by: software.amazon.awssdk.core.exception.ApiCallTimeoutException: > Client execution did not complete before the specified timeout configuration: > 15000 millis > at > software.amazon.awssdk.core.exception.ApiCallTimeoutException$BuilderImpl.build(ApiCallTimeoutException.java:97) > at > software.amazon.awssdk.core.exception.ApiCallTimeoutException.create(ApiCallTimeoutException.java:38) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.generateApiCallTimeoutException(ApiCallTimeoutTrackingStage.java:151) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.handleInterruptedException(ApiCallTimeoutTrackingStage.java:139) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.translatePipelineException(ApiCallTimeoutTrackingStage.java:107) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:62) > at > 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:50) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:32) > at > software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206) > at > software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineS
[jira] [Updated] (HADOOP-19023) ITestS3AConcurrentOps#testParallelRename intermittent timeout failure
[ https://issues.apache.org/jira/browse/HADOOP-19023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-19023: Component/s: fs/s3 > ITestS3AConcurrentOps#testParallelRename intermittent timeout failure > - > > Key: HADOOP-19023 > URL: https://issues.apache.org/jira/browse/HADOOP-19023 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Viraj Jasani >Priority: Major > > Need to configure higher timeout for the test. > > {code:java} > [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: > 256.281 s <<< FAILURE! - in > org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps > [ERROR] > testParallelRename(org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps) > Time elapsed: 72.565 s <<< ERROR! > org.apache.hadoop.fs.s3a.AWSApiCallTimeoutException: Writing Object on > fork-0005/test/testParallelRename-source0: > software.amazon.awssdk.core.exception.ApiCallTimeoutException: Client > execution did not complete before the specified timeout configuration: 15000 > millis > at > org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:215) > at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124) > at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:376) > at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468) > at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:372) > at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:347) > at > org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:214) > at > org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:532) > at > org.apache.hadoop.fs.s3a.S3ABlockOutputStream.lambda$putObject$0(S3ABlockOutputStream.java:620) > at > org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125) > at > 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69) > at > org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78) > at > org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225) > at > org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:750) > Caused by: software.amazon.awssdk.core.exception.ApiCallTimeoutException: > Client execution did not complete before the specified timeout configuration: > 15000 millis > at > software.amazon.awssdk.core.exception.ApiCallTimeoutException$BuilderImpl.build(ApiCallTimeoutException.java:97) > at > software.amazon.awssdk.core.exception.ApiCallTimeoutException.create(ApiCallTimeoutException.java:38) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.generateApiCallTimeoutException(ApiCallTimeoutTrackingStage.java:151) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.handleInterruptedException(ApiCallTimeoutTrackingStage.java:139) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.translatePipelineException(ApiCallTimeoutTrackingStage.java:107) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:62) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42) > at > 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:50) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:32) > at > software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206) > at > software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExcepti
[jira] [Updated] (HADOOP-19023) ITestS3AConcurrentOps#testParallelRename intermittent timeout failure
[ https://issues.apache.org/jira/browse/HADOOP-19023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-19023: Affects Version/s: 3.4.0 > ITestS3AConcurrentOps#testParallelRename intermittent timeout failure > - > > Key: HADOOP-19023 > URL: https://issues.apache.org/jira/browse/HADOOP-19023 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.4.0 >Reporter: Viraj Jasani >Priority: Major > > Need to configure higher timeout for the test. > > {code:java} > [ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: > 256.281 s <<< FAILURE! - in > org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps > [ERROR] > testParallelRename(org.apache.hadoop.fs.s3a.scale.ITestS3AConcurrentOps) > Time elapsed: 72.565 s <<< ERROR! > org.apache.hadoop.fs.s3a.AWSApiCallTimeoutException: Writing Object on > fork-0005/test/testParallelRename-source0: > software.amazon.awssdk.core.exception.ApiCallTimeoutException: Client > execution did not complete before the specified timeout configuration: 15000 > millis > at > org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:215) > at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124) > at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:376) > at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468) > at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:372) > at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:347) > at > org.apache.hadoop.fs.s3a.WriteOperationHelper.retry(WriteOperationHelper.java:214) > at > org.apache.hadoop.fs.s3a.WriteOperationHelper.putObject(WriteOperationHelper.java:532) > at > org.apache.hadoop.fs.s3a.S3ABlockOutputStream.lambda$putObject$0(S3ABlockOutputStream.java:620) > at > org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125) > at > 
org.apache.hadoop.thirdparty.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69) > at > org.apache.hadoop.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78) > at > org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225) > at > org.apache.hadoop.util.SemaphoredDelegatingExecutor$RunnableWithPermitRelease.run(SemaphoredDelegatingExecutor.java:225) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:750) > Caused by: software.amazon.awssdk.core.exception.ApiCallTimeoutException: > Client execution did not complete before the specified timeout configuration: > 15000 millis > at > software.amazon.awssdk.core.exception.ApiCallTimeoutException$BuilderImpl.build(ApiCallTimeoutException.java:97) > at > software.amazon.awssdk.core.exception.ApiCallTimeoutException.create(ApiCallTimeoutException.java:38) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.generateApiCallTimeoutException(ApiCallTimeoutTrackingStage.java:151) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.handleInterruptedException(ApiCallTimeoutTrackingStage.java:139) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.translatePipelineException(ApiCallTimeoutTrackingStage.java:107) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:62) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42) > at > 
software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:50) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:32) > at > software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206) > at > software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206) > at > software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:3
[jira] [Updated] (HADOOP-19022) ITestS3AConfiguration#testRequestTimeout failure
[ https://issues.apache.org/jira/browse/HADOOP-19022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-19022: Component/s: fs/s3 test > ITestS3AConfiguration#testRequestTimeout failure > > > Key: HADOOP-19022 > URL: https://issues.apache.org/jira/browse/HADOOP-19022 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.4.0 >Reporter: Viraj Jasani >Priority: Minor > > "fs.s3a.connection.request.timeout" should be specified in milliseconds as per > {code:java} > Duration apiCallTimeout = getDuration(conf, REQUEST_TIMEOUT, > DEFAULT_REQUEST_TIMEOUT_DURATION, TimeUnit.MILLISECONDS, Duration.ZERO); > {code} > The test fails consistently because it sets 120 ms timeout which is less than > 15s (min network operation duration), and hence gets reset to 15000 ms based > on the enforcement. > > {code:java} > [ERROR] testRequestTimeout(org.apache.hadoop.fs.s3a.ITestS3AConfiguration) > Time elapsed: 0.016 s <<< FAILURE! > java.lang.AssertionError: Configured fs.s3a.connection.request.timeout is > different than what AWS sdk configuration uses internally expected:<12> > but was:<15000> > at org.junit.Assert.fail(Assert.java:89) > at org.junit.Assert.failNotEquals(Assert.java:835) > at org.junit.Assert.assertEquals(Assert.java:647) > at > org.apache.hadoop.fs.s3a.ITestS3AConfiguration.testRequestTimeout(ITestS3AConfiguration.java:444) > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
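[Editor's note] The enforcement described in the issue above, where a configured request timeout below the minimum network operation duration (15s per the report) is raised to that minimum, amounts to a simple clamp. A minimal sketch, with illustrative names rather than the actual S3A code:

```java
import java.time.Duration;

// Sketch of the clamping behaviour described above: a configured timeout
// shorter than the minimum allowed network operation duration is raised to
// that minimum. Constant and method names are hypothetical.
public class TimeoutClampDemo {

    static final Duration MINIMUM_NETWORK_OPERATION = Duration.ofSeconds(15);

    static Duration enforceMinimum(Duration configured) {
        return configured.compareTo(MINIMUM_NETWORK_OPERATION) < 0
            ? MINIMUM_NETWORK_OPERATION
            : configured;
    }

    public static void main(String[] args) {
        // 120 ms, as in the failing test, is raised to 15000 ms.
        System.out.println(enforceMinimum(Duration.ofMillis(120)).toMillis());  // 15000
        // A value above the floor passes through unchanged.
        System.out.println(enforceMinimum(Duration.ofSeconds(20)).toMillis());  // 20000
    }
}
```

This is why the test's expectation of 120 ms can never hold against the SDK's effective value of 15000 ms.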
[jira] [Comment Edited] (HADOOP-19022) ITestS3AConfiguration#testRequestTimeout failure
[ https://issues.apache.org/jira/browse/HADOOP-19022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803520#comment-17803520 ] Steve Loughran edited comment on HADOOP-19022 at 1/5/24 11:51 AM: -- should be a string now, e.g "20s". have you explicitly set it in your site config? was (Author: ste...@apache.org): should be a string now, e.g "20s" > ITestS3AConfiguration#testRequestTimeout failure > > > Key: HADOOP-19022 > URL: https://issues.apache.org/jira/browse/HADOOP-19022 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.4.0 >Reporter: Viraj Jasani >Priority: Minor > > "fs.s3a.connection.request.timeout" should be specified in milliseconds as per > {code:java} > Duration apiCallTimeout = getDuration(conf, REQUEST_TIMEOUT, > DEFAULT_REQUEST_TIMEOUT_DURATION, TimeUnit.MILLISECONDS, Duration.ZERO); > {code} > The test fails consistently because it sets 120 ms timeout which is less than > 15s (min network operation duration), and hence gets reset to 15000 ms based > on the enforcement. > > {code:java} > [ERROR] testRequestTimeout(org.apache.hadoop.fs.s3a.ITestS3AConfiguration) > Time elapsed: 0.016 s <<< FAILURE! > java.lang.AssertionError: Configured fs.s3a.connection.request.timeout is > different than what AWS sdk configuration uses internally expected:<12> > but was:<15000> > at org.junit.Assert.fail(Assert.java:89) > at org.junit.Assert.failNotEquals(Assert.java:835) > at org.junit.Assert.assertEquals(Assert.java:647) > at > org.apache.hadoop.fs.s3a.ITestS3AConfiguration.testRequestTimeout(ITestS3AConfiguration.java:444) > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
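[Editor's note] The "should be a string now, e.g \"20s\"" remark above refers to configuration values carrying a time-unit suffix. The sketch below only illustrates that value format with a tiny hypothetical parser; Hadoop's actual parsing lives elsewhere and supports more suffixes.

```java
import java.util.concurrent.TimeUnit;

// Illustrative parser for suffixed duration strings such as "20s" or "500ms".
// This is a format sketch, not Hadoop's Configuration implementation.
public class DurationStringDemo {

    static long toMillis(String value) {
        String v = value.trim().toLowerCase();
        if (v.endsWith("ms")) {
            return Long.parseLong(v.substring(0, v.length() - 2));
        } else if (v.endsWith("s")) {
            return TimeUnit.SECONDS.toMillis(Long.parseLong(v.substring(0, v.length() - 1)));
        } else if (v.endsWith("m")) {
            return TimeUnit.MINUTES.toMillis(Long.parseLong(v.substring(0, v.length() - 1)));
        }
        // Bare numbers are treated as milliseconds.
        return Long.parseLong(v);
    }

    public static void main(String[] args) {
        System.out.println(toMillis("20s"));   // 20000
        System.out.println(toMillis("500ms")); // 500
        System.out.println(toMillis("120"));   // 120
    }
}
```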
[jira] [Updated] (HADOOP-19022) ITestS3AConfiguration#testRequestTimeout failure
[ https://issues.apache.org/jira/browse/HADOOP-19022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-19022: Affects Version/s: 3.4.0 > ITestS3AConfiguration#testRequestTimeout failure > > > Key: HADOOP-19022 > URL: https://issues.apache.org/jira/browse/HADOOP-19022 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.4.0 >Reporter: Viraj Jasani >Priority: Minor > > "fs.s3a.connection.request.timeout" should be specified in milliseconds as per > {code:java} > Duration apiCallTimeout = getDuration(conf, REQUEST_TIMEOUT, > DEFAULT_REQUEST_TIMEOUT_DURATION, TimeUnit.MILLISECONDS, Duration.ZERO); > {code} > The test fails consistently because it sets 120 ms timeout which is less than > 15s (min network operation duration), and hence gets reset to 15000 ms based > on the enforcement. > > {code:java} > [ERROR] testRequestTimeout(org.apache.hadoop.fs.s3a.ITestS3AConfiguration) > Time elapsed: 0.016 s <<< FAILURE! > java.lang.AssertionError: Configured fs.s3a.connection.request.timeout is > different than what AWS sdk configuration uses internally expected:<12> > but was:<15000> > at org.junit.Assert.fail(Assert.java:89) > at org.junit.Assert.failNotEquals(Assert.java:835) > at org.junit.Assert.assertEquals(Assert.java:647) > at > org.apache.hadoop.fs.s3a.ITestS3AConfiguration.testRequestTimeout(ITestS3AConfiguration.java:444) > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-19022) ITestS3AConfiguration#testRequestTimeout failure
[ https://issues.apache.org/jira/browse/HADOOP-19022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803520#comment-17803520 ]

Steve Loughran commented on HADOOP-19022:
-----------------------------------------

should be a string now, e.g. "20s"
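The enforcement discussed in HADOOP-19022 above (a requested request timeout below the 15s minimum network operation duration is raised to 15000 ms, which is why the test sees 15000 instead of 120) can be sketched with pure JDK code. The class and constant names here are illustrative only, not the actual S3A implementation:

```java
import java.time.Duration;

public class TimeoutClamp {
    // Assumed floor: the 15s minimum network operation duration from the issue.
    static final Duration MINIMUM_OPERATION_DURATION = Duration.ofSeconds(15);

    /** Return the requested timeout, raised to the minimum if it is below it. */
    static Duration enforceMinimum(Duration requested) {
        return requested.compareTo(MINIMUM_OPERATION_DURATION) < 0
            ? MINIMUM_OPERATION_DURATION
            : requested;
    }

    public static void main(String[] args) {
        // A 120 ms request is below the floor and comes back as 15000 ms,
        // matching the "expected ... but was:<15000>" failure above.
        System.out.println(enforceMinimum(Duration.ofMillis(120)).toMillis());  // 15000
        // A value above the floor, e.g. the "20s" string form, passes through.
        System.out.println(enforceMinimum(Duration.ofSeconds(20)).toMillis());  // 20000
    }
}
```

Under this behavior, configuring "fs.s3a.connection.request.timeout" with any value under 15s is silently promoted, so a test asserting the raw configured value back from the SDK settings will fail.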
[jira] [Commented] (HADOOP-19025) Migrate ContractTestUtils to AssertJ
[ https://issues.apache.org/jira/browse/HADOOP-19025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803519#comment-17803519 ]

Steve Loughran commented on HADOOP-19025:
-----------------------------------------

looking forward to this!

> Migrate ContractTestUtils to AssertJ
> ------------------------------------
>
>                 Key: HADOOP-19025
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19025
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: test
>            Reporter: Attila Doroszlai
>            Assignee: Attila Doroszlai
>            Priority: Major
>
> Replace assertions from JUnit4 with equivalent functionality from AssertJ, to
> make {{ContractTestUtils}} independent of JUnit version.
[jira] [Commented] (HADOOP-19026) S3A: TestIAMInstanceCredentialsProvider.testIAMInstanceCredentialsInstantiate failure
[ https://issues.apache.org/jira/browse/HADOOP-19026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17803518#comment-17803518 ]

Steve Loughran commented on HADOOP-19026:
-----------------------------------------

{code}
java.lang.AssertionError: Cause not a IOException
	at org.apache.hadoop.fs.s3a.auth.TestIAMInstanceCredentialsProvider.__CLR4_4_1ey8o37g7b(TestIAMInstanceCredentialsProvider.java:100)
	at org.apache.hadoop.fs.s3a.auth.TestIAMInstanceCredentialsProvider.testIAMInstanceCredentialsInstantiate(TestIAMInstanceCredentialsProvider.java:72)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:750)
Caused by: software.amazon.awssdk.core.exception.SdkClientException: Failed to load credentials from IMDS.
	at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:111)
	at software.amazon.awssdk.core.exception.SdkClientException.create(SdkClientException.java:47)
	at software.amazon.awssdk.auth.credentials.InstanceProfileCredentialsProvider.refreshCredentials(InstanceProfileCredentialsProvider.java:159)
	at software.amazon.awssdk.utils.cache.CachedSupplier.lambda$jitteredPrefetchValueSupplier$8(CachedSupplier.java:300)
	at software.amazon.awssdk.utils.cache.NonBlocking.fetch(NonBlocking.java:141)
	at software.amazon.awssdk.utils.cache.CachedSupplier.refreshCache(CachedSupplier.java:208)
	at software.amazon.awssdk.utils.cache.CachedSupplier.get(CachedSupplier.java:135)
	at software.amazon.awssdk.auth.credentials.InstanceProfileCredentialsProvider.resolveCredentials(InstanceProfileCredentialsProvider.java:141)
	at org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider.getCredentials(IAMInstanceCredentialsProvider.java:135)
	at org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider.resolveCredentials(IAMInstanceCredentialsProvider.java:98)
	at org.apache.hadoop.fs.s3a.auth.TestIAMInstanceCredentialsProvider.__CLR4_4_1ey8o37g7b(TestIAMInstanceCredentialsProvider.java:75)
	... 14 more
Caused by: software.amazon.awssdk.core.exception.SdkClientException: The requested metadata is not found at http://169.254.169.254/latest/meta-data/iam/security-credentials/
	at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:111)
	at software.amazon.awssdk.regions.util.HttpResourcesUtils.readResource(HttpResourcesUtils.java:125)
	at software.amazon.awssdk.regions.util.HttpResourcesUtils.readResource(HttpResourcesUtils.java:91)
	at software.amazon.awssdk.auth.credentials.InstanceProfileCredentialsProvider.lambda$getSecurityCredentials$3(InstanceProfileCredentialsProvider.java:256)
	at software.amazon.awssdk.utils.FunctionalUtils.lambda$safeSupplier$4(FunctionalUtils.java:108)
	at software.amazon.awssdk.utils.FunctionalUtils.invokeSafely(FunctionalUtils.java:136)
	at software.amazon.awssdk.auth.credentials.InstanceProfileCredentialsProvider.getSecurityCredentials(InstanceProfileCredentialsProvider.java:256)
	at software.amazon.awssdk.auth.credentials.InstanceProfileCredentialsProvider.createEndpointProvider(InstanceProfileCredentialsProvider.java:204)
	at software.amazon.awssdk.auth.credentials.InstanceProfileCredentialsProvider.refreshCredentials(InstanceProfileCredentialsProvider.java:150)
	... 22 more
{code}

> S3A: TestIAMInstanceCredentialsProvider.testIAMInstanceCredentialsInstantiate
> failure
> -----------------------------------------------------------------------------
>
>                 Key: HADOOP-19026
>                 URL: https://issues.apache.org/jira/browse/HADOOP-19026
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3, test
>    Affects Versions: 3.4.0
>
[jira] [Created] (HADOOP-19026) S3A: TestIAMInstanceCredentialsProvider.testIAMInstanceCredentialsInstantiate failure
Steve Loughran created HADOOP-19026:
---------------------------------------

             Summary: S3A: TestIAMInstanceCredentialsProvider.testIAMInstanceCredentialsInstantiate failure
                 Key: HADOOP-19026
                 URL: https://issues.apache.org/jira/browse/HADOOP-19026
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/s3, test
    Affects Versions: 3.4.0
            Reporter: Steve Loughran

test failure in TestIAMInstanceCredentialsProvider; looks like the test is running in an EC2 VM whose IAM service isn't providing credentials, and the test isn't set up to ignore that.

{code}
Caused by: software.amazon.awssdk.core.exception.SdkClientException: The requested metadata is not found at http://169.254.169.254/latest/meta-data/iam/security-credentials/
	at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:111)
	at software.amazon.awssdk.regions.util.HttpResourcesUtils.readResource(HttpResourcesUtils.java:125)
	at software.amazon.awssdk.regions.util.HttpResourcesUtils.readResource(HttpResourcesUtils.java:91)
	at software.amazon.awssdk.auth.credentials.InstanceProfileCredentialsProvider.lambda$getSecurityCredentials$3(InstanceProfileCredentialsProvider.java:256)
	at software.amazon.awssdk.utils.FunctionalUtils.lambda$safeSupplier$4(FunctionalUtils.java:108)
	at software.amazon.awssdk.utils.FunctionalUtils.invokeSafely(FunctionalUtils.java:136)
	at software.amazon.awssdk.auth.credentials.InstanceProfileCredentialsProvider.getSecurityCredentials(InstanceProfileCredentialsProvider.java:256)
	at software.amazon.awssdk.auth.credentials.InstanceProfileCredentialsProvider.createEndpointProvider(InstanceProfileCredentialsProvider.java:204)
	at software.amazon.awssdk.auth.credentials.InstanceProfileCredentialsProvider.refreshCredentials(InstanceProfileCredentialsProvider.java:150)
{code}
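The failing assertion in HADOOP-19026 is "Cause not a IOException": the SDK wraps the real failure several levels deep, so a check that only looks at the direct cause misses it. One tolerant approach is to walk the whole cause chain before failing. A minimal pure-JDK sketch of that idea (the helper name is illustrative, not the actual Hadoop test code, and plain RuntimeExceptions stand in for the AWS SDK exception types):

```java
import java.io.IOException;

public class CauseChainCheck {
    /** Return true if t or any of its transitive causes is an IOException. */
    static boolean hasIOExceptionCause(Throwable t) {
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (c instanceof IOException) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Mimics the nesting above: SdkClientException-style wrappers around
        // a low-level failure, two levels deep.
        Exception nested = new RuntimeException("Failed to load credentials from IMDS.",
            new RuntimeException("metadata not found",
                new IOException("connect timed out")));
        System.out.println(hasIOExceptionCause(nested));                         // true
        System.out.println(hasIOExceptionCause(new RuntimeException("no IO")));  // false
    }
}
```

A test using such a helper would still need to decide whether "no IMDS credentials available" is a failure or a skip condition when running outside a configured EC2 instance, which is the actual question the issue raises.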
Re: [PR] HDFS-17269. RBF: Listing trash directory should return subdirs from all subclusters. [hadoop]
slfan1989 commented on PR #6312:
URL: https://github.com/apache/hadoop/pull/6312#issuecomment-1878479567

> @LiuGuH Would it be more chaotic to display the trash directories of all clusters at once? Should we allow users to choose whether to display the trash directories of all subclusters through a configuration option, instead of enforcing such behavior?

I agree with @zhtttylz's idea. For a cluster with 7-8 nameservices, it is not user-friendly.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
Re: [PR] HDFS-17309. RBF: Fix Router Safemode check condition error [hadoop]
LiuGuH commented on PR #6390:
URL: https://github.com/apache/hadoop/pull/6390#issuecomment-1878479270

> @LiuGuH When we submit a PR again, a better way is to write the JIRA corresponding to this PR in the description information.
>
> Like: `JIRA: HDFS-17309. RBF: Fix Router Safemode check condition error.`

OK, I will submit future PRs in this format. Thank you. @slfan1989
Re: [PR] HDFS-17325. Doc: Fix the documentation of fs expunge command in FileSystemShell.md [hadoop]
slfan1989 commented on PR #6413:
URL: https://github.com/apache/hadoop/pull/6413#issuecomment-1878467995

@LiuGuH Thanks for the contribution! @ayushtkn Thanks for reviewing the code!
Re: [PR] HDFS-17325. Doc: Fix the documentation of fs expunge command in FileSystemShell.md [hadoop]
slfan1989 merged PR #6413:
URL: https://github.com/apache/hadoop/pull/6413
Re: [PR] HDFS-17309. RBF: Fix Router Safemode check condition error [hadoop]
slfan1989 commented on PR #6390:
URL: https://github.com/apache/hadoop/pull/6390#issuecomment-1878463127

@LiuGuH When submitting a PR, it is better to include the corresponding JIRA in the description.

Like: `JIRA: HDFS-17309. RBF: Fix Router Safemode check condition error.`
Re: [PR] HDFS-17309. RBF: Fix Router Safemode check condition error [hadoop]
slfan1989 commented on PR #6390:
URL: https://github.com/apache/hadoop/pull/6390#issuecomment-1878460283

@LiuGuH Thank you for your contribution! Merged into trunk. @goiri @simbadzina Thanks for the review!
Re: [PR] HDFS-17309. RBF: Fix Router Safemode check condition error [hadoop]
slfan1989 merged PR #6390:
URL: https://github.com/apache/hadoop/pull/6390
Re: [PR] HDFS-17325. Doc: Fix the documentation of fs expunge command in FileSystemShell.md [hadoop]
hadoop-yetus commented on PR #6413:
URL: https://github.com/apache/hadoop/pull/6413#issuecomment-1878458100

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 49s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 73m 14s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 25s | | trunk passed |
| +1 :green_heart: | shadedclient | 111m 44s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 54s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 1m 15s | | the patch passed |
| +1 :green_heart: | shadedclient | 38m 34s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. |
| | | 157m 0s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6413/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/6413 |
| Optional Tests | dupname asflicense mvnsite codespell detsecrets markdownlint |
| uname | Linux 2b34a2308049 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 823c24a46768c03d4737022fcf6c94d23915fb5c |
| Max. process+thread count | 529 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6413/1/console |
| versions | git=2.25.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
[PR] HDFS-17300. [SBN READ] Observer should throw ObserverRetryOnActiveException if stateid is always delayed with Active Namenode for a configured time [hadoop]
LiuGuH opened a new pull request, #6414:
URL: https://github.com/apache/hadoop/pull/6414

…ateid is always delayed with Active Namenode for a period of time

### Description of PR

Today, when an Observer NameNode is used and the client's stateid is ahead of the Observer's, the RPC server requeues the call into the call queue. If the EditLogTailer is broken or something else goes wrong, the call will be requeued again and again. The Observer should instead throw ObserverRetryOnActiveException if its stateid stays behind the Active NameNode's for a configured time.
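The decision this PR proposes can be sketched roughly as follows. Everything here is an illustrative assumption, not the actual HDFS code: the threshold constant, the method shape, and the exception stub (the real `ObserverRetryOnActiveException` lives in Hadoop's retry-policy classes) are all placeholders for the idea of "requeue within a window, then tell the client to retry on the Active":

```java
import java.time.Duration;
import java.time.Instant;

public class ObserverStateIdCheck {
    // Assumed configurable threshold; the name and default are illustrative.
    static final Duration MAX_STATEID_LAG_WAIT = Duration.ofSeconds(30);

    // Stand-in for org.apache.hadoop.ipc.ObserverRetryOnActiveException.
    static class ObserverRetryOnActiveException extends RuntimeException {
        ObserverRetryOnActiveException(String msg) { super(msg); }
    }

    /**
     * Decide what to do with a call whose client stateid may be ahead of the
     * Observer's last applied stateid.
     *
     * @return true if the call should be requeued and retried later
     * @throws ObserverRetryOnActiveException once the call has waited longer
     *         than the threshold, telling the client to fall back to the
     *         Active NameNode instead of requeueing forever
     */
    static boolean shouldRequeue(long clientStateId, long serverStateId,
                                 Instant firstSeen, Instant now) {
        if (clientStateId <= serverStateId) {
            return false; // Observer has caught up; serve the call.
        }
        if (Duration.between(firstSeen, now).compareTo(MAX_STATEID_LAG_WAIT) > 0) {
            // Edit log tailing appears stuck; stop the requeue loop.
            throw new ObserverRetryOnActiveException(
                "Observer stateid lagged behind client for too long");
        }
        return true; // Still within the window: requeue and retry.
    }
}
```

The key property is the third branch: without the time bound, a broken EditLogTailer turns every lagging call into an infinite requeue loop, which is the failure mode the PR description calls out.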
Re: [PR] HDFS-17300. [SBN READ] Observer should throw ObserverRetryOnActiveException if stateid is always delayed with Active Namenode for a configured time [hadoop]
LiuGuH closed pull request #6383: HDFS-17300. [SBN READ] Observer should throw ObserverRetryOnActiveException if stateid is always delayed with Active Namenode for a configured time
URL: https://github.com/apache/hadoop/pull/6383
Re: [PR] HDFS-17321. RBF: Add RouterAutoMsyncService for auto msync in Router [hadoop]
slfan1989 commented on PR #6404:
URL: https://github.com/apache/hadoop/pull/6404#issuecomment-1878445162

@simbadzina Thanks for helping review the code! @LiuGuH Thanks for the contribution! However, I think it is better not to introduce a new mechanism when the existing one works. Personally, I'm worried about the synchronization state of a single thread; it doesn't seem like a good design. If boundary cases occur (such as an NN restart or hang), how do we deal with them? If we are going to design a more complex mechanism to handle boundary issues, I think it is better to use the mechanisms that are already available.
Re: [PR] HDFS-17325 Doc: Fix the documentation of fs expunge command in FileSystemShell.md [hadoop]
slfan1989 commented on PR #6413:
URL: https://github.com/apache/hadoop/pull/6413#issuecomment-1878430249

LGTM