[jira] [Work logged] (HADOOP-17438) Increase docker memory limit in Jenkins
[ https://issues.apache.org/jira/browse/HADOOP-17438?focusedWorklogId=525884&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-525884 ]

ASF GitHub Bot logged work on HADOOP-17438:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 18/Dec/20 07:24
            Start Date: 18/Dec/20 07:24
    Worklog Time Spent: 10m
      Work Description: hadoop-yetus commented on pull request #2560:
URL: https://github.com/apache/hadoop/pull/2560#issuecomment-747919101
[GitHub] [hadoop] hadoop-yetus commented on pull request #2560: HADOOP-17438. Increase docker memory limit in Jenkins.
hadoop-yetus commented on pull request #2560:
URL: https://github.com/apache/hadoop/pull/2560#issuecomment-747919101

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:--------|
| +0 :ok: | reexec | 1m 7s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 13m 51s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 20m 44s | | trunk passed |
| +1 :green_heart: | compile | 20m 0s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | compile | 17m 29s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | mvnsite | 25m 5s | | trunk passed |
| +1 :green_heart: | shadedclient | 15m 39s | | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 7m 11s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 7m 31s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 32s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 27m 15s | | the patch passed |
| +1 :green_heart: | compile | 20m 47s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javac | 20m 47s | | the patch passed |
| +1 :green_heart: | compile | 20m 24s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | javac | 20m 24s | | the patch passed |
| +1 :green_heart: | mvnsite | 24m 45s | | the patch passed |
| +1 :green_heart: | shellcheck | 0m 0s | | There were no new shellcheck issues. |
| +1 :green_heart: | shelldocs | 0m 17s | | There were no new shelldocs issues. |
| +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 3s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | shadedclient | 19m 49s | | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 9m 10s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 9m 12s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
|||| _ Other Tests _ |
| -1 :x: | unit | 617m 25s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2560/1/artifact/out/patch-unit-root.txt) | root in the patch passed. |
| +1 :green_heart: | asflicense | 1m 51s | | The patch does not generate ASF License warnings. |
| | | 884m 20s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestDecommissionWithStriped |
| | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
| | hadoop.hdfs.TestErasureCodingPolicies |
| | hadoop.yarn.applications.distributedshell.TestDistributedShell |
| | hadoop.tools.dynamometer.TestDynamometerInfra |
| | hadoop.mapreduce.v2.app.rm.TestRMContainerAllocator |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2560/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2560 |
| Optional Tests | dupname asflicense shellcheck shelldocs compile javac javadoc mvninstall mvnsite unit shadedclient xml |
| uname | Linux 28251d6781c4 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 4c033bafa02 |
| Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2560/1/testReport/ |
| Max. process+thread count | 4052 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project hadoop-yarn-
[GitHub] [hadoop] qizhu-lucas commented on pull request #2563: YARN-10463: For Federation, we should support getApplicationAttemptRe…
qizhu-lucas commented on pull request #2563: URL: https://github.com/apache/hadoop/pull/2563#issuecomment-747910858 Fix checkstyle. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] qizhu-lucas opened a new pull request #2563: YARN-10463: For Federation, we should support getApplicationAttemptRe…
qizhu-lucas opened a new pull request #2563:
URL: https://github.com/apache/hadoop/pull/2563

The PR for: [YARN-10463](https://issues.apache.org/jira/browse/YARN-10463)
[jira] [Work logged] (HADOOP-17224) Install Intel ISA-L library in Dockerfile
[ https://issues.apache.org/jira/browse/HADOOP-17224?focusedWorklogId=525868&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-525868 ]

ASF GitHub Bot logged work on HADOOP-17224:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 18/Dec/20 05:22
            Start Date: 18/Dec/20 05:22
    Worklog Time Spent: 10m
      Work Description: tasanuma commented on pull request #2537:
URL: https://github.com/apache/hadoop/pull/2537#issuecomment-747874700

@amahussein I didn't consider it. Thanks for trying it on [HADOOP-17438](https://issues.apache.org/jira/browse/HADOOP-17438). Let's see the result. I'm also watching https://github.com/apache/hadoop/pull/2556, where Akira is trying to reduce the threadCount for unit tests. The results look very good so far.

Issue Time Tracking
-------------------
    Worklog Id: (was: 525868)
    Time Spent: 3.5h  (was: 3h 20m)

> Install Intel ISA-L library in Dockerfile
> -----------------------------------------
>
>                 Key: HADOOP-17224
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17224
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Takanobu Asanuma
>            Assignee: Takanobu Asanuma
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Currently, there is no isa-l library in the docker container, so Jenkins
> skips the native tests TestNativeRSRawCoder and TestNativeXORRawCoder.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
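For context, HADOOP-17224 is about adding the Intel ISA-L erasure-coding library to the CI Docker image so the native tests above stop being skipped. A minimal sketch of the kind of Dockerfile stanza such a change involves, building ISA-L from source; the version tag, package list, and install prefix here are illustrative assumptions, not taken from PR #2537:

```shell
# Hypothetical Dockerfile excerpt: build and install Intel ISA-L from source
# so the native RS/XOR raw coder tests can load libisal at runtime.
# v2.29.0 and the prefix/libdir choices are assumptions for illustration.
RUN apt-get -q update \
    && apt-get -q install -y --no-install-recommends nasm autoconf automake libtool \
    && git clone --depth 1 --branch v2.29.0 https://github.com/intel/isa-l.git /opt/isa-l \
    && cd /opt/isa-l \
    && ./autogen.sh \
    && ./configure --prefix=/usr --libdir=/usr/lib/x86_64-linux-gnu \
    && make -j"$(nproc)" \
    && make install
```

With the shared library on the default linker path, the Hadoop native build picks it up and the previously skipped coder tests run.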
[GitHub] [hadoop] Cosss7 commented on pull request #2559: HDFS-15734. [READ] DirectoryScanner#scan need not check StorageType.PROVIDED
Cosss7 commented on pull request #2559:
URL: https://github.com/apache/hadoop/pull/2559#issuecomment-747833201

Unrelated failures; the unit tests pass locally.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2562: HDFS-15737. Don't remove datanodes from outOfServiceNodeBlocks while checking in DatanodeAdminManager
hadoop-yetus commented on pull request #2562:
URL: https://github.com/apache/hadoop/pull/2562#issuecomment-747831900

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 11m 7s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ branch-2.10 Compile Tests _ |
| +1 :green_heart: | mvninstall | 16m 8s | branch-2.10 passed |
| +1 :green_heart: | compile | 0m 58s | branch-2.10 passed |
| +1 :green_heart: | checkstyle | 0m 38s | branch-2.10 passed |
| +1 :green_heart: | mvnsite | 1m 12s | branch-2.10 passed |
| +1 :green_heart: | javadoc | 1m 15s | branch-2.10 passed |
| +0 :ok: | spotbugs | 2m 59s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 2m 56s | branch-2.10 passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 0s | the patch passed |
| +1 :green_heart: | compile | 0m 54s | the patch passed |
| +1 :green_heart: | javac | 0m 54s | the patch passed |
| +1 :green_heart: | checkstyle | 0m 29s | the patch passed |
| +1 :green_heart: | mvnsite | 1m 2s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | javadoc | 1m 10s | the patch passed |
| +1 :green_heart: | findbugs | 3m 0s | the patch passed |
||| _ Other Tests _ |
| -1 :x: | unit | 62m 34s | hadoop-hdfs in the patch failed. |
| +1 :green_heart: | asflicense | 0m 41s | The patch does not generate ASF License warnings. |
| | | 108m 52s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
| | hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain |
| | hadoop.hdfs.TestRollingUpgrade |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2562/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2562 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 8cd430b18ff7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | branch-2.10 / 2c45e93 |
| Default Java | Oracle Corporation-1.7.0_95-b00 |
| unit | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2562/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2562/1/testReport/ |
| Max. process+thread count | 2255 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2562/1/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.0.1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2456: HDFS-15679. DFSOutputStream should not throw exception after closed
hadoop-yetus commented on pull request #2456:
URL: https://github.com/apache/hadoop/pull/2456#issuecomment-747826604

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:--------|
| +0 :ok: | reexec | 0m 42s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 13m 54s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 22m 20s | | trunk passed |
| +1 :green_heart: | compile | 4m 12s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | compile | 3m 53s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | checkstyle | 1m 0s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 17s | | trunk passed |
| +1 :green_heart: | shadedclient | 18m 21s | | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 33s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 2m 4s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +0 :ok: | spotbugs | 3m 16s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 5m 40s | | trunk passed |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 1m 24s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 2s | | the patch passed |
| +1 :green_heart: | compile | 4m 8s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javac | 4m 8s | | hadoop-hdfs-project-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 0 new + 771 unchanged - 5 fixed = 771 total (was 776) |
| +1 :green_heart: | compile | 3m 44s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | javac | 3m 44s | | hadoop-hdfs-project-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 0 new + 748 unchanged - 5 fixed = 748 total (was 753) |
| +1 :green_heart: | checkstyle | 0m 54s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 58s | | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 15m 2s | | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 24s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 1m 57s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | findbugs | 5m 35s | | the patch passed |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 19s | | hadoop-hdfs-client in the patch passed. |
| -1 :x: | unit | 98m 58s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2456/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 44s | | The patch does not generate ASF License warnings. |
| | | 217m 35s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
| | hadoop.hdfs.TestMaintenanceState |
| | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
| | hadoop.hdfs.TestMultipleNNPortQOP |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2456/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2456 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux d4e02005a993 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / c2672bb2342 |
| Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Multi-JDK versions |
[jira] [Work logged] (HADOOP-17438) Increase docker memory limit in Jenkins
[ https://issues.apache.org/jira/browse/HADOOP-17438?focusedWorklogId=525834&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-525834 ]

ASF GitHub Bot logged work on HADOOP-17438:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 18/Dec/20 02:22
            Start Date: 18/Dec/20 02:22
    Worklog Time Spent: 10m
      Work Description: aajisaka commented on pull request #2560:
URL: https://github.com/apache/hadoop/pull/2560#issuecomment-747825118

I asked the infrastructure team how much memory we can use: https://issues.apache.org/jira/browse/INFRA-21207

Issue Time Tracking
-------------------
    Worklog Id: (was: 525834)
    Time Spent: 40m  (was: 0.5h)

> Increase docker memory limit in Jenkins
> ---------------------------------------
>
>                 Key: HADOOP-17438
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17438
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: build, scripts, test, yetus
>            Reporter: Ahmed Hussein
>            Assignee: Ahmed Hussein
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> Yetus keeps failing with OOM.
>
> {code:bash}
> unable to create new native thread
> java.lang.OutOfMemoryError: unable to create new native thread
> 	at java.lang.Thread.start0(Native Method)
> 	at java.lang.Thread.start(Thread.java:717)
> 	at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
> 	at java.util.concurrent.ThreadPoolExecutor.ensurePrestart(ThreadPoolExecutor.java:1603)
> 	at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:334)
> 	at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:533)
> 	at org.apache.maven.surefire.booter.ForkedBooter.launchLastDitchDaemonShutdownThread(ForkedBooter.java:369)
> 	at org.apache.maven.surefire.booter.ForkedBooter.acknowledgedExit(ForkedBooter.java:333)
> 	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:145)
> 	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}
>
> This jira is to increase the memory limit from 20g to 22g.
> *Note: This is only a workaround to make things more productive. If this
> change reduces the frequency of the OOM failure, there must be a follow-up
> to profile the runtime and figure out which components are causing the
> docker container to run out of memory.*
> CC: [~aajisaka], [~elgoiri], [~weichiu], [~ebadger], [~tasanuma],
> [~iwasakims], [~ayushtkn], [~inigoiri]
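For context, the limit under discussion is the Docker memory cap that the Yetus precommit wrapper hands to `docker run`. A sketch of the kind of one-line change involved, assuming the cap is set through Yetus' `--dockermemlimit` option in the Jenkins pipeline; the exact file and surrounding code are assumptions, not taken from PR #2560:

```shell
# Hypothetical excerpt of a precommit configuration: raise the Docker memory
# cap passed to Yetus from 20g to 22g. --dockermemlimit is a real Apache Yetus
# precommit option; where exactly it is set in Hadoop's build is assumed here.
YETUS_ARGS+=("--dockermemlimit=22g")   # previously: --dockermemlimit=20g

# Roughly equivalent direct docker invocation, for illustration only:
# docker run --memory=22g <build-image> <build-command>
```

Note the OOM above ("unable to create new native thread") is about exhausting threads/processes rather than heap, so a larger container cap only buys headroom; the follow-up profiling the issue calls for is still needed.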
[jira] [Updated] (HADOOP-17435) Remove Hadoop 2.9.2 from the download page
[ https://issues.apache.org/jira/browse/HADOOP-17435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HADOOP-17435:
-----------------------------------
    Fix Version/s: asf-site
       Resolution: Fixed
           Status: Resolved  (was: Patch Available)

Merged the PR and removed https://dist.apache.org/repos/dist/release/hadoop/common/hadoop-2.9.2/

> Remove Hadoop 2.9.2 from the download page
> ------------------------------------------
>
>                 Key: HADOOP-17435
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17435
>             Project: Hadoop Common
>          Issue Type: Task
>          Components: website
>            Reporter: Akira Ajisaka
>            Assignee: Akira Ajisaka
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: asf-site
>
> 2.9.x is EoL:
> https://cwiki.apache.org/confluence/display/HADOOP/EOL+%28End-of-life%29+Release+Branches
> Let's remove 2.9.2 from https://hadoop.apache.org/releases.html
[GitHub] [hadoop] NickyYe opened a new pull request #2562: HDFS-15737. Don't remove datanodes from outOfServiceNodeBlocks while checking in DatanodeAdminManager
NickyYe opened a new pull request #2562:
URL: https://github.com/apache/hadoop/pull/2562

https://issues.apache.org/jira/browse/HDFS-15737

## NOTICE

Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[GitHub] [hadoop] hadoop-yetus commented on pull request #2561: HDFS-15737. Don't remove datanodes from outOfServiceNodeBlocks while checking in DatanodeAdminManager
hadoop-yetus commented on pull request #2561:
URL: https://github.com/apache/hadoop/pull/2561#issuecomment-747798322

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| +0 :ok: | reexec | 0m 0s | Docker mode activated. |
| -1 :x: | patch | 0m 10s | https://github.com/apache/hadoop/pull/2561 does not apply to branch-2.10. Rebase required? Wrong branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

| Subsystem | Report/Notes |
|----------:|:-------------|
| GITHUB PR | https://github.com/apache/hadoop/pull/2561 |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2561/1/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] NickyYe closed pull request #2561: HDFS-15737. Don't remove datanodes from outOfServiceNodeBlocks while checking in DatanodeAdminManager
NickyYe closed pull request #2561:
URL: https://github.com/apache/hadoop/pull/2561
[jira] [Work logged] (HADOOP-17414) Magic committer files don't have the count of bytes written collected by spark
[ https://issues.apache.org/jira/browse/HADOOP-17414?focusedWorklogId=525825&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-525825 ] ASF GitHub Bot logged work on HADOOP-17414: --- Author: ASF GitHub Bot Created on: 18/Dec/20 00:50 Start Date: 18/Dec/20 00:50 Worklog Time Spent: 10m Work Description: dongjoon-hyun commented on pull request #2530: URL: https://github.com/apache/hadoop/pull/2530#issuecomment-747797852 Gentle ping~ This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 525825) Time Spent: 2h 40m (was: 2.5h) > Magic committer files don't have the count of bytes written collected by spark > -- > > Key: HADOOP-17414 > URL: https://issues.apache.org/jira/browse/HADOOP-17414 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 2h 40m > Remaining Estimate: 0h > > The spark statistics tracking doesn't correctly assess the size of the > uploaded files as it only calls getFileStatus on the zero byte objects -not > the yet-to-manifest files. Which, given they don't exist yet, isn't easy to > do. > Solution: > * Add getXAttr and listXAttr API calls to S3AFileSystem > * Return all S3 object headers as XAttr attributes prefixed "header." That's > custom and standard (e.g header.Content-Length). > The setXAttr call isn't implemented, so for correctness the FS doesn't > declare its support for the API in hasPathCapability(). > The magic commit file write sets the custom header > set the length of the data final data in the header > x-hadoop-s3a-magic-data-length in the marker file. 
> A matching patch in Spark will look for the XAttr > "header.x-hadoop-s3a-magic-data-length" when the file > being probed for output data is zero byte long. > As a result, the job tracking statistics will report the > bytes written but yet to be manifest. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
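The fallback described in the issue above can be sketched as follows. This is an illustrative Python simulation, not the actual Hadoop or Spark code: the `FakeFS` class and `bytes_written` helper are invented for the sketch, while the XAttr name `header.x-hadoop-s3a-magic-data-length` comes from the JIRA description.

```python
# Sketch only: how a committer-aware client could fall back to the
# magic-commit length header when the visible marker object is zero bytes.
MAGIC_LEN_XATTR = "header.x-hadoop-s3a-magic-data-length"

class FakeFS:
    """Minimal stand-in for an S3A-like filesystem exposing getXAttr."""
    def __init__(self, files):
        # files: path -> (visible_length, xattrs_dict)
        self._files = files

    def get_file_length(self, path):
        return self._files[path][0]

    def get_xattr(self, path, name):
        return self._files[path][1].get(name)

def bytes_written(fs, path):
    """Report bytes written, preferring the magic-commit header for
    zero-byte marker files whose real data is yet to be manifest."""
    length = fs.get_file_length(path)
    if length == 0:
        pending = fs.get_xattr(path, MAGIC_LEN_XATTR)
        if pending is not None:
            return int(pending)
    return length

fs = FakeFS({
    "out/part-0000": (0, {MAGIC_LEN_XATTR: "4194304"}),  # magic marker file
    "out/part-0001": (1024, {}),                         # normal file
})
print(bytes_written(fs, "out/part-0000"))  # 4194304
print(bytes_written(fs, "out/part-0001"))  # 1024
```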
[GitHub] [hadoop] NickyYe opened a new pull request #2561: HDFS-15737. Don't remove datanodes from outOfServiceNodeBlocks while checking in DatanodeAdminManager
NickyYe opened a new pull request #2561: URL: https://github.com/apache/hadoop/pull/2561 https://issues.apache.org/jira/browse/HDFS-15737 ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] goiri merged pull request #2554: YARN-10536. Client in distributedShell swallows interrupt exceptions
goiri merged pull request #2554: URL: https://github.com/apache/hadoop/pull/2554 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] jbrennan333 commented on pull request #2456: HDFS-15679. DFSOutputStream should not throw exception after closed
jbrennan333 commented on pull request #2456: URL: https://github.com/apache/hadoop/pull/2456#issuecomment-747754051 I think the unit test changes will make them pass, but it seems like those tests will no longer be testing the code in closeImpl() that was added by [HDFS-12612]. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] amahussein edited a comment on pull request #2456: HDFS-15679. DFSOutputStream should not throw exception after closed
amahussein edited a comment on pull request #2456: URL: https://github.com/apache/hadoop/pull/2456#issuecomment-747750687 @eddyxu, @jbrennan333, can you please take a look at the JUnit fixes? I found that the calls to `close()` in `TestDFSStripedOutputStreamWithFailure#testCloseWithExceptionsInStreamer` were inconsistent with the description of the code commit. Only the first `close()` should be expected to throw an exception, right? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] amahussein commented on pull request #2456: HDFS-15679. DFSOutputStream should not throw exception after closed
amahussein commented on pull request #2456: URL: https://github.com/apache/hadoop/pull/2456#issuecomment-747750687 @eddyxu, can you please take a look at the JUnit fixes? I found that the calls to `close()` in `TestDFSStripedOutputStreamWithFailure#testCloseWithExceptionsInStreamer` were inconsistent with the description of the code commit. Only the first `close()` should be expected to throw an exception, right? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
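The "throw once" contract under discussion can be sketched as follows. This is an illustrative Python sketch, not the DFSOutputStream implementation: the first `close()` surfaces any pending streamer failure, while later calls on an already-closed stream are silent no-ops.

```python
class Stream:
    """Sketch of an idempotent close(): the first call raises any pending
    error; subsequent calls do nothing. Names here are illustrative."""
    def __init__(self, pending_error=None):
        self._pending_error = pending_error
        self._closed = False

    def close(self):
        if self._closed:
            return  # already closed: do not re-throw
        self._closed = True
        if self._pending_error is not None:
            err, self._pending_error = self._pending_error, None
            raise err

s = Stream(pending_error=IOError("streamer failed"))
try:
    s.close()   # first close: raises the buffered streamer failure
except IOError:
    pass
s.close()       # second close: no exception
```

A unit test written against this contract would assert the exception on the first `close()` only, which is what the JUnit fixes above aim for.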
[jira] [Work logged] (HADOOP-17271) S3A statistics to support IOStatistics
[ https://issues.apache.org/jira/browse/HADOOP-17271?focusedWorklogId=525794&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-525794 ] ASF GitHub Bot logged work on HADOOP-17271: --- Author: ASF GitHub Bot Created on: 17/Dec/20 22:00 Start Date: 17/Dec/20 22:00 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2553: URL: https://github.com/apache/hadoop/pull/2553#issuecomment-747727830 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 8s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 4s | | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 56 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 13m 44s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 26m 12s | | trunk passed | | +1 :green_heart: | compile | 24m 12s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | compile | 18m 42s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | checkstyle | 2m 56s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 25s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 11s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 2m 19s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 2m 57s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | spotbugs | 1m 18s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 4m 57s | | trunk passed | | -0 :warning: | patch | 1m 40s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 0s | | the patch passed | | +1 :green_heart: | compile | 19m 15s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javac | 19m 15s | | root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 0 new + 2044 unchanged - 1 fixed = 2044 total (was 2045) | | +1 :green_heart: | compile | 17m 21s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | javac | 17m 21s | | root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 0 new + 1940 unchanged - 1 fixed = 1940 total (was 1941) | | -0 :warning: | checkstyle | 2m 45s | [/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2553/3/artifact/out/diff-checkstyle-root.txt) | root: The patch generated 19 new + 272 unchanged - 26 fixed = 291 total (was 298) | | +1 :green_heart: | mvnsite | 3m 18s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 15m 12s | | patch has no errors when building and testing our client artifacts. 
| | -1 :x: | javadoc | 1m 3s | [/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2553/3/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt) | hadoop-common in the patch failed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. | | -1 :x: | javadoc | 0m 44s | [/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2553/3/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt) | hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 1 new + 88 unchange
[GitHub] [hadoop] hadoop-yetus commented on pull request #2553: HADOOP-17271. S3A to support IOStatistics
hadoop-yetus commented on pull request #2553: URL: https://github.com/apache/hadoop/pull/2553#issuecomment-747727830 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 8s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 4s | | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 56 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 13m 44s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 26m 12s | | trunk passed | | +1 :green_heart: | compile | 24m 12s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | compile | 18m 42s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | checkstyle | 2m 56s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 25s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 11s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 2m 19s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 2m 57s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | spotbugs | 1m 18s | | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 4m 57s | | trunk passed | | -0 :warning: | patch | 1m 40s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 0s | | the patch passed | | +1 :green_heart: | compile | 19m 15s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javac | 19m 15s | | root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 0 new + 2044 unchanged - 1 fixed = 2044 total (was 2045) | | +1 :green_heart: | compile | 17m 21s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | javac | 17m 21s | | root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 0 new + 1940 unchanged - 1 fixed = 1940 total (was 1941) | | -0 :warning: | checkstyle | 2m 45s | [/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2553/3/artifact/out/diff-checkstyle-root.txt) | root: The patch generated 19 new + 272 unchanged - 26 fixed = 291 total (was 298) | | +1 :green_heart: | mvnsite | 3m 18s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 15m 12s | | patch has no errors when building and testing our client artifacts. | | -1 :x: | javadoc | 1m 3s | [/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2553/3/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt) | hadoop-common in the patch failed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. 
| | -1 :x: | javadoc | 0m 44s | [/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2553/3/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt) | hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 1 new + 88 unchanged - 0 fixed = 89 total (was 88) | | -1 :x: | findbugs | 1m 25s | [/new-findbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2553/3/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html) | hadoop-tools/hadoop-aws generated 8 new + 0 unchanged - 0 fixed = 8 total (was 0) | _ Other Tests _ | | +1 :green_heart: | unit | 9m 43s | | hadoop-common in the patch
[GitHub] [hadoop] amahussein commented on pull request #2456: HDFS-15679. DFSOutputStream should not throw exception after closed
amahussein commented on pull request #2456: URL: https://github.com/apache/hadoop/pull/2456#issuecomment-747719615 Thanks @jbrennan333 , the failing unit tests expect the exception to be thrown more than once. I will need to go through those test units to fix them. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] amahussein commented on pull request #2554: YARN-10536. Client in distributedShell swallows interrupt exceptions
amahussein commented on pull request #2554: URL: https://github.com/apache/hadoop/pull/2554#issuecomment-747702575 > I don't think we broke the TestDistributedShell with this, did we? No, we didn't. Those two test cases were broken for almost a year. See [YARN-10040](https://issues.apache.org/jira/browse/YARN-10040). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] goiri commented on pull request #2554: YARN-10536. Client in distributedShell swallows interrupt exceptions
goiri commented on pull request #2554: URL: https://github.com/apache/hadoop/pull/2554#issuecomment-747679024 I don't think we broke the TestDistributedShell with this, did we? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2554: YARN-10536. Client in distributedShell swallows interrupt exceptions
hadoop-yetus commented on pull request #2554: URL: https://github.com/apache/hadoop/pull/2554#issuecomment-747676461 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 39s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 25s | | trunk passed | | +1 :green_heart: | compile | 0m 30s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | compile | 0m 27s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | checkstyle | 0m 28s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 31s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 19s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 0m 25s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | spotbugs | 0m 46s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 0m 44s | | trunk passed | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 25s | | the patch passed | | +1 :green_heart: | compile | 0m 21s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javac | 0m 21s | | the patch passed | | +1 :green_heart: | compile | 0m 20s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | javac | 0m 20s | | the patch passed | | +1 :green_heart: | checkstyle | 0m 19s | | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell: The patch generated 0 new + 150 unchanged - 5 fixed = 150 total (was 155) | | +1 :green_heart: | mvnsite | 0m 22s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 14m 45s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | findbugs | 0m 45s | | the patch passed | _ Other Tests _ | | -1 :x: | unit | 24m 45s | [/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2554/3/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt) | hadoop-yarn-applications-distributedshell in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | | The patch does not generate ASF License warnings. 
| | | | 98m 44s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.yarn.applications.distributedshell.TestDistributedShell | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2554/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2554 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux bc6e2f150a4f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / c2672bb2342 | | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2554/3/testReport/ | | Max. process+thread count | 780 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-di
[jira] [Work logged] (HADOOP-17438) Increase docker memory limit in Jenkins
[ https://issues.apache.org/jira/browse/HADOOP-17438?focusedWorklogId=525697&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-525697 ] ASF GitHub Bot logged work on HADOOP-17438: --- Author: ASF GitHub Bot Created on: 17/Dec/20 19:52 Start Date: 17/Dec/20 19:52 Worklog Time Spent: 10m Work Description: ericbadger commented on pull request #2560: URL: https://github.com/apache/hadoop/pull/2560#issuecomment-747663571 It would be nice to know what's tying up all of our memory. Because 20 GB is a lot for us to be using for unit tests This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 525697) Time Spent: 0.5h (was: 20m) > Increase docker memory limit in Jenkins > --- > > Key: HADOOP-17438 > URL: https://issues.apache.org/jira/browse/HADOOP-17438 > Project: Hadoop Common > Issue Type: Bug > Components: build, scripts, test, yetus >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > Yetus keeps failing with OOM. 
> > {code:bash} > unable to create new native thread > java.lang.OutOfMemoryError: unable to create new native thread > at java.lang.Thread.start0(Native Method) > at java.lang.Thread.start(Thread.java:717) > at > java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957) > at > java.util.concurrent.ThreadPoolExecutor.ensurePrestart(ThreadPoolExecutor.java:1603) > at > java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:334) > at > java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:533) > at > org.apache.maven.surefire.booter.ForkedBooter.launchLastDitchDaemonShutdownThread(ForkedBooter.java:369) > at > org.apache.maven.surefire.booter.ForkedBooter.acknowledgedExit(ForkedBooter.java:333) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:145) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {code} > > This jira to increase the memory limit from 20g to 22g. > *Note: This is only a workaround to get things more productive. If this > change reduces the frequency of the OOM failure, there must be a follow-up > profile the runtime to figure out which components are causing the docker to > run out of memory.* > CC: [~aajisaka], [~elgoiri], [~weichiu], [~ebadger], [~tasanuma], > [~iwasakims], [~ayushtkn], [~inigoiri] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
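The `OutOfMemoryError: unable to create new native thread` in the stack trace above is not a heap exhaustion; it means the JVM could not obtain native memory (or an OS thread slot) for a new thread's stack. A rough back-of-envelope, with assumed numbers (the 1 MB default stack mirrors a typical `-Xss1m`; the thread count is purely illustrative):

```python
def native_stack_footprint_mb(threads, stack_size_kb=1024):
    """Rough native memory consumed by thread stacks alone.
    Both defaults here are assumptions for illustration."""
    return threads * stack_size_kb / 1024.0

# ~5000 concurrent test threads at 1 MB of stack each is ~5 GB of native
# memory on top of every forked JVM's heap and metaspace; that is one way
# a 20 GB container can fill up without any single test hitting a heap OOM.
print(native_stack_footprint_mb(5000))  # 5000.0
```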
[jira] [Work logged] (HADOOP-17438) Increase docker memory limit in Jenkins
[ https://issues.apache.org/jira/browse/HADOOP-17438?focusedWorklogId=525695&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-525695 ] ASF GitHub Bot logged work on HADOOP-17438: --- Author: ASF GitHub Bot Created on: 17/Dec/20 19:51 Start Date: 17/Dec/20 19:51 Worklog Time Spent: 10m Work Description: ericbadger commented on pull request #2560: URL: https://github.com/apache/hadoop/pull/2560#issuecomment-747662919 Jeez, are we really running out of memory in a 20 GB container? That seems insane to me This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 525695) Time Spent: 20m (was: 10m) > Increase docker memory limit in Jenkins > --- > > Key: HADOOP-17438 > URL: https://issues.apache.org/jira/browse/HADOOP-17438 > Project: Hadoop Common > Issue Type: Bug > Components: build, scripts, test, yetus >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > Yetus keeps failing with OOM. 
> > {code:bash} > unable to create new native thread > java.lang.OutOfMemoryError: unable to create new native thread > at java.lang.Thread.start0(Native Method) > at java.lang.Thread.start(Thread.java:717) > at > java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957) > at > java.util.concurrent.ThreadPoolExecutor.ensurePrestart(ThreadPoolExecutor.java:1603) > at > java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:334) > at > java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:533) > at > org.apache.maven.surefire.booter.ForkedBooter.launchLastDitchDaemonShutdownThread(ForkedBooter.java:369) > at > org.apache.maven.surefire.booter.ForkedBooter.acknowledgedExit(ForkedBooter.java:333) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:145) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {code} > > This jira to increase the memory limit from 20g to 22g. > *Note: This is only a workaround to get things more productive. If this > change reduces the frequency of the OOM failure, there must be a follow-up > profile the runtime to figure out which components are causing the docker to > run out of memory.* > CC: [~aajisaka], [~elgoiri], [~weichiu], [~ebadger], [~tasanuma], > [~iwasakims], [~ayushtkn], [~inigoiri] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] amahussein commented on a change in pull request #2554: YARN-10536. Client in distributedShell swallows interrupt exceptions
amahussein commented on a change in pull request #2554: URL: https://github.com/apache/hadoop/pull/2554#discussion_r545314569 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java ## @@ -1139,34 +1147,38 @@ private boolean monitorApplication(ApplicationId appId) FinalApplicationStatus dsStatus = report.getFinalApplicationStatus(); if (YarnApplicationState.FINISHED == state) { if (FinalApplicationStatus.SUCCEEDED == dsStatus) { - LOG.info("Application has completed successfully. Breaking monitoring loop"); - return true; -} -else { - LOG.info("Application did finished unsuccessfully." - + " YarnState=" + state.toString() + ", DSFinalStatus=" + dsStatus.toString() - + ". Breaking monitoring loop"); - return false; + LOG.info("Application has completed successfully. " + + "Breaking monitoring loop"); + res = true; +} else { + LOG.info("Application did finished unsuccessfully. " + + "YarnState={}, DSFinalStatus={}. Breaking monitoring loop", + state.toString(), dsStatus.toString()); } - } - else if (YarnApplicationState.KILLED == state +break; + } else if (YarnApplicationState.KILLED == state || YarnApplicationState.FAILED == state) { -LOG.info("Application did not finish." -+ " YarnState=" + state.toString() + ", DSFinalStatus=" + dsStatus.toString() -+ ". Breaking monitoring loop"); -return false; +LOG.info("Application did not finish. YarnState={}, DSFinalStatus={}. " ++ "Breaking monitoring loop", +state.toString(), dsStatus.toString()); Review comment: Thanks @goiri ! I added a commit to get rid of the "toString()". This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
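The review above moves the distributed-shell client from string concatenation to SLF4J `{}` placeholders. The same idea in Python's standard `logging` module (an analogue of the Java change, not the Hadoop code): the message arguments are only formatted if the record is actually emitted, so a disabled log level costs almost nothing.

```python
import logging

class Expensive:
    """Object whose string form is costly to build; counts conversions."""
    def __init__(self):
        self.calls = 0
    def __str__(self):
        self.calls += 1
        return "expensive"

log = logging.getLogger("demo")
log.setLevel(logging.INFO)
log.addHandler(logging.StreamHandler())
log.propagate = False

obj = Expensive()
# Lazy: %s substitution happens only when the record is emitted.
log.debug("state=%s", obj)  # DEBUG disabled: __str__ never runs
log.info("state=%s", obj)   # INFO enabled: __str__ runs once
print(obj.calls)  # 1
```

With eager concatenation (`"state=" + str(obj)` or Java's `"state=" + obj`), the conversion runs even for suppressed levels, which is what the review comment is eliminating.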
[jira] [Work logged] (HADOOP-17271) S3A statistics to support IOStatistics
[ https://issues.apache.org/jira/browse/HADOOP-17271?focusedWorklogId=525667&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-525667 ] ASF GitHub Bot logged work on HADOOP-17271: --- Author: ASF GitHub Bot Created on: 17/Dec/20 17:57 Start Date: 17/Dec/20 17:57 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2553: URL: https://github.com/apache/hadoop/pull/2553#issuecomment-747600304 Narrator: it was the mock. It just took a long time to debug the fact that durationTracking of deleteObject was triggering an NPE, which, in commit cleanup, was being swallowed. Joy. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 525667) Time Spent: 9h (was: 8h 50m) > S3A statistics to support IOStatistics > -- > > Key: HADOOP-17271 > URL: https://issues.apache.org/jira/browse/HADOOP-17271 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 9h > Remaining Estimate: 0h > > S3A to rework statistics with > * API + Implementation split of the interfaces used by subcomponents when > reporting stats > * S3A Instrumentation to implement all the interfaces > * streams, etc to all implement IOStatisticsSources and serve to callers > * Add some tracking of durations of remote requests -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
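The debugging pain described in the comment above, an NPE raised inside duration tracking and then silently swallowed by commit cleanup, is a classic hazard of broad catch blocks in cleanup paths. A minimal sketch of why recording or logging the suppressed error matters (illustrative only, not the S3A committer code):

```python
import logging

log = logging.getLogger("cleanup")
suppressed = []

def cleanup(actions):
    """Run best-effort cleanup actions; record failures rather than hide them."""
    for action in actions:
        try:
            action()
        except Exception as e:  # broad on purpose: cleanup is best-effort
            # Swallowing silently here is what made the NPE hard to find;
            # collecting and logging keeps cleanup best-effort but debuggable.
            suppressed.append(e)
            log.warning("cleanup step failed: %r", e)

def bad_step():
    raise AttributeError("tracker was None")  # stand-in for the swallowed NPE

cleanup([bad_step, lambda: None])
print(len(suppressed))  # 1
```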
[GitHub] [hadoop] steveloughran commented on pull request #2553: HADOOP-17271. S3A to support IOStatistics
steveloughran commented on pull request #2553: URL: https://github.com/apache/hadoop/pull/2553#issuecomment-747600304 Narrator: it was the mock. It just took a long time to debug the fact that durationTracking of deleteObject was triggering an NPE, which, in commit cleanup, was being swallowed. Joy.
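The failure mode described in the comment above (a mock triggers an NPE, and a catch-all in cleanup hides it) is a common testing trap. A minimal pure-Java sketch with hypothetical names, not the actual S3A code:

```java
// Illustrative only: hypothetical interfaces, not the real S3A implementation.
public class SwallowedNpeDemo {
    /** Stand-in for a per-operation duration tracker. */
    interface DurationTracker { void close(); }
    /** Stand-in for the statistics facade a mock might not fully stub. */
    interface Stats { DurationTracker trackDuration(String op); }

    /** A "deleteObject" wrapped in duration tracking, as the comment describes. */
    static void deleteObject(Stats stats) {
        DurationTracker t = stats.trackDuration("delete"); // an unstubbed mock returns null
        try {
            // ... issue the delete request ...
        } finally {
            t.close(); // NPE here when trackDuration returned null
        }
    }

    /** Commit cleanup that swallows every exception, hiding the NPE. */
    static String cleanup(Stats stats) {
        try {
            deleteObject(stats);
            return "ok";
        } catch (Exception e) {
            // The real bug vanishes here; the test fails later, far from the cause
            return "swallowed: " + e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        Stats mock = op -> null; // a stub that was never wired up
        System.out.println(cleanup(mock)); // prints "swallowed: NullPointerException"
    }
}
```

Logging (rather than silently swallowing) in the cleanup path is what makes this class of bug debuggable.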
[GitHub] [hadoop] jbrennan333 merged pull request #2511: HDFS-15704. Mitigate lease monitor's rapid infinite loop.
jbrennan333 merged pull request #2511: URL: https://github.com/apache/hadoop/pull/2511
[GitHub] [hadoop] hadoop-yetus commented on pull request #2559: HDFS-15734. [READ] DirectoryScanner#scan need not check StorageType.PROVIDED
hadoop-yetus commented on pull request #2559: URL: https://github.com/apache/hadoop/pull/2559#issuecomment-747582478 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 52s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 21s | | trunk passed | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | compile | 1m 18s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | checkstyle | 0m 48s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 21s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 19s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 54s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 1m 29s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | spotbugs | 3m 16s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 3m 14s | | trunk passed | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 14s | | the patch passed | | +1 :green_heart: | compile | 1m 12s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javac | 1m 12s | | the patch passed | | +1 :green_heart: | compile | 1m 6s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | javac | 1m 6s | | the patch passed | | +1 :green_heart: | checkstyle | 0m 43s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 16s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 46s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 50s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 1m 24s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | findbugs | 3m 16s | | the patch passed | _ Other Tests _ | | -1 :x: | unit | 102m 54s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2559/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 44s | | The patch does not generate ASF License warnings. 
| | | | 195m 36s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestHAAppend | | | hadoop.hdfs.TestFileChecksum | | | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy | | | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped | | | hadoop.hdfs.TestStateAlignmentContextWithHA | | | hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2559/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2559 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux aaa364a75a66 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 4c033bafa02 | | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2559/1/testReport/ | | Max. process+thread count | 4337 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-
[GitHub] [hadoop] goiri commented on a change in pull request #2554: YARN-10536. Client in distributedShell swallows interrupt exceptions
goiri commented on a change in pull request #2554: URL: https://github.com/apache/hadoop/pull/2554#discussion_r545265205 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/Client.java ## @@ -1139,34 +1147,38 @@ private boolean monitorApplication(ApplicationId appId) FinalApplicationStatus dsStatus = report.getFinalApplicationStatus(); if (YarnApplicationState.FINISHED == state) { if (FinalApplicationStatus.SUCCEEDED == dsStatus) { - LOG.info("Application has completed successfully. Breaking monitoring loop"); - return true; -} -else { - LOG.info("Application did finished unsuccessfully." - + " YarnState=" + state.toString() + ", DSFinalStatus=" + dsStatus.toString() - + ". Breaking monitoring loop"); - return false; + LOG.info("Application has completed successfully. " + + "Breaking monitoring loop"); + res = true; +} else { + LOG.info("Application did finished unsuccessfully. " + + "YarnState={}, DSFinalStatus={}. Breaking monitoring loop", + state.toString(), dsStatus.toString()); } - } - else if (YarnApplicationState.KILLED == state +break; + } else if (YarnApplicationState.KILLED == state || YarnApplicationState.FAILED == state) { -LOG.info("Application did not finish." -+ " YarnState=" + state.toString() + ", DSFinalStatus=" + dsStatus.toString() -+ ". Breaking monitoring loop"); -return false; +LOG.info("Application did not finish. YarnState={}, DSFinalStatus={}. " ++ "Breaking monitoring loop", +state.toString(), dsStatus.toString()); Review comment: No need for toString() This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
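The review note above ("No need for toString()") reflects how {}-style parameterized logging works: the formatter applies String.valueOf to each argument, which calls toString() itself and also tolerates null. A dependency-free sketch of the equivalent behavior (no SLF4J required; the formatter here is a simplified stand-in):

```java
public class ToStringDemo {
    enum YarnApplicationState { FINISHED, KILLED, FAILED }

    /** Mimics what a {}-style message formatter does with its argument. */
    static String format(String template, Object arg) {
        // String.valueOf invokes toString() itself, and maps null to "null"
        return template.replace("{}", String.valueOf(arg));
    }

    public static void main(String[] args) {
        YarnApplicationState state = YarnApplicationState.KILLED;
        // Passing the object directly is equivalent to passing state.toString()...
        System.out.println(format("YarnState={}", state));  // YarnState=KILLED
        // ...but survives a null argument, where an explicit .toString() would NPE
        System.out.println(format("YarnState={}", null));   // YarnState=null
    }
}
```

So dropping the explicit toString() is not just shorter; it removes a potential NullPointerException from the logging statement itself.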
[GitHub] [hadoop] goiri commented on pull request #2556: HDFS-15731. Reduce threadCount for unit tests to reduce the memory usage
goiri commented on pull request #2556: URL: https://github.com/apache/hadoop/pull/2556#issuecomment-747580696 This looks surprisingly successful. In addition, the runtime (8 hours) doesn't look that bad; for comparison, how many hours was it right before? I see some with 7 hours and some.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2556: HDFS-15731. Reduce threadCount for unit tests to reduce the memory usage
hadoop-yetus commented on pull request #2556: URL: https://github.com/apache/hadoop/pull/2556#issuecomment-747557355 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 37s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 4m 5s | | Maven dependency ordering for branch | | -1 :x: | mvninstall | 36m 47s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2556/2/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. | | +1 :green_heart: | compile | 27m 37s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | compile | 23m 11s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | checkstyle | 3m 11s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 7s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 46s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 2m 27s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 2m 58s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | spotbugs | 1m 22s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +0 :ok: | findbugs | 0m 32s | | branch/hadoop-project no findbugs output file (findbugsXml.xml) | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 26s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 59s | | the patch passed | | +1 :green_heart: | compile | 21m 41s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javac | 21m 41s | | the patch passed | | +1 :green_heart: | compile | 22m 31s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | javac | 22m 31s | | the patch passed | | +1 :green_heart: | checkstyle | 3m 13s | | the patch passed | | +1 :green_heart: | mvnsite | 3m 3s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 19m 58s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 2m 38s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 3m 40s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | findbugs | 0m 32s | | hadoop-project has no data from findbugs | _ Other Tests _ | | +1 :green_heart: | unit | 0m 28s | | hadoop-project in the patch passed. | | -1 :x: | unit | 232m 39s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2556/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 17m 46s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | | The patch does not generate ASF License warnings. 
| | | | 470m 59s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped | | | hadoop.hdfs.server.namenode.TestEditLogRace | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2556/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2556 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 8face924dfa3 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 4c033bafa02 | | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18
[jira] [Work logged] (HADOOP-17438) Increase docker memory limit in Jenkins
[ https://issues.apache.org/jira/browse/HADOOP-17438?focusedWorklogId=525639&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-525639 ] ASF GitHub Bot logged work on HADOOP-17438: --- Author: ASF GitHub Bot Created on: 17/Dec/20 16:39 Start Date: 17/Dec/20 16:39 Worklog Time Spent: 10m Work Description: amahussein opened a new pull request #2560: URL: https://github.com/apache/hadoop/pull/2560 This change to increase the memory limit from 20g to 22g. The last commit is needed to trigger the tests and increase the timeout. It should be skipped before merging. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 525639) Remaining Estimate: 0h Time Spent: 10m > Increase docker memory limit in Jenkins > --- > > Key: HADOOP-17438 > URL: https://issues.apache.org/jira/browse/HADOOP-17438 > Project: Hadoop Common > Issue Type: Bug > Components: build, scripts, test, yetus >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > Yetus keeps failing with OOM. 
> > {code:bash} > unable to create new native thread > java.lang.OutOfMemoryError: unable to create new native thread > at java.lang.Thread.start0(Native Method) > at java.lang.Thread.start(Thread.java:717) > at > java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957) > at > java.util.concurrent.ThreadPoolExecutor.ensurePrestart(ThreadPoolExecutor.java:1603) > at > java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:334) > at > java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:533) > at > org.apache.maven.surefire.booter.ForkedBooter.launchLastDitchDaemonShutdownThread(ForkedBooter.java:369) > at > org.apache.maven.surefire.booter.ForkedBooter.acknowledgedExit(ForkedBooter.java:333) > at > org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:145) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) > {code} > > This jira to increase the memory limit from 20g to 22g. > *Note: This is only a workaround to get things more productive. If this > change reduces the frequency of the OOM failure, there must be a follow-up > profile the runtime to figure out which components are causing the docker to > run out of memory.* > CC: [~aajisaka], [~elgoiri], [~weichiu], [~ebadger], [~tasanuma], > [~iwasakims], [~ayushtkn], [~inigoiri] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
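For context on the fix above: "unable to create new native thread" typically means the container hit its memory cap while forked surefire JVMs were spawning threads (each thread needs native stack space), not that the Java heap was full. The actual change lives in Hadoop's CI scripts (a Yetus docker memory option), which this thread does not show; the plain `docker run` equivalent below is an assumed illustration only, and the image and command names are hypothetical:

```shell
# Illustration only: not the actual Hadoop CI invocation.
# A hard container memory cap of 22g (raised from 20g, per the JIRA).
# When the cap is exhausted, pthread_create fails and the JVM reports
# "java.lang.OutOfMemoryError: unable to create new native thread".
docker run --rm \
  --memory=22g \
  --memory-swap=22g \
  my-build-image mvn test
```

As the JIRA note says, raising the cap only buys headroom; profiling which test modules consume the memory is the real follow-up.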
[jira] [Updated] (HADOOP-17438) Increase docker memory limit in Jenkins
[ https://issues.apache.org/jira/browse/HADOOP-17438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-17438: Labels: pull-request-available (was: )
[GitHub] [hadoop] amahussein opened a new pull request #2560: HADOOP-17438. Increase docker memory limit in Jenkins.
amahussein opened a new pull request #2560: URL: https://github.com/apache/hadoop/pull/2560 This change increases the memory limit from 20g to 22g. The last commit is needed to trigger the tests and increase the timeout; it should be skipped before merging.
[jira] [Commented] (HADOOP-17437) Update Hadoop Documentation with a new AWS Credential Provider used with EKS
[ https://issues.apache.org/jira/browse/HADOOP-17437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17251193#comment-17251193 ] Prateek Dubey commented on HADOOP-17437: I found the git repo - https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md I can raise a PR and send it across with updates > Update Hadoop Documentation with a new AWS Credential Provider used with EKS > > > Key: HADOOP-17437 > URL: https://issues.apache.org/jira/browse/HADOOP-17437 > Project: Hadoop Common > Issue Type: Task > Components: auth, fs/s3 >Reporter: Prateek Dubey >Priority: Minor >
[jira] [Work started] (HADOOP-17438) Increase docker memory limit in Jenkins
[ https://issues.apache.org/jira/browse/HADOOP-17438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-17438 started by Ahmed Hussein.
[jira] [Created] (HADOOP-17438) Increase docker memory limit in Jenkins
Ahmed Hussein created HADOOP-17438: -- Summary: Increase docker memory limit in Jenkins Key: HADOOP-17438 URL: https://issues.apache.org/jira/browse/HADOOP-17438 Project: Hadoop Common Issue Type: Bug Components: build, scripts, test, yetus Reporter: Ahmed Hussein Assignee: Ahmed Hussein
[GitHub] [hadoop] hadoop-yetus commented on pull request #2557: HDFS-15732:EC client will not retry get block token when block token expired in kerberized cluster
hadoop-yetus commented on pull request #2557: URL: https://github.com/apache/hadoop/pull/2557#issuecomment-747546220 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 28s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 13m 41s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 22m 6s | | trunk passed | | +1 :green_heart: | compile | 22m 36s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | compile | 19m 15s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | checkstyle | 2m 35s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 34s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 40s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 55s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 2m 27s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | spotbugs | 2m 38s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 4m 55s | | trunk passed | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 46s | | the patch passed | | +1 :green_heart: | compile | 21m 44s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javac | 21m 44s | | the patch passed | | +1 :green_heart: | compile | 19m 24s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | javac | 19m 24s | | the patch passed | | +1 :green_heart: | checkstyle | 2m 36s | | the patch passed | | +1 :green_heart: | mvnsite | 2m 33s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 34s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 57s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 2m 25s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | findbugs | 5m 11s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 10m 10s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 2m 33s | | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | | The patch does not generate ASF License warnings. 
| | | | 201m 35s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2557/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2557 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 6f65e93e2325 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 4c033bafa02 | | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2557/3/testReport/ | | Max. process+thread count | 3256 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client U: . | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2557/3/console | | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 | | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. This is an automated me
[GitHub] [hadoop] hadoop-yetus commented on pull request #2557: HDFS-15732:EC client will not retry get block token when block token expired in kerberized cluster
hadoop-yetus commented on pull request #2557: URL: https://github.com/apache/hadoop/pull/2557#issuecomment-747545248 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 22s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 13m 43s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 22m 1s | | trunk passed | | +1 :green_heart: | compile | 22m 33s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | compile | 19m 23s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | checkstyle | 2m 37s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 33s | | trunk passed | | +1 :green_heart: | shadedclient | 20m 41s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 50s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 2m 26s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | spotbugs | 2m 39s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 5m 1s | | trunk passed | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 45s | | the patch passed | | +1 :green_heart: | compile | 21m 54s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javac | 21m 54s | | the patch passed | | +1 :green_heart: | compile | 19m 17s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | javac | 19m 17s | | the patch passed | | +1 :green_heart: | checkstyle | 2m 31s | | the patch passed | | +1 :green_heart: | mvnsite | 2m 29s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 14m 51s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 54s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 2m 32s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | findbugs | 5m 17s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 10m 18s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 2m 32s | | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 52s | | The patch does not generate ASF License warnings. 
| | | | 200m 55s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2557/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2557 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 27092a1e2943 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 4c033bafa02 | | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2557/2/testReport/ | | Max. process+thread count | 1368 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client U: . | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2557/2/console | | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 | | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. This is an automated me
[GitHub] [hadoop] hadoop-yetus commented on pull request #2554: YARN-10536. distributedshell client handles interrupt
hadoop-yetus commented on pull request #2554: URL: https://github.com/apache/hadoop/pull/2554#issuecomment-747540421 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 37s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 51s | | trunk passed | | +1 :green_heart: | compile | 0m 30s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | compile | 0m 27s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | checkstyle | 0m 28s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 32s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 21s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 0m 23s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | spotbugs | 0m 47s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 0m 45s | | trunk passed | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 25s | | the patch passed | | +1 :green_heart: | compile | 0m 20s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javac | 0m 20s | | the patch passed | | +1 :green_heart: | compile | 0m 18s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | javac | 0m 18s | | the patch passed | | +1 :green_heart: | checkstyle | 0m 17s | | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell: The patch generated 0 new + 150 unchanged - 5 fixed = 150 total (was 155) | | +1 :green_heart: | mvnsite | 0m 20s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 44s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | findbugs | 0m 49s | | the patch passed | _ Other Tests _ | | -1 :x: | unit | 24m 51s | [/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2554/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt) | hadoop-yarn-applications-distributedshell in the patch passed. | | +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. 
| | | | 102m 14s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.yarn.applications.distributedshell.TestDistributedShell | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2554/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2554 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux e31b8b30d089 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 4c033bafa02 | | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2554/2/testReport/ | | Max. process+thread count | 764 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-di
[GitHub] [hadoop] hadoop-yetus commented on pull request #2557: HDFS-15732:EC client will not retry get block token when block token expired in kerberized cluster
hadoop-yetus commented on pull request #2557: URL: https://github.com/apache/hadoop/pull/2557#issuecomment-747538572 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 32s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 13m 53s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 42s | | trunk passed | | +1 :green_heart: | compile | 19m 58s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | compile | 17m 19s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | checkstyle | 2m 41s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 39s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 10s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 56s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 2m 33s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | spotbugs | 2m 51s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 5m 26s | | trunk passed | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 51s | | the patch passed | | +1 :green_heart: | compile | 20m 11s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javac | 20m 11s | | the patch passed | | +1 :green_heart: | compile | 17m 20s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | javac | 17m 20s | | the patch passed | | +1 :green_heart: | checkstyle | 2m 39s | | the patch passed | | +1 :green_heart: | mvnsite | 2m 36s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 13s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 56s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 2m 29s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | findbugs | 5m 8s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 9m 33s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 2m 35s | | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | | The patch does not generate ASF License warnings. 
| | | | 192m 7s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2557/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2557 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 67ae9387d358 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 4c033bafa02 | | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2557/1/testReport/ | | Max. process+thread count | 3249 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client U: . | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2557/1/console | | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 | | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. This is an automated mes
[jira] [Work logged] (HADOOP-17224) Install Intel ISA-L library in Dockerfile
[ https://issues.apache.org/jira/browse/HADOOP-17224?focusedWorklogId=525620&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-525620 ] ASF GitHub Bot logged work on HADOOP-17224: --- Author: ASF GitHub Bot Created on: 17/Dec/20 15:57 Start Date: 17/Dec/20 15:57 Worklog Time Spent: 10m Work Description: amahussein commented on pull request #2537: URL: https://github.com/apache/hadoop/pull/2537#issuecomment-747529035 @tasanuma, @iwasakims did anyone consider increasing the resources allocated to the docker container? For example, increasing the memory to see whether the OOM disappears? Based on the [Yetus documentation](https://yetus.apache.org/documentation/in-progress/precommit/docker/): - maybe we can try increasing `--dockermemlimit=20g` to 22 or 24g. Caution is needed to avoid bringing the entire server down, so increase it "responsibly" :) - Apache Yetus also sets `--oom-score-adj` to 500 in order to offer itself as the first process to be killed if memory is low. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 525620) Time Spent: 3h 20m (was: 3h 10m) > Install Intel ISA-L library in Dockerfile > - > > Key: HADOOP-17224 > URL: https://issues.apache.org/jira/browse/HADOOP-17224 > Project: Hadoop Common > Issue Type: Bug >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Blocker > Labels: pull-request-available > Fix For: 3.4.0 > > Time Spent: 3h 20m > Remaining Estimate: 0h > > Currently, the isa-l library is not present in the docker container, and jenkins > skips the native tests, TestNativeRSRawCoder and TestNativeXORRawCoder. 
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] amahussein commented on pull request #2537: HADOOP-17224. Install Intel ISA-L library in Dockerfile.
amahussein commented on pull request #2537: URL: https://github.com/apache/hadoop/pull/2537#issuecomment-747529035 @tasanuma, @iwasakims did anyone consider increasing the resources allocated to the docker container? For example, increasing the memory to see whether the OOM disappears? Based on the [Yetus documentation](https://yetus.apache.org/documentation/in-progress/precommit/docker/): - maybe we can try increasing `--dockermemlimit=20g` to 22 or 24g. Caution is needed to avoid bringing the entire server down, so increase it "responsibly" :) - Apache Yetus also sets `--oom-score-adj` to 500 in order to offer itself as the first process to be killed if memory is low. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
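The trade-off amahussein describes (raise the container limit, but not so far that concurrent builds exhaust the host) is simple budget arithmetic. A minimal sketch, where the host size, OS reservation, and concurrent-build count are made-up illustrative numbers, not the real Jenkins host specs:

```java
public class DockerMemBudget {
    // Per-container memory budget: what is left after reserving RAM for the
    // host OS, divided across builds that may run at the same time.
    static long safeLimitGb(long hostRamGb, long reservedForHostGb, int concurrentBuilds) {
        return (hostRamGb - reservedForHostGb) / concurrentBuilds;
    }

    public static void main(String[] args) {
        // e.g. a hypothetical 96 GB node keeping 8 GB for the OS, 4 builds at once:
        System.out.println(safeLimitGb(96, 8, 4)); // prints 22 -> 22g would fit
    }
}
```

Under those (assumed) numbers, bumping `--dockermemlimit` from 20g to 22g stays within budget, while 24g would oversubscribe the host if all builds hit their limit simultaneously.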
[jira] [Work logged] (HADOOP-17271) S3A statistics to support IOStatistics
[ https://issues.apache.org/jira/browse/HADOOP-17271?focusedWorklogId=525614&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-525614 ] ASF GitHub Bot logged work on HADOOP-17271: --- Author: ASF GitHub Bot Created on: 17/Dec/20 15:46 Start Date: 17/Dec/20 15:46 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2553: URL: https://github.com/apache/hadoop/pull/2553#issuecomment-747521734 Mock test failure. As usual, it is as likely to be a mock-related change as a functional code change. But as the committer has been instrumented with stats collection, I'm going to suspect the production code first:

```
java.lang.AssertionError: [Committed objects compared to deleted paths
org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase$ClientResults@49d4e478{
  requests=12, uploads=12, parts=12, tagsByUpload=12, commits=5, aborts=7, deletes=0}]
Expecting:
  <["s3a://bucket-name/output/path/r_1_1_a8d9a7c6-060d-4e05-8225-96b2c412bb6f-ed6e0eb6-81de-4ffd-abfb-337f78e5c312",
    "s3a://bucket-name/output/path/r_1_1_b54314cb-8266-4beb-8ecc-61e0663b3022-ed6e0eb6-81de-4ffd-abfb-337f78e5c312",
    "s3a://bucket-name/output/path/r_0_0_65638697-4b34-4320-ad61-817a1ef996ca-ed6e0eb6-81de-4ffd-abfb-337f78e5c312",
    "s3a://bucket-name/output/path/r_0_0_d34de224-e5dd-4cd7-93ba-45072d14b534-ed6e0eb6-81de-4ffd-abfb-337f78e5c312",
    "s3a://bucket-name/output/path/r_0_0_1fcd9943-5a6e-448b-a887-452393227473-ed6e0eb6-81de-4ffd-abfb-337f78e5c312"]>
to contain exactly in any order:
  <[]>
but the following elements were unexpected:
  <["s3a://bucket-name/output/path/r_1_1_a8d9a7c6-060d-4e05-8225-96b2c412bb6f-ed6e0eb6-81de-4ffd-abfb-337f78e5c312",
    "s3a://bucket-name/output/path/r_1_1_b54314cb-8266-4beb-8ecc-61e0663b3022-ed6e0eb6-81de-4ffd-abfb-337f78e5c312",
    "s3a://bucket-name/output/path/r_0_0_65638697-4b34-4320-ad61-817a1ef996ca-ed6e0eb6-81de-4ffd-abfb-337f78e5c312",
    "s3a://bucket-name/output/path/r_0_0_d34de224-e5dd-4cd7-93ba-45072d14b534-ed6e0eb6-81de-4ffd-abfb-337f78e5c312",
    "s3a://bucket-name/output/path/r_0_0_1fcd9943-5a6e-448b-a887-452393227473-ed6e0eb6-81de-4ffd-abfb-337f78e5c312"]>
```
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 525614) Time Spent: 8h 50m (was: 8h 40m) > S3A statistics to support IOStatistics > -- > > Key: HADOOP-17271 > URL: https://issues.apache.org/jira/browse/HADOOP-17271 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Time Spent: 8h 50m > Remaining Estimate: 0h > > S3A to rework statistics with > * API + Implementation split of the interfaces used by subcomponents when > reporting stats > * S3A Instrumentation to implement all the interfaces > * streams, etc to all implement IOStatisticsSources and serve to callers > * Add some tracking of durations of remote requests -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on pull request #2553: HADOOP-17271. S3A to support IOStatistics
steveloughran commented on pull request #2553: URL: https://github.com/apache/hadoop/pull/2553#issuecomment-747521734 Mock test failure. As usual, it is as likely to be a mock-related change as a functional code change. But as the committer has been instrumented with stats collection, I'm going to suspect the production code first:

```
java.lang.AssertionError: [Committed objects compared to deleted paths
org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase$ClientResults@49d4e478{
  requests=12, uploads=12, parts=12, tagsByUpload=12, commits=5, aborts=7, deletes=0}]
Expecting:
  <["s3a://bucket-name/output/path/r_1_1_a8d9a7c6-060d-4e05-8225-96b2c412bb6f-ed6e0eb6-81de-4ffd-abfb-337f78e5c312",
    "s3a://bucket-name/output/path/r_1_1_b54314cb-8266-4beb-8ecc-61e0663b3022-ed6e0eb6-81de-4ffd-abfb-337f78e5c312",
    "s3a://bucket-name/output/path/r_0_0_65638697-4b34-4320-ad61-817a1ef996ca-ed6e0eb6-81de-4ffd-abfb-337f78e5c312",
    "s3a://bucket-name/output/path/r_0_0_d34de224-e5dd-4cd7-93ba-45072d14b534-ed6e0eb6-81de-4ffd-abfb-337f78e5c312",
    "s3a://bucket-name/output/path/r_0_0_1fcd9943-5a6e-448b-a887-452393227473-ed6e0eb6-81de-4ffd-abfb-337f78e5c312"]>
to contain exactly in any order:
  <[]>
but the following elements were unexpected:
  <["s3a://bucket-name/output/path/r_1_1_a8d9a7c6-060d-4e05-8225-96b2c412bb6f-ed6e0eb6-81de-4ffd-abfb-337f78e5c312",
    "s3a://bucket-name/output/path/r_1_1_b54314cb-8266-4beb-8ecc-61e0663b3022-ed6e0eb6-81de-4ffd-abfb-337f78e5c312",
    "s3a://bucket-name/output/path/r_0_0_65638697-4b34-4320-ad61-817a1ef996ca-ed6e0eb6-81de-4ffd-abfb-337f78e5c312",
    "s3a://bucket-name/output/path/r_0_0_d34de224-e5dd-4cd7-93ba-45072d14b534-ed6e0eb6-81de-4ffd-abfb-337f78e5c312",
    "s3a://bucket-name/output/path/r_0_0_1fcd9943-5a6e-448b-a887-452393227473-ed6e0eb6-81de-4ffd-abfb-337f78e5c312"]>
```
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
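The failing assertion boils down to a set-difference check: every committed object should also appear among the deleted paths, and with `deletes=0` nothing does. A minimal, self-contained re-creation of that check using plain Java collections (not the real AssertJ or `StagingTestBase` classes; the paths are shortened, illustrative stand-ins):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class DeletedPathsCheck {
    // Returns the committed paths that never showed up among the deleted
    // paths -- the "unexpected" elements the assertion failure reports.
    static List<String> unexpected(Collection<String> committed, Collection<String> deleted) {
        List<String> extra = new ArrayList<>(committed);
        extra.removeAll(deleted);
        return extra;
    }

    public static void main(String[] args) {
        List<String> committed = List.of(
                "s3a://bucket-name/output/path/r_0_0",
                "s3a://bucket-name/output/path/r_1_1"); // shortened example paths
        List<String> deleted = List.of();               // deletes=0 in the report
        // With no deletes recorded, every committed path is "unexpected"
        // and the containsExactlyInAnyOrder-style comparison fails.
        System.out.println(unexpected(committed, deleted).size()); // prints 2
    }
}
```

This is why the report's `deletes=0` counter is the interesting number: either the production code stopped deleting, or the mock stopped recording deletes.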
[jira] [Commented] (HADOOP-15691) Add PathCapabilities to FS and FC to complement StreamCapabilities
[ https://issues.apache.org/jira/browse/HADOOP-15691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17251164#comment-17251164 ] Steve Loughran commented on HADOOP-15691: - Sorry, missed that I'd pulled this back as part of some of the dir marker work. > Add PathCapabilities to FS and FC to complement StreamCapabilities > -- > > Key: HADOOP-15691 > URL: https://issues.apache.org/jira/browse/HADOOP-15691 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Fix For: 3.3.0, 3.2.2, 3.2.3 > > Attachments: HADOOP-15691-001.patch, HADOOP-15691-002.patch, > HADOOP-15691-003.patch, HADOOP-15691-004.patch > > > This complements the {{StreamCapabilities}} interface by allowing > applications > to probe for a specific path on a specific instance of a {{FileSystem}} > or {{FileContext}} offering a specific feature. > This is intended to allow applications to determine > * Whether a method is implemented before calling it and dealing with > any subsequent UnsupportedOperationException. > * Whether a specific feature is believed to be available in the remote store. > As well as a common set of capabilities defined in CommonPathCapabilities, > file systems are free to add their own capabilities, prefixed with > fs. + schema + . > > The plan is to identify and document more capabilities -and for file systems > which add new features, for a declaration of the availability of the feature > to > always be available. > The interface may be offered by other classes too; there is no restriction > here. > Note > * The remote store is not expected to be checked for the feature; > It is more a check of client API and the client's configuration/knowledge > of the state of the remote system. > * Permissions are not checked. 
> This is needed for > * HADOOP-14707: declare that a dest FS supports permissions > * object stores to declare that they offer PUT-in-place alongside > (slow-rename) > * Anything else where the implementation semantics of an FS is so different > that caller apps would benefit from probing for the underlying semantics > I know, we want all filesystems to work *exactly* the same. But it doesn't > hold, especially for object stores —and to efficiently use them, callers need > to be able to ask for specific features. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
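The probe-before-call pattern the issue describes can be sketched with a tiny, self-contained stand-in. The interface and store below are simplified re-creations, not Hadoop's real `FileSystem`/`FileContext`/`CommonPathCapabilities` classes, and the capability string is illustrative:

```java
import java.util.Set;

// Simplified stand-in for the PathCapabilities probe: a client-side
// declaration of support, not a live check of the remote store, and
// (as the issue notes) no permission check either.
interface PathCapabilities {
    boolean hasPathCapability(String path, String capability);
}

class DemoStore implements PathCapabilities {
    private static final Set<String> CAPS = Set.of("fs.capability.paths.append");

    public boolean hasPathCapability(String path, String capability) {
        return CAPS.contains(capability);
    }
}

public class CapabilityProbe {
    public static void main(String[] args) {
        PathCapabilities fs = new DemoStore();
        // Probe first instead of calling the method and handling a
        // subsequent UnsupportedOperationException.
        if (fs.hasPathCapability("/logs/app.txt", "fs.capability.paths.append")) {
            System.out.println("append supported");
        }
    }
}
```

Per the issue, stores are also free to add their own capability names prefixed with `fs.` + schema + `.`, so the string probed for is just data, not part of the API surface.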
[jira] [Commented] (HADOOP-17437) Update Hadoop Documentation with a new AWS Credential Provider used with EKS
[ https://issues.apache.org/jira/browse/HADOOP-17437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17251159#comment-17251159 ] Steve Loughran commented on HADOOP-17437: - Happy to take a github PR against hadoop with the changes to the documentation > Update Hadoop Documentation with a new AWS Credential Provider used with EKS > > > Key: HADOOP-17437 > URL: https://issues.apache.org/jira/browse/HADOOP-17437 > Project: Hadoop Common > Issue Type: Task > Components: auth, fs/s3 >Reporter: Prateek Dubey >Priority: Minor > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17437) Update Hadoop Documentation with a new AWS Credential Provider used with EKS
[ https://issues.apache.org/jira/browse/HADOOP-17437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-17437: Component/s: fs/s3 > Update Hadoop Documentation with a new AWS Credential Provider used with EKS > > > Key: HADOOP-17437 > URL: https://issues.apache.org/jira/browse/HADOOP-17437 > Project: Hadoop Common > Issue Type: Task > Components: auth, fs/s3 >Reporter: Prateek Dubey >Priority: Minor > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2556: HDFS-15731. Reduce threadCount for unit tests to reduce the memory usage
hadoop-yetus commented on pull request #2556: URL: https://github.com/apache/hadoop/pull/2556#issuecomment-747501766 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 31s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 13m 54s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 53s | | trunk passed | | +1 :green_heart: | compile | 20m 0s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | compile | 17m 19s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | checkstyle | 2m 39s | | trunk passed | | +1 :green_heart: | mvnsite | 3m 2s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 32s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 2m 34s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 3m 21s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | spotbugs | 1m 27s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +0 :ok: | findbugs | 0m 38s | | branch/hadoop-project no findbugs output file (findbugsXml.xml) | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 56s | | the patch passed | | +1 :green_heart: | compile | 19m 25s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javac | 19m 25s | | the patch passed | | +1 :green_heart: | compile | 17m 15s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | javac | 17m 15s | | the patch passed | | +1 :green_heart: | checkstyle | 2m 36s | | the patch passed | | +1 :green_heart: | mvnsite | 3m 1s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 15m 27s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 2m 31s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 3m 18s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | findbugs | 0m 36s | | hadoop-project has no data from findbugs | _ Other Tests _ | | +1 :green_heart: | unit | 0m 34s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 186m 16s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 16m 52s | | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 1m 8s | | The patch does not generate ASF License warnings. 
| | | | 389m 11s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2556/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2556 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux fd1cd8fc4e93 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 4c033bafa02 | | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2556/1/testReport/ | | Max. process+thread count | 3352 (vs. ulimit of 5500) | | modules | C: hadoop-project hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: . | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2556/1/console | | versions | git=2.17.1 maven=3.6.0
[GitHub] [hadoop] sodonnel edited a comment on pull request #2533: HDFS-15719. [Hadoop 3] Both NameNodes can crash simultaneously due to the short JN socket timeout
sodonnel edited a comment on pull request #2533: URL: https://github.com/apache/hadoop/pull/2533#issuecomment-747424463 This change LGTM +1 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2558: HDFS-15732 EC client will not retry get block token when block token …
hadoop-yetus commented on pull request #2558: URL: https://github.com/apache/hadoop/pull/2558#issuecomment-747484300 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 33s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 36m 13s | | trunk passed | | +1 :green_heart: | compile | 1m 7s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | compile | 0m 57s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | checkstyle | 0m 29s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 0s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 56s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 45s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 0m 38s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | spotbugs | 2m 47s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 2m 45s | | trunk passed | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 55s | | the patch passed | | +1 :green_heart: | compile | 0m 55s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javac | 0m 55s | | the patch passed | | +1 :green_heart: | compile | 0m 47s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | javac | 0m 47s | | the patch passed | | +1 :green_heart: | checkstyle | 0m 18s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 49s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 16m 44s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 38s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 0m 32s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | findbugs | 2m 47s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 2m 23s | | hadoop-hdfs-client in the patch passed. | | +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. 
| | | | 92m 41s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2558/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2558 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux d4bff7cd 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 4c033bafa02 | | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2558/1/testReport/ | | Max. process+thread count | 570 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2558/1/console | | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 | | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific commen
[GitHub] [hadoop] amahussein commented on pull request #2554: YARN-10536. distributedshell client handles interrupt
amahussein commented on pull request #2554: URL: https://github.com/apache/hadoop/pull/2554#issuecomment-747466075 The changes should be fine to merge. The two failing tests have been failing for more than a year; there are open JIRAs to address them. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] Cosss7 opened a new pull request #2559: HDFS-15734. [READ] DirectoryScanner#scan need not check StorageType.PROVIDED
Cosss7 opened a new pull request #2559: URL: https://github.com/apache/hadoop/pull/2559 ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and set the pull request title to start with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute https://issues.apache.org/jira/browse/HDFS-15734
[GitHub] [hadoop] dgzdot opened a new pull request #2558: HDFS-15732 EC client will not retry get block token when block token …
dgzdot opened a new pull request #2558: URL: https://github.com/apache/hadoop/pull/2558 Since the client side cannot identify the InvalidToken error because of the SASL negotiation, always refetch the token when an error is encountered for the first time.
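The refetch-once behavior described above can be sketched as follows. This is a hypothetical illustration only: the class, interface, and method names are invented for the sketch and are not Hadoop's actual client API.

```java
// Hypothetical sketch of the "refetch token on first failure" pattern from
// HDFS-15732; names are illustrative, not Hadoop's real client classes.
public class TokenRetryDemo {

    interface BlockReader {
        String read(String token) throws Exception;
    }

    // Because the client cannot distinguish an InvalidToken error from other
    // SASL negotiation failures, it refetches the block token and retries
    // exactly once after the first failure.
    static String readWithRetry(BlockReader reader, String cachedToken,
                                java.util.function.Supplier<String> refetchToken)
            throws Exception {
        try {
            return reader.read(cachedToken);
        } catch (Exception firstFailure) {
            return reader.read(refetchToken.get());
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate a datanode that rejects the expired token once.
        BlockReader reader = token -> {
            if ("expired".equals(token)) {
                throw new Exception("InvalidToken");
            }
            return "block-data";
        };
        System.out.println(readWithRetry(reader, "expired", () -> "fresh"));
        // prints "block-data"
    }
}
```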
[GitHub] [hadoop] dgzdot commented on pull request #2557: HDFS-15732:EC client will not retry get block token when block token expired in kerberized cluster
dgzdot commented on pull request #2557: URL: https://github.com/apache/hadoop/pull/2557#issuecomment-747427566 I made some mistakes, so I'm closing the PR.
[GitHub] [hadoop] dgzdot closed pull request #2557: HDFS-15732:EC client will not retry get block token when block token expired in kerberized cluster
dgzdot closed pull request #2557: URL: https://github.com/apache/hadoop/pull/2557
[GitHub] [hadoop] sodonnel commented on pull request #2533: HDFS-15719. [Hadoop 3] Both NameNodes can crash simultaneously due to the short JN socket timeout
sodonnel commented on pull request #2533: URL: https://github.com/apache/hadoop/pull/2533#issuecomment-747424463 This change LGTM - +1
[GitHub] [hadoop] dgzdot opened a new pull request #2557: HDFS-15732:EC client will not retry get block token when block token expired in kerberized cluster
dgzdot opened a new pull request #2557: URL: https://github.com/apache/hadoop/pull/2557 Since the EC client cannot identify the InvalidToken error because of the SASL negotiation, always refetch the token when an error is encountered for the first time.
[GitHub] [hadoop] hadoop-yetus commented on pull request #309: MAPREDUCE-7017:Too many times of meaningless invocation in TaskAttemptImpl#resolveHosts
hadoop-yetus commented on pull request #309: URL: https://github.com/apache/hadoop/pull/309#issuecomment-747325248 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 36s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 40m 6s | | trunk passed | | +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | checkstyle | 0m 34s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 40s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 14s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 0m 27s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | spotbugs | 1m 14s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 1m 11s | | trunk passed | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 41s | | the patch passed | | +1 :green_heart: | compile | 0m 35s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javac | 0m 35s | | the patch passed | | +1 :green_heart: | compile | 0m 25s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | javac | 0m 25s | | the patch passed | | -0 :warning: | checkstyle | 0m 25s | [/diff-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-309/6/artifact/out/diff-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app: The patch generated 2 new + 264 unchanged - 0 fixed = 266 total (was 264) | | +1 :green_heart: | mvnsite | 0m 30s | | the patch passed | | -1 :x: | whitespace | 0m 0s | [/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-309/6/artifact/out/whitespace-eol.txt) | The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 :green_heart: | shadedclient | 14m 49s | | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 0m 27s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 0m 24s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | findbugs | 1m 2s | | the patch passed | _ Other Tests _ | | -1 :x: | unit | 8m 21s | [/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-309/6/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt) | hadoop-mapreduce-client-app in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | | The patch does not generate ASF License warnings. | | | | 97m 8s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.mapreduce.v2.app.job.impl.TestTaskAttempt | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-309/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/309 | | JIRA Issue | MAPREDUCE-7017 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ee78a2583feb 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 4c033bafa02 | | Default J
[GitHub] [hadoop] hadoop-yetus commented on pull request #309: MAPREDUCE-7017:Too many times of meaningless invocation in TaskAttemptImpl#resolveHosts
hadoop-yetus commented on pull request #309: URL: https://github.com/apache/hadoop/pull/309#issuecomment-747323838 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 32s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 35m 34s | | trunk passed | | +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | compile | 0m 32s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | checkstyle | 0m 31s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 36s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 17s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 31s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 0m 27s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | spotbugs | 1m 1s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 0m 58s | | trunk passed | _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 18s | [/patch-mvninstall-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-309/7/artifact/out/patch-mvninstall-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt) | hadoop-mapreduce-client-app in the patch failed. | | -1 :x: | compile | 0m 21s | [/patch-compile-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-309/7/artifact/out/patch-compile-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt) | hadoop-mapreduce-client-app in the patch failed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. | | -1 :x: | javac | 0m 21s | [/patch-compile-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-309/7/artifact/out/patch-compile-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt) | hadoop-mapreduce-client-app in the patch failed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04. | | -1 :x: | compile | 0m 19s | [/patch-compile-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-309/7/artifact/out/patch-compile-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt) | hadoop-mapreduce-client-app in the patch failed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01. 
| | -1 :x: | javac | 0m 19s | [/patch-compile-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-309/7/artifact/out/patch-compile-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt) | hadoop-mapreduce-client-app in the patch failed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01. | | -0 :warning: | checkstyle | 0m 25s | [/diff-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-309/7/artifact/out/diff-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app: The patch generated 1 new + 264 unchanged - 0 fixed = 265 total (was 264) | | -1 :x: | mvnsite | 0m 20s | [/patch-mvnsite-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt](https://ci-hadoop.apache.org/job/had
[GitHub] [hadoop] aajisaka opened a new pull request #2556: HDFS-15731. Reduce threadCount for unit tests to reduce the memory usage
aajisaka opened a new pull request #2556: URL: https://github.com/apache/hadoop/pull/2556 JIRA: https://issues.apache.org/jira/browse/HDFS-15731 Reduce the thread count from 4 to 2.
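A change like the one this PR describes would typically land in a maven-surefire-plugin configuration. The fragment below is only an illustrative sketch; the exact element name and location in the Hadoop poms may differ.

```xml
<!-- Illustrative maven-surefire-plugin fragment, assuming the parallel test
     JVM count is what HDFS-15731 reduces; not the actual Hadoop pom. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- run 2 forked test JVMs instead of 4 to cut peak memory usage -->
    <forkCount>2</forkCount>
  </configuration>
</plugin>
```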
[GitHub] [hadoop] aajisaka commented on pull request #2555: HDFS-13579. Fix potential OutOfMemory error in DatanodeHttpServer.
aajisaka commented on pull request #2555: URL: https://github.com/apache/hadoop/pull/2555#issuecomment-747297288 Rethinking this, the OOM is not fixed by this change. Closing.
[GitHub] [hadoop] aajisaka closed pull request #2555: HDFS-13579. Fix potential OutOfMemory error in DatanodeHttpServer.
aajisaka closed pull request #2555: URL: https://github.com/apache/hadoop/pull/2555
[jira] [Comment Edited] (HADOOP-17437) Update Hadoop Documentation with a new AWS Credential Provider used with EKS
[ https://issues.apache.org/jira/browse/HADOOP-17437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17250897#comment-17250897 ] Prateek Dubey edited comment on HADOOP-17437 at 12/17/20, 8:38 AM: --- I'm opening this issue to request to update Hadoop Documentation for S3 Authentication with a new Credential Provider that is used by EKS (Amazon Elastic Kubernetes Service) to authenticate to AWS Services. Document to update - https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#Authenticating_with_S3 I was trying to setup Hive Metastore service on EKS and while using S3 I got access denied issues wherein my K8s pod already assumed the correct Service Account which has access to S3. After some troubleshooting, I figured out we need to add following property in core-site.xml of Hadoop for Hive to Authenticate to S3 and create schemas/ tables while running on EKS - {quote} fs.s3a.aws.credentials.provider com.amazonaws.auth.WebIdentityTokenCredentialsProvider {quote} This property is currently not mentioned in the documentation yet. I tested this using - Hadoop 3.2.0 aws-java-sdk-bundle-1.11.874.jar hadoop-aws-3.2.0.jar was (Author: dprateek): I'm opening this issue to request to update Hadoop Documentation for S3 Authentication with a new Credential Provider that is used by EKS (Amazon Elastic Kubernetes Service) to authenticate to AWS Services. Document to update - https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#Authenticating_with_S3 I was trying to setup Hive Metastore service on EKS and while using S3 I got access denied issues wherein my K8s pod already assumed the correct Service Account which has access to S3. 
After some troubleshooting, I figured out we need to add following property in core-site.xml for Hive to Authenticate to S3 while running on EKS - {quote} fs.s3a.aws.credentials.provider com.amazonaws.auth.WebIdentityTokenCredentialsProvider {quote} This property is currently not mentioned in the documentation yet. I tested this using - Hadoop 3.2.0 aws-java-sdk-bundle-1.11.874.jar hadoop-aws-3.2.0.jar > Update Hadoop Documentation with a new AWS Credential Provider used with EKS > > > Key: HADOOP-17437 > URL: https://issues.apache.org/jira/browse/HADOOP-17437 > Project: Hadoop Common > Issue Type: Task > Components: auth >Reporter: Prateek Dubey >Priority: Minor > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
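The property named in the comment above was flattened in transit; as a Hadoop configuration entry it would take the usual core-site.xml shape (the name and value come from the comment itself):

```xml
<!-- core-site.xml: use the EKS web-identity credential provider for S3A -->
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>com.amazonaws.auth.WebIdentityTokenCredentialsProvider</value>
</property>
```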
[jira] [Comment Edited] (HADOOP-17437) Update Hadoop Documentation with a new AWS Credential Provider used with EKS
[ https://issues.apache.org/jira/browse/HADOOP-17437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17250897#comment-17250897 ] Prateek Dubey edited comment on HADOOP-17437 at 12/17/20, 8:35 AM: --- I'm opening this issue to request to update Hadoop Documentation for S3 Authentication with a new Credential Provider that is used by EKS (Amazon Elastic Kubernetes Service) to authenticate to AWS Services. Document to update - https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#Authenticating_with_S3 I was trying to setup Hive Metastore service on EKS and while using S3 I got access denied issues wherein my K8s pod already assumed the correct Service Account which has access to S3. After some troubleshooting, I figured out we need to add following property in core-site.xml for Hive to Authenticate to S3 while running on EKS - fs.s3a.aws.credentials.provider com.amazonaws.auth.WebIdentityTokenCredentialsProvider This property is currently not mentioned in the documentation yet. I tested this using - Hadoop 3.2.0 aws-java-sdk-bundle-1.11.874.jar hadoop-aws-3.2.0.jar was (Author: dprateek): I'm opening this issues to request to update Hadoop Documentation for S3 Authentication with a new Credential Provider that is used by EKS (Amazon Elastic Kubernetes Service) to authenticate to AWS Services. Document to update - https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#Authenticating_with_S3 I tried setting up Hive Metastore service on EKS and while using S3 I got access denied issues wherein my K8s pod already assumed the correct Service Account which has access to S3. After some troubleshooting, I figured out we need to add following property in core-site.xml for Hive to Authenticate to S3 while running on EKS - fs.s3a.aws.credentials.provider com.amazonaws.auth.WebIdentityTokenCredentialsProvider This property is currently not mentioned in the documentation yet. 
I tested this using - Hadoop 3.2.0 aws-java-sdk-bundle-1.11.874.jar hadoop-aws-3.2.0.jar
[jira] [Comment Edited] (HADOOP-17437) Update Hadoop Documentation with a new AWS Credential Provider used with EKS
[ https://issues.apache.org/jira/browse/HADOOP-17437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17250897#comment-17250897 ] Prateek Dubey edited comment on HADOOP-17437 at 12/17/20, 8:35 AM: --- I'm opening this issue to request to update Hadoop Documentation for S3 Authentication with a new Credential Provider that is used by EKS (Amazon Elastic Kubernetes Service) to authenticate to AWS Services. Document to update - https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#Authenticating_with_S3 I was trying to setup Hive Metastore service on EKS and while using S3 I got access denied issues wherein my K8s pod already assumed the correct Service Account which has access to S3. After some troubleshooting, I figured out we need to add following property in core-site.xml for Hive to Authenticate to S3 while running on EKS - {quote} fs.s3a.aws.credentials.provider com.amazonaws.auth.WebIdentityTokenCredentialsProvider {quote} This property is currently not mentioned in the documentation yet. I tested this using - Hadoop 3.2.0 aws-java-sdk-bundle-1.11.874.jar hadoop-aws-3.2.0.jar was (Author: dprateek): I'm opening this issue to request to update Hadoop Documentation for S3 Authentication with a new Credential Provider that is used by EKS (Amazon Elastic Kubernetes Service) to authenticate to AWS Services. Document to update - https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#Authenticating_with_S3 I was trying to setup Hive Metastore service on EKS and while using S3 I got access denied issues wherein my K8s pod already assumed the correct Service Account which has access to S3. After some troubleshooting, I figured out we need to add following property in core-site.xml for Hive to Authenticate to S3 while running on EKS - fs.s3a.aws.credentials.provider com.amazonaws.auth.WebIdentityTokenCredentialsProvider This property is currently not mentioned in the documentation yet. 
I tested this using - Hadoop 3.2.0 aws-java-sdk-bundle-1.11.874.jar hadoop-aws-3.2.0.jar
[jira] [Commented] (HADOOP-17437) Update Hadoop Documentation with a new AWS Credential Provider used with EKS
[ https://issues.apache.org/jira/browse/HADOOP-17437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17250897#comment-17250897 ] Prateek Dubey commented on HADOOP-17437: I'm opening this issue to request an update to the Hadoop documentation for S3 authentication with a new credential provider that is used by EKS (Amazon Elastic Kubernetes Service) to authenticate to AWS services. Document to update - https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#Authenticating_with_S3 I tried setting up the Hive Metastore service on EKS, and while using S3 I got access-denied errors even though my K8s pod had already assumed the correct Service Account with access to S3. After some troubleshooting, I figured out that the following property needs to be added to core-site.xml for Hive to authenticate to S3 while running on EKS - fs.s3a.aws.credentials.provider com.amazonaws.auth.WebIdentityTokenCredentialsProvider This property is not yet mentioned in the documentation. I tested this using - Hadoop 3.2.0 aws-java-sdk-bundle-1.11.874.jar hadoop-aws-3.2.0.jar
[jira] [Work logged] (HADOOP-17317) [JDK 11] Upgrade dnsjava to remove illegal access warnings
[ https://issues.apache.org/jira/browse/HADOOP-17317?focusedWorklogId=525438&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-525438 ] ASF GitHub Bot logged work on HADOOP-17317: --- Author: ASF GitHub Bot Created on: 17/Dec/20 08:23 Start Date: 17/Dec/20 08:23 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2442: URL: https://github.com/apache/hadoop/pull/2442#issuecomment-747288551 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 30m 20s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 4s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 3s | | trunk passed | | +1 :green_heart: | compile | 20m 0s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | compile | 17m 13s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | checkstyle | 2m 41s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 47s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 28s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 2m 3s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 2m 37s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | spotbugs | 0m 52s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +0 :ok: | findbugs | 0m 34s | | branch/hadoop-project no findbugs output file (findbugsXml.xml) | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 31s | | the patch passed | | +1 :green_heart: | compile | 22m 4s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javac | 22m 4s | | the patch passed | | +1 :green_heart: | compile | 17m 17s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +1 :green_heart: | javac | 17m 17s | | the patch passed | | +1 :green_heart: | checkstyle | 2m 35s | | the patch passed | | +1 :green_heart: | mvnsite | 2m 47s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 15m 21s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 2m 5s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 | | +1 :green_heart: | javadoc | 2m 26s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | +0 :ok: | findbugs | 0m 29s | | hadoop-project has no data from findbugs | _ Other Tests _ | | +1 :green_heart: | unit | 0m 25s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 10m 41s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 1m 38s | | hadoop-registry in the patch passed. | | +1 :green_heart: | asflicense | 1m 0s | | The patch does not generate ASF License warnings. 
| | | | 222m 9s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2442/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2442 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 733e91bdc64a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 4c033bafa02 | | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u27
[jira] [Created] (HADOOP-17437) Update Hadoop Documentation with a new AWS Credential Provider used with EKS
Prateek Dubey created HADOOP-17437:
--
Summary: Update Hadoop Documentation with a new AWS Credential Provider used with EKS
Key: HADOOP-17437
URL: https://issues.apache.org/jira/browse/HADOOP-17437
Project: Hadoop Common
Issue Type: Task
Components: auth
Reporter: Prateek Dubey
--
This message was sent by Atlassian Jira (v8.3.4#803005)
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
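The issue above asks for documentation of an AWS credential provider for EKS but does not name the provider class. As a hedged illustration only: S3A credential providers are wired in through the `fs.s3a.aws.credentials.provider` property in `core-site.xml`, and the class shown below (`WebIdentityTokenCredentialsProvider`, the AWS SDK provider typically used with EKS IAM roles for service accounts) is an assumption, not something the issue confirms.

```xml
<!-- core-site.xml sketch: configure which credential provider S3A uses.
     The provider class below is an assumed example for the EKS use case;
     HADOOP-17437 does not specify which provider it documents. -->
<property>
  <name>fs.s3a.aws.credentials.provider</name>
  <value>com.amazonaws.auth.WebIdentityTokenCredentialsProvider</value>
</property>
```

S3A also accepts a comma-separated list of provider classes, tried in order, so a provider like this can be listed ahead of the defaults as a fallback chain.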
[GitHub] [hadoop] hadoop-yetus commented on pull request #2442: HADOOP-17317. [JDK 11] Upgrade dnsjava to remove illegal access warnings
hadoop-yetus commented on pull request #2442:
URL: https://github.com/apache/hadoop/pull/2442#issuecomment-747288551

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 30m 20s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 4s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 21m 3s | | trunk passed |
| +1 :green_heart: | compile | 20m 0s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | compile | 17m 13s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | checkstyle | 2m 41s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 47s | | trunk passed |
| +1 :green_heart: | shadedclient | 21m 28s | | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 2m 3s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 2m 37s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +0 :ok: | spotbugs | 0m 52s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +0 :ok: | findbugs | 0m 34s | | branch/hadoop-project no findbugs output file (findbugsXml.xml) |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 31s | | the patch passed |
| +1 :green_heart: | compile | 22m 4s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javac | 22m 4s | | the patch passed |
| +1 :green_heart: | compile | 17m 17s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | javac | 17m 17s | | the patch passed |
| +1 :green_heart: | checkstyle | 2m 35s | | the patch passed |
| +1 :green_heart: | mvnsite | 2m 47s | | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | shadedclient | 15m 21s | | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 2m 5s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 2m 26s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +0 :ok: | findbugs | 0m 29s | | hadoop-project has no data from findbugs |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 25s | | hadoop-project in the patch passed. |
| +1 :green_heart: | unit | 10m 41s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 1m 38s | | hadoop-registry in the patch passed. |
| +1 :green_heart: | asflicense | 1m 0s | | The patch does not generate ASF License warnings. |
| | | | 222m 9s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2442/6/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2442 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux 733e91bdc64a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 4c033bafa02 |
| Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2442/6/testReport/ |
| Max. process+thread count | 1618 (vs. ulimit of 5500) |
| modules | C: hadoop-project hadoop-common-project/hadoop-common hadoop-common-project/hadoop-registry U: . |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2442/6/console |
| versions | git=2.17.1 ma |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2549: Hadoop 17428. ABFS: Implementation for getContentSummary
hadoop-yetus commented on pull request #2549:
URL: https://github.com/apache/hadoop/pull/2549#issuecomment-747284265

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 30s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 32m 46s | | trunk passed |
| +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | compile | 0m 34s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | checkstyle | 0m 27s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 39s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 0s | | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 32s | | trunk passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +0 :ok: | spotbugs | 0m 59s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 0m 57s | | trunk passed |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 30s | | the patch passed |
| +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javac | 0m 30s | | the patch passed |
| +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | javac | 0m 26s | | the patch passed |
| -0 :warning: | checkstyle | 0m 17s | [/diff-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2549/4/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) |
| +1 :green_heart: | mvnsite | 0m 28s | | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 14m 39s | | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 26s | | the patch passed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 |
| +1 :green_heart: | javadoc | 0m 25s | | the patch passed with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| +1 :green_heart: | findbugs | 1m 1s | | the patch passed |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 31s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. |
| | | | 76m 29s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2549/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2549 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 251867f51016 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 4c033bafa02 |
| Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2549/4/testReport/ |
| Max. process+thread count | 537 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2549/4/console |
| versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

This is an automated message from the Apache Git Service.