[GitHub] [hadoop] hadoop-yetus commented on pull request #3145: [Do not commit][WIP] CI for Centos 8
hadoop-yetus commented on pull request #3145:
URL: https://github.com/apache/hadoop/pull/3145#issuecomment-868956715

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 26m 8s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | shellcheck | 0m 1s | | Shellcheck was not available. |
| +0 :ok: | shelldocs | 0m 1s | | Shelldocs was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 34m 9s | | trunk passed |
| -1 :x: | compile | 1m 26s | [/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3145/2/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt) | hadoop-hdfs-native-client in trunk failed. |
| +1 :green_heart: | mvnsite | 0m 31s | | trunk passed |
| +1 :green_heart: | shadedclient | 52m 6s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 18s | | the patch passed |
| -1 :x: | compile | 1m 16s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3145/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt) | hadoop-hdfs-native-client in the patch failed. |
| -1 :x: | cc | 1m 16s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3145/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt) | hadoop-hdfs-native-client in the patch failed. |
| -1 :x: | golang | 1m 16s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3145/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt) | hadoop-hdfs-native-client in the patch failed. |
| -1 :x: | javac | 1m 16s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3145/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client.txt) | hadoop-hdfs-native-client in the patch failed. |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 0m 18s | | the patch passed |
| +1 :green_heart: | shadedclient | 15m 27s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 0m 49s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3145/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt) | hadoop-hdfs-native-client in the patch failed. |
| +1 :green_heart: | asflicense | 0m 36s | | The patch does not generate ASF License warnings. |
| | | | 99m 21s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3145/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3145 |
| Optional Tests | dupname asflicense codespell shellcheck shelldocs compile cc mvnsite javac unit golang |
| uname | Linux d8706bb0ed9b 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / d514e6564072619ef5df31b0e697f34f5fbd9a41 |
| Default Java | Red Hat, Inc.-1.8.0_292-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3145/2/testReport/ |
| Max. process+thread count | 520 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3145/2/console |
| versions | git=2.27.0 maven=3.6.3 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] fengnanli commented on pull request #2639: HDFS-15785. Datanode to support using DNS to resolve nameservices to IP addresses to get list of namenodes.
fengnanli commented on pull request #2639:
URL: https://github.com/apache/hadoop/pull/2639#issuecomment-868952941

Looks good to me. @goiri Do you want to take another look?
[GitHub] [hadoop] GauthamBanasandra opened a new pull request #3145: [Do not commit][WIP] CI for Centos 8
GauthamBanasandra opened a new pull request #3145:
URL: https://github.com/apache/hadoop/pull/3145
[jira] [Work logged] (HADOOP-17764) S3AInputStream read does not re-open the input stream on the second read retry attempt
[ https://issues.apache.org/jira/browse/HADOOP-17764?focusedWorklogId=615183&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615183 ]

ASF GitHub Bot logged work on HADOOP-17764:
-------------------------------------------
        Author: ASF GitHub Bot
    Created on: 25/Jun/21 20:11
    Start Date: 25/Jun/21 20:11
    Worklog Time Spent: 10m
    Work Description: steveloughran commented on pull request #3109:
URL: https://github.com/apache/hadoop/pull/3109#issuecomment-868808282

Thanks. Merged to trunk, then (locally) cherry-picked that to branch-3.3, ran the new test (and only that test!), and pushed up. @majdyz thanks! Your contribution is appreciated.

Issue Time Tracking
-------------------
    Worklog Id: (was: 615183)
    Time Spent: 8h 40m (was: 8.5h)

> S3AInputStream read does not re-open the input stream on the second read retry attempt
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-17764
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17764
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.3.1
>            Reporter: Zamil Majdy
>            Assignee: Zamil Majdy
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.3.2
>
>          Time Spent: 8h 40m
>  Remaining Estimate: 0h
>
> *Bug description:*
> The read method in S3AInputStream has the following behaviour when an IOException happens during the read:
> * {{reopen and read quickly}}: after failing in the first attempt of {{read}}, the client will reopen the stream and try reading again without a {{sleep}}.
> * {{reopen and wait for fixed duration}}: after a subsequent failed {{read}} attempt, the client will reopen the stream, sleep for {{fs.s3a.retry.interval}} milliseconds (defaults to 500 ms), and then try reading from the stream.
> During the {{reopen and read quickly}} process, if a second failure happens, the subsequent read is retried without reopening the input stream. This causes some already-read bytes to be skipped, which results in corrupt or truncated data.
>
> *Scenario to reproduce:*
> * Execute S3AInputStream `read()` or `read(b, off, len)`.
> * The read fails and throws a `Connection Reset` exception after reading some data.
> * The InputStream is re-opened and another `read()` or `read(b, off, len)` is executed.
> * The read fails a second time and throws a `Connection Reset` exception after reading some data.
> * The InputStream is not re-opened, and another `read()` or `read(b, off, len)` is executed after the sleep.
> * The read succeeds, but it skips the bytes that had already been read before the second failure.
>
> *Proposed fix:*
> [https://github.com/apache/hadoop/pull/3109]
> Added a test that reproduces the issue along with the fix.
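The retry behaviour described above is easiest to see in code. Below is a minimal, self-contained Java sketch of the corrected pattern, not the actual S3AInputStream implementation: `StreamOpener` and `ReopeningReader` are hypothetical names, and the real class delegates retry policy to S3A's invoker machinery. What it demonstrates is the essence of the fix: the stream is re-opened at the recorded position before every retry, so bytes consumed by a failed attempt are never silently skipped.

```java
import java.io.IOException;
import java.io.InputStream;

/**
 * Minimal sketch of the HADOOP-17764 retry pattern. Hypothetical names;
 * the real S3AInputStream uses a ranged S3 GET behind the opener.
 */
public class ReopeningReader {

  /** Stand-in for opening a ranged GET starting at the given offset. */
  public interface StreamOpener {
    InputStream open(long offset) throws IOException;
  }

  private final StreamOpener opener;
  private InputStream in;
  private long pos; // bytes successfully returned to the caller so far

  public ReopeningReader(StreamOpener opener) throws IOException {
    this.opener = opener;
    this.in = opener.open(0);
  }

  /** Reads one byte, re-opening at {@code pos} before each retry. */
  public int read(int maxRetries, long retryIntervalMillis)
      throws IOException, InterruptedException {
    IOException last = null;
    for (int attempt = 0; attempt <= maxRetries; attempt++) {
      try {
        int b = in.read();
        if (b >= 0) {
          pos++; // advance only once the byte is actually delivered
        }
        return b;
      } catch (IOException e) {
        last = e;
        // The bug: this reopen was skipped from the second retry onwards,
        // leaving the stream mid-range and silently dropping bytes.
        try {
          in.close();
        } catch (IOException ignored) {
          // best effort: the connection is already broken
        }
        in = opener.open(pos); // resume exactly where the caller left off
        if (attempt > 0) {
          // first retry is immediate; later retries wait, loosely modelling
          // fs.s3a.retry.interval from the description above
          Thread.sleep(retryIntervalMillis);
        }
      }
    }
    throw last;
  }
}
```

With this loop, the second `Connection Reset` in the reproduction scenario lands in the same catch block as the first, and the resume offset `pos` guarantees the re-read starts at the first undelivered byte.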
[GitHub] [hadoop] steveloughran commented on pull request #3109: HADOOP-17764. S3AInputStream read does not re-open the input stream on the second read retry attempt
steveloughran commented on pull request #3109:
URL: https://github.com/apache/hadoop/pull/3109#issuecomment-868808282

Thanks. Merged to trunk, then (locally) cherry-picked that to branch-3.3, ran the new test (and only that test!), and pushed up. @majdyz thanks! Your contribution is appreciated.
[jira] [Work logged] (HADOOP-17764) S3AInputStream read does not re-open the input stream on the second read retry attempt
[ https://issues.apache.org/jira/browse/HADOOP-17764?focusedWorklogId=615181&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615181 ]

ASF GitHub Bot logged work on HADOOP-17764:
-------------------------------------------
        Author: ASF GitHub Bot
    Created on: 25/Jun/21 20:09
    Start Date: 25/Jun/21 20:09
    Worklog Time Spent: 10m
    Work Description: hadoop-yetus removed a comment on pull request #3109:
URL: https://github.com/apache/hadoop/pull/3109#issuecomment-867763605

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 52s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 59m 35s | | trunk passed |
| +1 :green_heart: | compile | 0m 44s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 0m 26s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 42s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 21s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 11s | | trunk passed |
| +1 :green_heart: | shadedclient | 17m 5s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 36s | | the patch passed |
| +1 :green_heart: | compile | 0m 38s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 0m 38s | | the patch passed |
| +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 0m 31s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 19s | [/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/10/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt) | hadoop-tools/hadoop-aws: The patch generated 4 new + 9 unchanged - 0 fixed = 13 total (was 9) |
| +1 :green_heart: | mvnsite | 0m 35s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 14s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 16s | | the patch passed |
| +1 :green_heart: | shadedclient | 16m 59s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 30s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 28s | | The patch does not generate ASF License warnings. |
| | | | 107m 10s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/10/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3109 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 38ed7c3f7332 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / bb232121d40a1d1a6473341a4869907739fa3956 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/10/testReport/ |
| Max. process+thread count | 599 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/10/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[jira] [Work logged] (HADOOP-17764) S3AInputStream read does not re-open the input stream on the second read retry attempt
[ https://issues.apache.org/jira/browse/HADOOP-17764?focusedWorklogId=615182&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615182 ]

ASF GitHub Bot logged work on HADOOP-17764:
-------------------------------------------
        Author: ASF GitHub Bot
    Created on: 25/Jun/21 20:09
    Start Date: 25/Jun/21 20:09
    Worklog Time Spent: 10m
    Work Description: hadoop-yetus removed a comment on pull request #3109:
URL: https://github.com/apache/hadoop/pull/3109#issuecomment-868684088

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 55s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 35m 53s | | trunk passed |
| +1 :green_heart: | compile | 0m 49s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 0m 29s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 48s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 22s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 47s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 32s | | trunk passed |
| +1 :green_heart: | shadedclient | 18m 24s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 35s | | the patch passed |
| +1 :green_heart: | compile | 0m 40s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 0m 40s | | the patch passed |
| +1 :green_heart: | compile | 0m 32s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 0m 32s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 23s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 40s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 15s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 26s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 18s | | the patch passed |
| +1 :green_heart: | shadedclient | 16m 55s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 43s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. |
| | | | 86m 1s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/11/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3109 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux e92c2d7139cd 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 448b2ef2baefcc74e7f974245a3d654a80d292c8 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/11/testReport/ |
| Max. process+thread count | 577 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/11/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3109: HADOOP-17764. S3AInputStream read does not re-open the input stream on the second read retry attempt
hadoop-yetus removed a comment on pull request #3109:
URL: https://github.com/apache/hadoop/pull/3109#issuecomment-868684088

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 55s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 35m 53s | | trunk passed |
| +1 :green_heart: | compile | 0m 49s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 0m 29s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 48s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 22s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 47s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 32s | | trunk passed |
| +1 :green_heart: | shadedclient | 18m 24s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 35s | | the patch passed |
| +1 :green_heart: | compile | 0m 40s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 0m 40s | | the patch passed |
| +1 :green_heart: | compile | 0m 32s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 0m 32s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 23s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 40s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 15s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 26s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 18s | | the patch passed |
| +1 :green_heart: | shadedclient | 16m 55s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 43s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. |
| | | | 86m 1s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/11/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3109 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux e92c2d7139cd 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 448b2ef2baefcc74e7f974245a3d654a80d292c8 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/11/testReport/ |
| Max. process+thread count | 577 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/11/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
hadoop-yetus commented on pull request #3135:
URL: https://github.com/apache/hadoop/pull/3135#issuecomment-868807286

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 1m 0s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 12m 29s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 22m 25s | | trunk passed |
| +1 :green_heart: | compile | 9m 59s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 8m 20s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 45s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 29s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 18s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 11s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 2m 55s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 35s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 24s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 0s | | the patch passed |
| +1 :green_heart: | compile | 9m 12s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 9m 12s | | the patch passed |
| +1 :green_heart: | compile | 8m 14s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 8m 14s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 40s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 20s | | the patch passed |
| +1 :green_heart: | javadoc | 1m 11s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 5s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 8s | | the patch passed |
| +1 :green_heart: | shadedclient | 16m 44s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 0s | | hadoop-yarn-api in the patch passed. |
| +1 :green_heart: | unit | 2m 37s | | hadoop-yarn-server-router in the patch passed. |
| +1 :green_heart: | asflicense | 0m 45s | | The patch does not generate ASF License warnings. |
| | | | 131m 0s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3135/6/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3135 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 0169cf36d9e6 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 709efc474962947419f25571050866680245e093 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3135/6/testReport/ |
| Max. process+thread count | 675 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: hadoop-yarn-project/hadoop-yarn |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3135/6/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[jira] [Updated] (HADOOP-17764) S3AInputStream read does not re-open the input stream on the second read retry attempt
[ https://issues.apache.org/jira/browse/HADOOP-17764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-17764:
------------------------------------
    Fix Version/s: 3.3.2

> S3AInputStream read does not re-open the input stream on the second read retry attempt
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-17764
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17764
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.3.1
>            Reporter: Zamil Majdy
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.3.2
>
>          Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> *Bug description:*
> The read method in S3AInputStream has the following behaviour when an IOException happens during the read:
> * {{reopen and read quickly}}: after failing in the first attempt of {{read}}, the client will reopen the stream and try reading again without a {{sleep}}.
> * {{reopen and wait for fixed duration}}: after a subsequent failed {{read}} attempt, the client will reopen the stream, sleep for {{fs.s3a.retry.interval}} milliseconds (defaults to 500 ms), and then try reading from the stream.
> During the {{reopen and read quickly}} process, if a second failure happens, the subsequent read is retried without reopening the input stream. This causes some already-read bytes to be skipped, which results in corrupt or truncated data.
>
> *Scenario to reproduce:*
> * Execute S3AInputStream `read()` or `read(b, off, len)`.
> * The read fails and throws a `Connection Reset` exception after reading some data.
> * The InputStream is re-opened and another `read()` or `read(b, off, len)` is executed.
> * The read fails a second time and throws a `Connection Reset` exception after reading some data.
> * The InputStream is not re-opened, and another `read()` or `read(b, off, len)` is executed after the sleep.
> * The read succeeds, but it skips the bytes that had already been read before the second failure.
>
> *Proposed fix:*
> [https://github.com/apache/hadoop/pull/3109]
> Added a test that reproduces the issue along with the fix.
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3109: HADOOP-17764. S3AInputStream read does not re-open the input stream on the second read retry attempt
hadoop-yetus removed a comment on pull request #3109:
URL: https://github.com/apache/hadoop/pull/3109#issuecomment-867763605

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 52s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 59m 35s | | trunk passed |
| +1 :green_heart: | compile | 0m 44s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 0m 26s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 42s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 21s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 11s | | trunk passed |
| +1 :green_heart: | shadedclient | 17m 5s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 36s | | the patch passed |
| +1 :green_heart: | compile | 0m 38s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 0m 38s | | the patch passed |
| +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 0m 31s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 19s | [/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/10/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt) | hadoop-tools/hadoop-aws: The patch generated 4 new + 9 unchanged - 0 fixed = 13 total (was 9) |
| +1 :green_heart: | mvnsite | 0m 35s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 14s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 16s | | the patch passed |
| +1 :green_heart: | shadedclient | 16m 59s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 30s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 28s | | The patch does not generate ASF License warnings. |
| | | | 107m 10s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/10/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3109 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 38ed7c3f7332 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / bb232121d40a1d1a6473341a4869907739fa3956 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/10/testReport/ |
| Max. process+thread count | 599 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/10/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[jira] [Assigned] (HADOOP-17764) S3AInputStream read does not re-open the input stream on the second read retry attempt
[ https://issues.apache.org/jira/browse/HADOOP-17764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran reassigned HADOOP-17764:
---------------------------------------
    Assignee: Zamil Majdy

> S3AInputStream read does not re-open the input stream on the second read retry attempt
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-17764
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17764
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.3.1
>            Reporter: Zamil Majdy
>            Assignee: Zamil Majdy
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.3.2
>
>          Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> *Bug description:*
> The read method in S3AInputStream has the following behaviour when an IOException happens during the read:
> * {{reopen and read quickly}}: after failing in the first attempt of {{read}}, the client will reopen the stream and try reading again without a {{sleep}}.
> * {{reopen and wait for fixed duration}}: after a subsequent failed {{read}} attempt, the client will reopen the stream, sleep for {{fs.s3a.retry.interval}} milliseconds (defaults to 500 ms), and then try reading from the stream.
> During the {{reopen and read quickly}} process, if a second failure happens, the subsequent read is retried without reopening the input stream. This causes some already-read bytes to be skipped, which results in corrupt or truncated data.
>
> *Scenario to reproduce:*
> * Execute S3AInputStream `read()` or `read(b, off, len)`.
> * The read fails and throws a `Connection Reset` exception after reading some data.
> * The InputStream is re-opened and another `read()` or `read(b, off, len)` is executed.
> * The read fails a second time and throws a `Connection Reset` exception after reading some data.
> * The InputStream is not re-opened, and another `read()` or `read(b, off, len)` is executed after the sleep.
> * The read succeeds, but it skips the bytes that had already been read before the second failure.
>
> *Proposed fix:*
> [https://github.com/apache/hadoop/pull/3109]
> Added a test that reproduces the issue along with the fix.
[jira] [Work logged] (HADOOP-17765) ABFS: Use Unique File Paths in Tests
[ https://issues.apache.org/jira/browse/HADOOP-17765?focusedWorklogId=615174&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615174 ]

ASF GitHub Bot logged work on HADOOP-17765:
-------------------------------------------
        Author: ASF GitHub Bot
    Created on: 25/Jun/21 19:43
    Start Date: 25/Jun/21 19:43
    Worklog Time Spent: 10m
    Work Description: hadoop-yetus commented on pull request #3122:
URL: https://github.com/apache/hadoop/pull/3122#issuecomment-868794040

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 43s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 21 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 30m 45s | | trunk passed |
| +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 0m 29s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 40s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 31s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 1s | | trunk passed |
| +1 :green_heart: | shadedclient | 14m 15s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 30s | | the patch passed |
| +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 0m 31s | | the patch passed |
| +1 :green_heart: | compile | 0m 27s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 0m 27s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 18s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 30s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 0s | | the patch passed |
| +1 :green_heart: | shadedclient | 13m 48s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 9s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. |
| | | | 72m 6s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3122/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3122 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 59e05e3e7509 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 5dbb6f1d9c2fb508067560d595fc92ea336b7d89 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3122/3/testReport/ |
| Max. process+thread count | 543 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3122/3/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #3122: HADOOP-17765. ABFS: Use Unique File Paths in Tests
hadoop-yetus commented on pull request #3122:
URL: https://github.com/apache/hadoop/pull/3122#issuecomment-868794040

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 43s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 21 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 30m 45s | | trunk passed |
| +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 0m 29s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 40s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 31s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 1s | | trunk passed |
| +1 :green_heart: | shadedclient | 14m 15s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 30s | | the patch passed |
| +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 0m 31s | | the patch passed |
| +1 :green_heart: | compile | 0m 27s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 0m 27s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 18s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 30s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 23s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 0s | | the patch passed |
| +1 :green_heart: | shadedclient | 13m 48s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 9s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. |
| | | | 72m 6s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3122/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3122 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 59e05e3e7509 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 5dbb6f1d9c2fb508067560d595fc92ea336b7d89 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3122/3/testReport/ |
| Max. process+thread count | 543 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3122/3/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[jira] [Work logged] (HADOOP-17764) S3AInputStream read does not re-open the input stream on the second read retry attempt
[ https://issues.apache.org/jira/browse/HADOOP-17764?focusedWorklogId=615151&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615151 ]

ASF GitHub Bot logged work on HADOOP-17764:
-------------------------------------------
        Author: ASF GitHub Bot
    Created on: 25/Jun/21 19:02
    Start Date: 25/Jun/21 19:02
    Worklog Time Spent: 10m
    Work Description: steveloughran merged pull request #3109:
URL: https://github.com/apache/hadoop/pull/3109

Issue Time Tracking
-------------------
    Worklog Id: (was: 615151)
    Time Spent: 8h 10m (was: 8h)

> S3AInputStream read does not re-open the input stream on the second read retry attempt
> ---------------------------------------------------------------------------------------
>
>                 Key: HADOOP-17764
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17764
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.3.1
>            Reporter: Zamil Majdy
>            Priority: Major
>              Labels: pull-request-available
>
>          Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> *Bug description:*
> The read method in S3AInputStream has the following behaviour when an IOException happens during the read:
> * {{reopen and read quickly}}: after failing in the first attempt of {{read}}, the client will reopen the stream and try reading again without a {{sleep}}.
> * {{reopen and wait for fixed duration}}: after a subsequent failed {{read}} attempt, the client will reopen the stream, sleep for {{fs.s3a.retry.interval}} milliseconds (defaults to 500 ms), and then try reading from the stream.
> During the {{reopen and read quickly}} process, if a second failure happens, the subsequent read is retried without reopening the input stream. This causes some already-read bytes to be skipped, which results in corrupt or truncated data.
>
> *Scenario to reproduce:*
> * Execute S3AInputStream `read()` or `read(b, off, len)`.
> * The read fails and throws a `Connection Reset` exception after reading some data.
> * The InputStream is re-opened and another `read()` or `read(b, off, len)` is executed.
> * The read fails a second time and throws a `Connection Reset` exception after reading some data.
> * The InputStream is not re-opened, and another `read()` or `read(b, off, len)` is executed after the sleep.
> * The read succeeds, but it skips the bytes that had already been read before the second failure.
>
> *Proposed fix:*
> [https://github.com/apache/hadoop/pull/3109]
> Added a test that reproduces the issue along with the fix.
[GitHub] [hadoop] steveloughran merged pull request #3109: HADOOP-17764. S3AInputStream read does not re-open the input stream on the second read retry attempt
steveloughran merged pull request #3109:
URL: https://github.com/apache/hadoop/pull/3109
[GitHub] [hadoop] goiri commented on a change in pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
goiri commented on a change in pull request #3135:
URL: https://github.com/apache/hadoop/pull/3135#discussion_r658925079

##########
File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java
##########
@@ -675,6 +715,11 @@ public Object call() throws Exception {
       }
       return results;
     }
+  Map invokeConcurrent(Collection clusterIds,

Review comment:
   Add break line right before the method definition.

##########
File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java
##########
@@ -196,6 +198,10 @@ public void init(String userName) {
     clientRMProxies = new ConcurrentHashMap();
     routerMetrics = RouterMetrics.getMetrics();
+
+    returnPartialReport  = conf.getBoolean(

Review comment:
   Extra space before =
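For readers skimming the thread, the two style nits above amount to the following shape. This is a hypothetical rendering: the generic type parameters were stripped from the quoted diff by the mail archive, so `Object` stands in for them, and a plain `Map` replaces Hadoop's `Configuration`.

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// Illustrative only; not the actual FederationClientInterceptor code.
class FederationInterceptorStyle {

  private boolean returnPartialReport;

  void init(Map<String, Boolean> conf) {
    // Second nit: no extra space before '=' in the assignment.
    returnPartialReport = conf.getOrDefault("partial-result.enabled", false);
  }

  // First nit: a blank line right before the method definition.
  Map<Object, Object> invokeConcurrent(Collection<Object> clusterIds) {
    return new HashMap<>();
  }
}
```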
[jira] [Commented] (HADOOP-17769) Upgrade JUnit to 4.13.2
[ https://issues.apache.org/jira/browse/HADOOP-17769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369608#comment-17369608 ]

Ayush Saxena commented on HADOOP-17769:
---------------------------------------
Committed to trunk, branch-3.3, branch-3.2 and branch-2.10.
Thanx [~ahussein] for the contribution!!!

> Upgrade JUnit to 4.13.2
> ------------------------
>
>                 Key: HADOOP-17769
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17769
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 3.3.1, 3.4.0, 2.10.2, 3.2.3
>            Reporter: Ahmed Hussein
>            Assignee: Ahmed Hussein
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0, 2.10.2, 3.2.3, 3.3.2
>
>          Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> JUnit 4.13.1 has a bug, reported in JUnit [issue-1652|https://github.com/junit-team/junit4/issues/1652]: _Timeout ThreadGroups should not be destroyed_.
> After upgrading JUnit to 4.13.1 in HADOOP-17602, {{TestBlockRecovery}} started to fail regularly in branch-3.x and branch-2.10.
> While investigating the failure in branch-2.10 (HDFS-16072), I found out that this bug is the main reason {{TestBlockRecovery}} started to fail: the JUnit timeout would try to destroy a ThreadGroup that had already been destroyed, which throws {{java.lang.IllegalThreadStateException}}.
> The bug has been fixed in JUnit 4.13.2.
> For branch-3.x, HDFS-15940 did not address the root cause of the problem; splitting {{TestBlockRecovery}} eventually hid the bug, but the upgrade needs to be done so that the problem does not show up in another unit test.
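For context, the failure mode being fixed looks roughly like the sketch below. This is an assumed reconstruction, not the actual {{TestBlockRecovery}} code: the `Timeout` rule runs the test body in a thread group of its own, and under JUnit 4.13.1 the rule destroyed that group during cleanup; `ThreadGroup.destroy()` throws `java.lang.IllegalThreadStateException` when the group is not empty or was already destroyed, so an otherwise-passing test fails. JUnit 4.13.2 simply stops destroying the group.

```java
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

/**
 * Sketch of a test shape that can trip the JUnit 4.13.1 bug described
 * above. Hypothetical example, not TestBlockRecovery itself.
 */
public class TimeoutThreadGroupExample {

  // The Timeout rule runs the test in a dedicated thread (and thread group).
  @Rule
  public Timeout globalTimeout = Timeout.seconds(10);

  @Test
  public void testLeavesWorkerRunning() throws Exception {
    // A worker started here lands in the rule's thread group and may still
    // be alive when 4.13.1's cleanup calls ThreadGroup.destroy(), which
    // then throws IllegalThreadStateException and fails the test.
    Thread worker = new Thread(() -> {
      try {
        Thread.sleep(60_000); // simulated long-running datanode thread
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    worker.setDaemon(true); // the test finishes while the worker lives on
    worker.start();
  }
}
```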
[jira] [Resolved] (HADOOP-17769) Upgrade JUnit to 4.13.2
[ https://issues.apache.org/jira/browse/HADOOP-17769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ayush Saxena resolved HADOOP-17769.
-----------------------------------
    Fix Version/s: 3.3.2
                   3.2.3
                   2.10.2
                   3.4.0
     Hadoop Flags: Reviewed
       Resolution: Fixed

> Upgrade JUnit to 4.13.2
> ------------------------
>
>                 Key: HADOOP-17769
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17769
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 3.3.1, 3.4.0, 2.10.2, 3.2.3
>            Reporter: Ahmed Hussein
>            Assignee: Ahmed Hussein
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0, 2.10.2, 3.2.3, 3.3.2
>
>          Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> JUnit 4.13.1 has a bug, reported in JUnit [issue-1652|https://github.com/junit-team/junit4/issues/1652]: _Timeout ThreadGroups should not be destroyed_.
> After upgrading JUnit to 4.13.1 in HADOOP-17602, {{TestBlockRecovery}} started to fail regularly in branch-3.x and branch-2.10.
> While investigating the failure in branch-2.10 (HDFS-16072), I found out that this bug is the main reason {{TestBlockRecovery}} started to fail: the JUnit timeout would try to destroy a ThreadGroup that had already been destroyed, which throws {{java.lang.IllegalThreadStateException}}.
> The bug has been fixed in JUnit 4.13.2.
> For branch-3.x, HDFS-15940 did not address the root cause of the problem; splitting {{TestBlockRecovery}} eventually hid the bug, but the upgrade needs to be done so that the problem does not show up in another unit test.
[jira] [Work logged] (HADOOP-17769) Upgrade JUnit to 4.13.2
[ https://issues.apache.org/jira/browse/HADOOP-17769?focusedWorklogId=615115&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615115 ]

ASF GitHub Bot logged work on HADOOP-17769:
-------------------------------------------
        Author: ASF GitHub Bot
    Created on: 25/Jun/21 17:20
    Start Date: 25/Jun/21 17:20
    Worklog Time Spent: 10m
    Work Description: ayushtkn merged pull request #3139:
URL: https://github.com/apache/hadoop/pull/3139

Issue Time Tracking
-------------------
    Worklog Id: (was: 615115)
    Time Spent: 4.5h (was: 4h 20m)

> Upgrade JUnit to 4.13.2
> ------------------------
>
>                 Key: HADOOP-17769
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17769
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 3.3.1, 3.4.0, 2.10.2, 3.2.3
>            Reporter: Ahmed Hussein
>            Assignee: Ahmed Hussein
>            Priority: Major
>              Labels: pull-request-available
>
>          Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> JUnit 4.13.1 has a bug, reported in JUnit [issue-1652|https://github.com/junit-team/junit4/issues/1652]: _Timeout ThreadGroups should not be destroyed_.
> After upgrading JUnit to 4.13.1 in HADOOP-17602, {{TestBlockRecovery}} started to fail regularly in branch-3.x and branch-2.10.
> While investigating the failure in branch-2.10 (HDFS-16072), I found out that this bug is the main reason {{TestBlockRecovery}} started to fail: the JUnit timeout would try to destroy a ThreadGroup that had already been destroyed, which throws {{java.lang.IllegalThreadStateException}}.
> The bug has been fixed in JUnit 4.13.2.
> For branch-3.x, HDFS-15940 did not address the root cause of the problem; splitting {{TestBlockRecovery}} eventually hid the bug, but the upgrade needs to be done so that the problem does not show up in another unit test.
[GitHub] [hadoop] ayushtkn merged pull request #3139: HADOOP-17769. Upgrade JUnit to 4.13.2. branch-3.2
ayushtkn merged pull request #3139: URL: https://github.com/apache/hadoop/pull/3139 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17769) Upgrade JUnit to 4.13.2
[ https://issues.apache.org/jira/browse/HADOOP-17769?focusedWorklogId=615113&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615113 ] ASF GitHub Bot logged work on HADOOP-17769: --- Author: ASF GitHub Bot Created on: 25/Jun/21 17:19 Start Date: 25/Jun/21 17:19 Worklog Time Spent: 10m Work Description: ayushtkn merged pull request #3138: URL: https://github.com/apache/hadoop/pull/3138 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 615113) Time Spent: 4h 20m (was: 4h 10m) > Upgrade JUnit to 4.13.2 > --- > > Key: HADOOP-17769 > URL: https://issues.apache.org/jira/browse/HADOOP-17769 > Project: Hadoop Common > Issue Type: Bug > Affects Versions: 3.3.1, 3.4.0, 2.10.2, 3.2.3 > Reporter: Ahmed Hussein > Assignee: Ahmed Hussein > Priority: Major > Labels: pull-request-available > Time Spent: 4h 20m > Remaining Estimate: 0h > > JUnit 4.13.1 has a bug, reported as JUnit [issue-1652|https://github.com/junit-team/junit4/issues/1652] _Timeout ThreadGroups should not be destroyed_. > After upgrading JUnit to 4.13.1 in HADOOP-17602, {{TestBlockRecovery}} started to fail regularly in branch-3.x and branch-2.10. > While investigating the failure in branch-2.10 (HDFS-16072), I found that this bug is the main reason {{TestBlockRecovery}} started to fail: the JUnit timeout tries to destroy a ThreadGroup that has already been destroyed, which throws {{java.lang.IllegalThreadStateException}}. > The bug has been fixed in JUnit 4.13.2. > For branch-3.x, HDFS-15940 did not address the root cause of the problem; splitting {{TestBlockRecovery}} eventually hid the bug, but the upgrade needs to be done so that the problem does not show up in another unit test. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ayushtkn merged pull request #3138: HADOOP-17769. Upgrade JUnit to 4.13.2. branch-3.3
ayushtkn merged pull request #3138: URL: https://github.com/apache/hadoop/pull/3138 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-17139?focusedWorklogId=615102&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615102 ] ASF GitHub Bot logged work on HADOOP-17139: --- Author: ASF GitHub Bot Created on: 25/Jun/21 17:00 Start Date: 25/Jun/21 17:00 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3101: URL: https://github.com/apache/hadoop/pull/3101#issuecomment-868703568 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 34s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 37s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 6s | | trunk passed | | +1 :green_heart: | compile | 21m 55s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 19m 25s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 3m 56s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 43s | | trunk passed | | +1 :green_heart: | javadoc | 1m 44s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 25s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 46s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 24s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 37s | | the patch passed | | +1 :green_heart: | compile | 20m 36s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 20m 36s | | the patch passed | | +1 :green_heart: | compile | 19m 20s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 19m 20s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 50s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/3/artifact/out/results-checkstyle-root.txt) | root: The patch generated 151 new + 83 unchanged - 1 fixed = 234 total (was 84) | | +1 :green_heart: | mvnsite | 2m 27s | | the patch passed | | +1 :green_heart: | javadoc | 1m 44s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 25s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | spotbugs | 1m 37s | [/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/3/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html) | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | shadedclient | 15m 13s | | patch has no errors when building and testing our client artifacts. 
| _ Other Tests _ | | -1 :x: | unit | 17m 15s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/3/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch passed. | | -1 :x: | unit | 2m 37s | [/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/3/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch passed. | | -1 :x: | asflicense | 1m 2s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/3/artifact/out/results-asflicense.txt) | The patch generated 1 ASF License warnings. | | | | 201m 44s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-tools/hadoop-aws | | | org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation$UploadEntry$SizeComparator implements Comparator but not Serializable At CopyFromLocalOperation.java:Serializable At CopyFromLocalOperation.java:[lines 215-218] |
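The SpotBugs warning quoted above is the standard "Comparator not Serializable" pattern. The usual remedy, shown here as a hedged sketch with a hypothetical UploadEntry rather than the patch's actual class, is to let the nested comparator implement Serializable, since comparators often end up inside serializable collections such as TreeMap.

```java
import java.io.Serializable;
import java.util.Comparator;

// Hypothetical stand-in for the patch's UploadEntry; only the size field
// matters for the comparator.
class UploadEntry {
  final long size;

  UploadEntry(long size) {
    this.size = size;
  }

  // Implementing Serializable (with a pinned serialVersionUID) silences the
  // SpotBugs finding without changing the comparator's behaviour.
  static final class SizeComparator
      implements Comparator<UploadEntry>, Serializable {
    private static final long serialVersionUID = 1L;

    @Override
    public int compare(UploadEntry a, UploadEntry b) {
      return Long.compare(a.size, b.size);
    }
  }
}
```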
[GitHub] [hadoop] hadoop-yetus commented on pull request #3101: HADOOP-17139 Re-enable optimized copyFromLocal implementation in S3AFileSystem
hadoop-yetus commented on pull request #3101: URL: https://github.com/apache/hadoop/pull/3101#issuecomment-868703568 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 34s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 37s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 6s | | trunk passed | | +1 :green_heart: | compile | 21m 55s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 19m 25s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 3m 56s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 43s | | trunk passed | | +1 :green_heart: | javadoc | 1m 44s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 25s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 46s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 24s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 37s | | the patch passed | | +1 :green_heart: | compile | 20m 36s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 20m 36s | | the patch passed | | +1 :green_heart: | compile | 19m 20s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 19m 20s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 50s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/3/artifact/out/results-checkstyle-root.txt) | root: The patch generated 151 new + 83 unchanged - 1 fixed = 234 total (was 84) | | +1 :green_heart: | mvnsite | 2m 27s | | the patch passed | | +1 :green_heart: | javadoc | 1m 44s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 25s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | spotbugs | 1m 37s | [/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/3/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html) | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | shadedclient | 15m 13s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 17m 15s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/3/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch passed. 
| | -1 :x: | unit | 2m 37s | [/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/3/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch passed. | | -1 :x: | asflicense | 1m 2s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/3/artifact/out/results-asflicense.txt) | The patch generated 1 ASF License warnings. | | | | 201m 44s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-tools/hadoop-aws | | | org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation$UploadEntry$SizeComparator implements Comparator but not Serializable At CopyFromLocalOperation.java:Serializable At CopyFromLocalOperation.java:[lines 215-218] | | Failed junit tests | hadoop.fs.TestFilterFileSystem | | | hadoop.fs.TestHarFileSystem | | | hadoop.fs.s3a.commit.staging.TestStagingCommitter | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3101
[jira] [Work logged] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-17139?focusedWorklogId=615093&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615093 ] ASF GitHub Bot logged work on HADOOP-17139: --- Author: ASF GitHub Bot Created on: 25/Jun/21 16:48 Start Date: 25/Jun/21 16:48 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3101: URL: https://github.com/apache/hadoop/pull/3101#issuecomment-868696556 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 54s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 40s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 57s | | trunk passed | | +1 :green_heart: | compile | 21m 5s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 18m 6s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 3m 47s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 37s | | trunk passed | | +1 :green_heart: | javadoc | 1m 48s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 46s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 52s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 32s | | the patch passed | | +1 :green_heart: | compile | 20m 14s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 20m 14s | | the patch passed | | +1 :green_heart: | compile | 18m 18s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 18m 18s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 41s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/2/artifact/out/results-checkstyle-root.txt) | root: The patch generated 156 new + 83 unchanged - 1 fixed = 239 total (was 84) | | +1 :green_heart: | mvnsite | 2m 34s | | the patch passed | | +1 :green_heart: | javadoc | 1m 47s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 27s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | spotbugs | 1m 36s | [/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/2/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html) | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | shadedclient | 15m 1s | | patch has no errors when building and testing our client artifacts. 
| _ Other Tests _ | | -1 :x: | unit | 17m 19s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch passed. | | -1 :x: | unit | 3m 48s | [/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/2/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch passed. | | -1 :x: | asflicense | 1m 1s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/2/artifact/out/results-asflicense.txt) | The patch generated 1 ASF License warnings. | | | | 197m 58s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-tools/hadoop-aws | | | org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation$UploadEntry$SizeComparator implements Comparator but not Serializable At CopyFromLocalOperation.java:Serializable At CopyFromLocalOperation.java:[lines 215-218] |
[GitHub] [hadoop] hadoop-yetus commented on pull request #3101: HADOOP-17139 Re-enable optimized copyFromLocal implementation in S3AFileSystem
hadoop-yetus commented on pull request #3101: URL: https://github.com/apache/hadoop/pull/3101#issuecomment-868696556 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 54s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 40s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 57s | | trunk passed | | +1 :green_heart: | compile | 21m 5s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 18m 6s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 3m 47s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 37s | | trunk passed | | +1 :green_heart: | javadoc | 1m 48s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 46s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 52s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 29s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 32s | | the patch passed | | +1 :green_heart: | compile | 20m 14s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 20m 14s | | the patch passed | | +1 :green_heart: | compile | 18m 18s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 18m 18s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 3m 41s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/2/artifact/out/results-checkstyle-root.txt) | root: The patch generated 156 new + 83 unchanged - 1 fixed = 239 total (was 84) | | +1 :green_heart: | mvnsite | 2m 34s | | the patch passed | | +1 :green_heart: | javadoc | 1m 47s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 27s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | spotbugs | 1m 36s | [/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/2/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html) | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | +1 :green_heart: | shadedclient | 15m 1s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 17m 19s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch passed. 
| | -1 :x: | unit | 3m 48s | [/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/2/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch passed. | | -1 :x: | asflicense | 1m 1s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/2/artifact/out/results-asflicense.txt) | The patch generated 1 ASF License warnings. | | | | 197m 58s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-tools/hadoop-aws | | | org.apache.hadoop.fs.s3a.impl.CopyFromLocalOperation$UploadEntry$SizeComparator implements Comparator but not Serializable At CopyFromLocalOperation.java:Serializable At CopyFromLocalOperation.java:[lines 215-218] | | Failed junit tests | hadoop.fs.TestFilterFileSystem | | | hadoop.fs.TestHarFileSystem | | | hadoop.fs.s3a.commit.staging.TestStagingCommitter | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3101
[jira] [Work logged] (HADOOP-17764) S3AInputStream read does not re-open the input stream on the second read retry attempt
[ https://issues.apache.org/jira/browse/HADOOP-17764?focusedWorklogId=615090&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615090 ] ASF GitHub Bot logged work on HADOOP-17764: --- Author: ASF GitHub Bot Created on: 25/Jun/21 16:38 Start Date: 25/Jun/21 16:38 Worklog Time Spent: 10m Work Description: majdyz commented on pull request #3109: URL: https://github.com/apache/hadoop/pull/3109#issuecomment-868690774 There seems to be no complaint from Yetus now :) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 615090) Time Spent: 8h (was: 7h 50m) > S3AInputStream read does not re-open the input stream on the second read retry attempt > -- > > Key: HADOOP-17764 > URL: https://issues.apache.org/jira/browse/HADOOP-17764 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 > Affects Versions: 3.3.1 > Reporter: Zamil Majdy > Priority: Major > Labels: pull-request-available > Time Spent: 8h > Remaining Estimate: 0h > > *Bug description:* > The read method in S3AInputStream has the following behaviour when an IOException happens during a read: > * {{reopen and read quickly}}: after failing the first {{read}} attempt, the client reopens the stream and retries the read without a {{sleep}}. > * {{reopen and wait for fixed duration}}: after a further failed {{read}} attempt, the client reopens the stream, sleeps for {{fs.s3a.retry.interval}} milliseconds (defaults to 500 ms), and then retries the read. > During the {{reopen and read quickly}} step, if a second failure happens, the subsequent read is retried without reopening the input stream. This causes some of the bytes already read to be skipped, which results in corrupt or truncated data. > > *Scenario to reproduce:* > * Execute S3AInputStream `read()` or `read(b, off, len)`. > * The read fails and throws a `Connection Reset` exception after reading some data. > * The InputStream is re-opened and another `read()` or `read(b, off, len)` is executed. > * The read fails for the second time and throws a `Connection Reset` exception after reading some data. > * The InputStream is not re-opened, and another `read()` or `read(b, off, len)` is executed after the sleep. > * The read succeeds, but it skips the first few bytes that had already been read before the second failure. > > *Proposed fix:* > [https://github.com/apache/hadoop/pull/3109] > Added a test that reproduces the issue along with the fix. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
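To make the contract concrete, here is a minimal sketch, not the actual patch, of a reader that re-opens its underlying stream at the last known-good position on every failed attempt. StreamOpener and the two-attempt limit are assumptions for illustration; S3AInputStream's real retry logic is driven by its retry policy.

```java
import java.io.IOException;
import java.io.InputStream;

class ReopeningReader {

  /** Hypothetical stand-in for S3AInputStream's reopen logic. */
  interface StreamOpener {
    InputStream openAt(long pos) throws IOException;
  }

  private final StreamOpener opener;
  private InputStream in;
  private long pos;

  ReopeningReader(StreamOpener opener) throws IOException {
    this.opener = opener;
    this.in = opener.openAt(0);
  }

  int read(byte[] buf, int off, int len) throws IOException {
    IOException last = null;
    for (int attempt = 0; attempt < 2; attempt++) {
      try {
        int n = in.read(buf, off, len);
        if (n > 0) {
          pos += n; // advance only past bytes actually returned to the caller
        }
        return n;
      } catch (IOException e) {
        last = e;
        try {
          in.close();
        } catch (IOException ignored) {
          // best effort: the stream is broken anyway
        }
        // Re-open at the last known-good position on *every* failure; the
        // bug described above was re-opening only before the first retry,
        // so a second retry resumed mid-stream and silently skipped bytes.
        in = opener.openAt(pos);
      }
    }
    throw last;
  }
}
```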
[GitHub] [hadoop] majdyz commented on pull request #3109: HADOOP-17764. S3AInputStream read does not re-open the input stream on the second read retry attempt
majdyz commented on pull request #3109: URL: https://github.com/apache/hadoop/pull/3109#issuecomment-868690774 There seems to be no complaint from Yetus now :) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17764) S3AInputStream read does not re-open the input stream on the second read retry attempt
[ https://issues.apache.org/jira/browse/HADOOP-17764?focusedWorklogId=615088&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615088 ] ASF GitHub Bot logged work on HADOOP-17764: --- Author: ASF GitHub Bot Created on: 25/Jun/21 16:26 Start Date: 25/Jun/21 16:26 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3109: URL: https://github.com/apache/hadoop/pull/3109#issuecomment-868684088 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 55s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 35m 53s | | trunk passed | | +1 :green_heart: | compile | 0m 49s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 29s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 48s | | trunk passed | | +1 :green_heart: | javadoc | 0m 22s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 47s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 32s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 24s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 35s | | the patch passed | | +1 :green_heart: | compile | 0m 40s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 40s | | the patch passed | | +1 :green_heart: | compile | 0m 32s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 32s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 23s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 40s | | the patch passed | | +1 :green_heart: | javadoc | 0m 15s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 26s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 18s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 55s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 43s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. 
| | | | 86m 1s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/11/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3109 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux e92c2d7139cd 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 448b2ef2baefcc74e7f974245a3d654a80d292c8 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/11/testReport/ | | Max. process+thread count | 577 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/11/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #3109: HADOOP-17764. S3AInputStream read does not re-open the input stream on the second read retry attempt
hadoop-yetus commented on pull request #3109: URL: https://github.com/apache/hadoop/pull/3109#issuecomment-868684088 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 55s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 35m 53s | | trunk passed | | +1 :green_heart: | compile | 0m 49s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 29s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 48s | | trunk passed | | +1 :green_heart: | javadoc | 0m 22s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 47s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 32s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 24s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 35s | | the patch passed | | +1 :green_heart: | compile | 0m 40s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 40s | | the patch passed | | +1 :green_heart: | compile | 0m 32s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 32s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 23s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 40s | | the patch passed | | +1 :green_heart: | javadoc | 0m 15s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 26s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 18s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 55s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 43s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. 
| | | | 86m 1s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/11/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3109 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux e92c2d7139cd 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 448b2ef2baefcc74e7f974245a3d654a80d292c8 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/11/testReport/ | | Max. process+thread count | 577 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3109/11/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Work logged] (HADOOP-17764) S3AInputStream read does not re-open the input stream on the second read retry attempt
[ https://issues.apache.org/jira/browse/HADOOP-17764?focusedWorklogId=615085&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615085 ] ASF GitHub Bot logged work on HADOOP-17764: --- Author: ASF GitHub Bot Created on: 25/Jun/21 16:20 Start Date: 25/Jun/21 16:20 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3132: URL: https://github.com/apache/hadoop/pull/3132#issuecomment-868680478 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 48s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 40s | | trunk passed | | +1 :green_heart: | compile | 0m 46s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 27s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 42s | | trunk passed | | +1 :green_heart: | javadoc | 0m 21s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 8s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 32s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 35s | | the patch passed | | +1 :green_heart: | compile | 0m 37s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 37s | | the patch passed | | +1 :green_heart: | compile | 0m 28s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 28s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 19s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 34s | | the patch passed | | +1 :green_heart: | javadoc | 0m 14s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 24s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 11s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 41s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 17s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. 
| | | | 80m 6s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3132/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3132 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 5a556fbf7533 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 448b2ef2baefcc74e7f974245a3d654a80d292c8 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3132/4/testReport/ | | Max. process+thread count | 517 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3132/4/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #3132: HADOOP-17764. S3AInputStream read does not re-open the input stream on the second read retry attempt
hadoop-yetus commented on pull request #3132: URL: https://github.com/apache/hadoop/pull/3132#issuecomment-868680478 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 48s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 40s | | trunk passed | | +1 :green_heart: | compile | 0m 46s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 27s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 42s | | trunk passed | | +1 :green_heart: | javadoc | 0m 21s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 8s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 32s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 35s | | the patch passed | | +1 :green_heart: | compile | 0m 37s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 37s | | the patch passed | | +1 :green_heart: | compile | 0m 28s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 28s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 19s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 34s | | the patch passed | | +1 :green_heart: | javadoc | 0m 14s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 24s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 11s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 41s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 17s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. 
| | | | 80m 6s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3132/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3132 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 5a556fbf7533 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 448b2ef2baefcc74e7f974245a3d654a80d292c8 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3132/4/testReport/ | | Max. process+thread count | 517 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3132/4/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To
[GitHub] [hadoop] hadoop-yetus commented on pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
hadoop-yetus commented on pull request #3135: URL: https://github.com/apache/hadoop/pull/3135#issuecomment-868667101 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 40s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 42s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 17s | | trunk passed | | +1 :green_heart: | compile | 10m 3s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 9m 29s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 56s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 38s | | trunk passed | | +1 :green_heart: | javadoc | 1m 29s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 26s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 9s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 25s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 3s | | the patch passed | | +1 :green_heart: | compile | 8m 35s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 8m 35s | | the patch passed | | +1 :green_heart: | compile | 8m 0s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 8m 0s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 40s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 36s | | the patch passed | | +1 :green_heart: | javadoc | 1m 24s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 16s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 18s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 7s | | hadoop-yarn-api in the patch passed. | | +1 :green_heart: | unit | 2m 45s | | hadoop-yarn-server-router in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | | The patch does not generate ASF License warnings. 
| | | | 127m 0s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3135/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3135 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 7cac1c4e4a34 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 0497bfc9a54ca23c64cdbb054882bd78ce91073c | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3135/5/testReport/ | | Max. process+thread count | 779 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: hadoop-yarn-project/hadoop-yarn | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3135/5/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hadoop] goiri commented on a change in pull request #3141: HDFS-16087. Fix stuck issue in rbfbalance tool.
goiri commented on a change in pull request #3141: URL: https://github.com/apache/hadoop/pull/3141#discussion_r658872774 ## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/rbfbalance/RouterDistCpProcedure.java ## @@ -44,6 +44,7 @@ protected void disableWrite(FedBalanceContext context) throws IOException { Configuration conf = context.getConf(); String mount = context.getMount(); MountTableProcedure.disableWrite(mount, conf); +updateStage(Stage.FINAL_DISTCP); Review comment: Is there a test we can have for this? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-17139?focusedWorklogId=615062&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615062 ] ASF GitHub Bot logged work on HADOOP-17139: --- Author: ASF GitHub Bot Created on: 25/Jun/21 15:34 Start Date: 25/Jun/21 15:34 Worklog Time Spent: 10m Work Description: bogthe commented on pull request #3101: URL: https://github.com/apache/hadoop/pull/3101#issuecomment-868622164 Thanks for the comments, finding them very helpful! The FS change was a long shot, seemed too simple and convenient for my use case (listing of directories) to pass up. Will change it + address your comments and keep the changes coming. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 615062) Time Spent: 1h 10m (was: 1h) > Re-enable optimized copyFromLocal implementation in S3AFileSystem > - > > Key: HADOOP-17139 > URL: https://issues.apache.org/jira/browse/HADOOP-17139 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 > Affects Versions: 3.3.0, 3.2.1 > Reporter: Sahil Takiar > Assignee: Bogdan Stolojan > Priority: Minor > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > It looks like HADOOP-15932 disabled the optimized copyFromLocal implementation in S3A for correctness reasons. innerCopyFromLocalFile should be fixed and re-enabled. The current implementation uses FileSystem.copyFromLocal, which will open an input stream from the local fs and an output stream to the destination fs, and then call IOUtils.copyBytes. With default configs, this will cause S3A to read the file into memory, write it back to a file on the local fs, and then, when the file is closed, upload it to S3. > The optimized version of copyFromLocal in innerCopyFromLocalFile directly creates a PutObjectRequest with the local file as the input. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
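For comparison, a hedged sketch of the direct-put pattern the description refers to, using the AWS SDK for Java v1 that hadoop-aws builds against. The bucket name, key, and file path are placeholders, and this is not the patch itself.

```java
import java.io.File;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.PutObjectRequest;

public class DirectPutSketch {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    // Handing the SDK the File itself lets it stream straight from disk
    // (and know the content length up front), instead of pumping bytes
    // through an in-memory or local-disk staging copy, which is what the
    // generic FileSystem.copyFromLocal fallback ends up doing.
    s3.putObject(new PutObjectRequest("example-bucket", "example/key",
        new File("/tmp/example.bin")));
  }
}
```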
[GitHub] [hadoop] bogthe commented on pull request #3101: HADOOP-17139 Re-enable optimized copyFromLocal implementation in S3AFileSystem
bogthe commented on pull request #3101: URL: https://github.com/apache/hadoop/pull/3101#issuecomment-868622164 Thanks for the comments, finding them very helpful! The FS change was a long shot, seemed too simple and convenient for my use case (listing of directories) to pass up. Will change it + address your comments and keep the changes coming. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-17139?focusedWorklogId=615057&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615057 ] ASF GitHub Bot logged work on HADOOP-17139: --- Author: ASF GitHub Bot Created on: 25/Jun/21 15:29 Start Date: 25/Jun/21 15:29 Worklog Time Spent: 10m Work Description: bogthe commented on a change in pull request #3101: URL: https://github.com/apache/hadoop/pull/3101#discussion_r658852709 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/CopyFromLocalOperation.java ## @@ -0,0 +1,241 @@
+package org.apache.hadoop.fs.s3a.impl;
+
+import org.apache.commons.collections.comparators.ReverseComparator;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathExistsException;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.s3a.Retries;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.Closeable;
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Optional;
+import java.util.Set;
+
+/**
+ * TODO list:
+ * - Improve implementation to use Completable Futures
+ * - Better error handling
+ * - Add abstract class + tests for LocalFS
+ * - Add tests for this class
+ * - Add documentation
+ *   - This class
+ *   - `filesystem.md`
+ * - Clean old `innerCopyFromLocalFile` code up
+ */
+public class CopyFromLocalOperation extends ExecutingStoreOperation<Void> {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      CopyFromLocalOperation.class);
+
+  private final CopyFromLocalOperationCallbacks callbacks;
+  private final boolean deleteSource;
+  private final boolean overwrite;
+  private final Path source;
+  private final Path destination;
+
+  private FileStatus dstStatus;
+
+  public CopyFromLocalOperation(
+      final StoreContext storeContext,
+      Path source,
+      Path destination,
+      boolean deleteSource,
+      boolean overwrite,
+      CopyFromLocalOperationCallbacks callbacks) {
+    super(storeContext);
+    this.callbacks = callbacks;
+    this.deleteSource = deleteSource;
+    this.overwrite = overwrite;
+    this.source = source;
+    this.destination = destination;
+  }
+
+  @Override
+  @Retries.RetryTranslated
+  public Void execute()
+      throws IOException, PathExistsException {
+    LOG.debug("Copying local file from {} to {}", source, destination);
+    File sourceFile = callbacks.pathToFile(source);
+    try {
+      dstStatus = callbacks.getFileStatus(destination);
+    } catch (FileNotFoundException e) {
+      dstStatus = null;
+    }
+
+    checkSource(sourceFile);
+    prepareDestination(destination, sourceFile, overwrite);
+    uploadSourceFromFS();
+
+    if (deleteSource) {
+      callbacks.delete(source, true);
+    }
+
+    return null;
+  }
+
+  private void uploadSourceFromFS()
+      throws IOException, PathExistsException {
+    RemoteIterator<LocatedFileStatus> localFiles = callbacks
+        .listStatusIterator(source, true);
+
+    // After all files are traversed, this set will contain only emptyDirs
+    Set<Path> emptyDirs = new HashSet<>();
+    List<UploadEntry> entries = new ArrayList<>();
+    while (localFiles.hasNext()) {
+      LocatedFileStatus sourceFile = localFiles.next();
+      Path sourceFilePath = sourceFile.getPath();
+
+      // Directory containing this file / directory isn't empty
+      emptyDirs.remove(sourceFilePath.getParent());
+
+      if (sourceFile.isDirectory()) {
+        emptyDirs.add(sourceFilePath);
+        continue;
+      }
+
+      Path destPath = getFinalPath(sourceFilePath);
+      // UploadEntries: have a destination path, a file size
+      entries.add(new UploadEntry(
+          sourceFilePath,
+          destPath,
+          sourceFile.getLen()));
+    }
+
+    if (localFiles instanceof Closeable) {
+      ((Closeable) localFiles).close();
+    }
+
+    // Sort all upload entries based on size
+    entries.sort(new ReverseComparator(new UploadEntry.SizeComparator()));
+
+    int LARGEST_N_FILES = 5;
+    final int sortedUploadsCount = Math.min(LARGEST_N_FILES, entries.size());
+    List<UploadEntry> uploaded = new ArrayList<>();
+
+    // Take only top most X entries and upload
+    for (int uploadNo = 0; uploadNo < sortedUploadsCount; uploadNo++) {
Rev
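The quoted traversal sorts every entry by size and then uploads only the largest few first. As a standalone illustration of that selection step, with a hypothetical Entry pair rather than the patch's UploadEntry, a bounded min-heap yields the same "largest N first" set in O(total log N) without sorting the whole list:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class TopNBySize {

  static final class Entry {
    final String path;
    final long size;

    Entry(String path, long size) {
      this.path = path;
      this.size = size;
    }
  }

  // Keep only the n largest entries: the heap's head is always the smallest
  // current candidate, so anything that survives eviction is top-n.
  static List<Entry> largestN(Iterable<Entry> all, int n) {
    PriorityQueue<Entry> heap =
        new PriorityQueue<>(Comparator.comparingLong(e -> e.size));
    for (Entry e : all) {
      heap.offer(e);
      if (heap.size() > n) {
        heap.poll(); // evict the smallest of the current candidates
      }
    }
    List<Entry> out = new ArrayList<>(heap);
    out.sort(Comparator.comparingLong((Entry e) -> e.size).reversed());
    return out;
  }
}
```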
[GitHub] [hadoop] bogthe commented on a change in pull request #3101: HADOOP-17139 Re-enable optimized copyFromLocal implementation in S3AFileSystem
bogthe commented on a change in pull request #3101: URL: https://github.com/apache/hadoop/pull/3101#discussion_r658852709

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/CopyFromLocalOperation.java
## @@ -0,0 +1,241 @@
+package org.apache.hadoop.fs.s3a.impl;
+
+import org.apache.commons.collections.comparators.ReverseComparator;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathExistsException;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.s3a.Retries;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.Closeable;
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Optional;
+import java.util.Set;
+
+/**
+ * TODO list:
+ * - Improve implementation to use Completable Futures
+ * - Better error handling
+ * - Add abstract class + tests for LocalFS
+ * - Add tests for this class
+ * - Add documentation
+ *   - This class
+ *   - `filesystem.md`
+ * - Clean old `innerCopyFromLocalFile` code up
+ */
+public class CopyFromLocalOperation extends ExecutingStoreOperation<Void> {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      CopyFromLocalOperation.class);
+
+  private final CopyFromLocalOperationCallbacks callbacks;
+  private final boolean deleteSource;
+  private final boolean overwrite;
+  private final Path source;
+  private final Path destination;
+
+  private FileStatus dstStatus;
+
+  public CopyFromLocalOperation(
+      final StoreContext storeContext,
+      Path source,
+      Path destination,
+      boolean deleteSource,
+      boolean overwrite,
+      CopyFromLocalOperationCallbacks callbacks) {
+    super(storeContext);
+    this.callbacks = callbacks;
+    this.deleteSource = deleteSource;
+    this.overwrite = overwrite;
+    this.source = source;
+    this.destination = destination;
+  }
+
+  @Override
+  @Retries.RetryTranslated
+  public Void execute()
+      throws IOException, PathExistsException {
+    LOG.debug("Copying local file from {} to {}", source, destination);
+    File sourceFile = callbacks.pathToFile(source);
+    try {
+      dstStatus = callbacks.getFileStatus(destination);
+    } catch (FileNotFoundException e) {
+      dstStatus = null;
+    }
+
+    checkSource(sourceFile);
+    prepareDestination(destination, sourceFile, overwrite);
+    uploadSourceFromFS();
+
+    if (deleteSource) {
+      callbacks.delete(source, true);
+    }
+
+    return null;
+  }
+
+  private void uploadSourceFromFS()
+      throws IOException, PathExistsException {
+    RemoteIterator<LocatedFileStatus> localFiles = callbacks
+        .listStatusIterator(source, true);
+
+    // After all files are traversed, this set will contain only emptyDirs
+    Set<Path> emptyDirs = new HashSet<>();
+    List<UploadEntry> entries = new ArrayList<>();
+    while (localFiles.hasNext()) {
+      LocatedFileStatus sourceFile = localFiles.next();
+      Path sourceFilePath = sourceFile.getPath();
+
+      // Directory containing this file / directory isn't empty
+      emptyDirs.remove(sourceFilePath.getParent());
+
+      if (sourceFile.isDirectory()) {
+        emptyDirs.add(sourceFilePath);
+        continue;
+      }
+
+      Path destPath = getFinalPath(sourceFilePath);
+      // UploadEntries: have a destination path, a file size
+      entries.add(new UploadEntry(
+          sourceFilePath,
+          destPath,
+          sourceFile.getLen()));
+    }
+
+    if (localFiles instanceof Closeable) {
+      ((Closeable) localFiles).close();
+    }
+
+    // Sort all upload entries based on size
+    entries.sort(new ReverseComparator(new UploadEntry.SizeComparator()));
+
+    int LARGEST_N_FILES = 5;
+    final int sortedUploadsCount = Math.min(LARGEST_N_FILES, entries.size());
+    List uploaded = new ArrayList<>();
+
+    // Take only top most X entries and upload
+    for (int uploadNo = 0; uploadNo < sortedUploadsCount; uploadNo++) {

Review comment: Not parallelized yet; it wasn't clear what the best way to do it in this code base is, so I left it simple for the first pass. The reason it wasn't clear: as you said, the xfer manager spreads work across threads depending on file size, so it isn't obvious what capacity the executor should work with. Is there any other executor, other than the one available from `ContextStore`, that I should be aware of? And do you have any thoughts around this?
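As a point of reference for the parallelization question above, here is a minimal sketch of one way the largest-N uploads could be fanned out with CompletableFuture, per the TODO in the class javadoc. The `executor` and the `uploadFileFromEntry` callback are assumptions for illustration, not names from the patch:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.Executor;

// Fragment sketched for the body of uploadSourceFromFS(), after sorting `entries`.
List<CompletableFuture<Void>> futures = new ArrayList<>();
for (int uploadNo = 0; uploadNo < sortedUploadsCount; uploadNo++) {
  final UploadEntry entry = entries.get(uploadNo);
  futures.add(CompletableFuture.runAsync(() -> {
    try {
      uploadFileFromEntry(entry); // hypothetical single-file upload callback
    } catch (IOException e) {
      // runAsync takes a Runnable, so checked exceptions must be wrapped
      throw new CompletionException(e);
    }
  }, executor)); // executor: whatever pool the store context exposes
}
// join() blocks until all uploads finish and rethrows the first failure
// as an unchecked CompletionException.
CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
```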
[jira] [Work logged] (HADOOP-17765) ABFS: Use Unique File Paths in Tests
[ https://issues.apache.org/jira/browse/HADOOP-17765?focusedWorklogId=615048&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615048 ]

ASF GitHub Bot logged work on HADOOP-17765:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 25/Jun/21 15:09
Start Date: 25/Jun/21 15:09
Worklog Time Spent: 10m
Work Description: hadoop-yetus commented on pull request #3122: URL: https://github.com/apache/hadoop/pull/3122#issuecomment-868567657
[GitHub] [hadoop] hadoop-yetus commented on pull request #3122: HADOOP-17765. ABFS: Use Unique File Paths in Tests
hadoop-yetus commented on pull request #3122: URL: https://github.com/apache/hadoop/pull/3122#issuecomment-868567657

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 43s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 21 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 30m 48s | | trunk passed |
| +1 :green_heart: | compile | 0m 40s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 0m 28s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 42s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 32s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 3s | | trunk passed |
| +1 :green_heart: | shadedclient | 14m 3s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 31s | | the patch passed |
| +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 0m 31s | | the patch passed |
| +1 :green_heart: | compile | 0m 27s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 0m 27s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 19s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3122/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 2 new + 7 unchanged - 0 fixed = 9 total (was 7) |
| +1 :green_heart: | mvnsite | 0m 30s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 24s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| -1 :x: | spotbugs | 1m 4s | [/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3122/2/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html) | hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 :green_heart: | shadedclient | 2m 19s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 51s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 28s | | The patch does not generate ASF License warnings. |
| | | | 60m 14s | | |

| Reason | Tests |
|-------:|:------|
| SpotBugs | module:hadoop-tools/hadoop-azure |
| | Dead store to testUniqueForkId in org.apache.hadoop.fs.azurebfs.utils.UriUtils.generateUniqueTestPath() At UriUtils.java:org.apache.hadoop.fs.azurebfs.utils.UriUtils.generateUniqueTestPath() At UriUtils.java:[line 73] |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3122/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3122 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux c28d200f38fe 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 72bce77280550b0d88a986239d75da5639defa2e |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3122/2/testReport/ |
[jira] [Work logged] (HADOOP-17774) bytesRead FS statistic showing twice the correct value in S3A
[ https://issues.apache.org/jira/browse/HADOOP-17774?focusedWorklogId=615040&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615040 ]

ASF GitHub Bot logged work on HADOOP-17774:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 25/Jun/21 14:51
Start Date: 25/Jun/21 14:51
Worklog Time Spent: 10m
Work Description: steveloughran commented on a change in pull request #3144: URL: https://github.com/apache/hadoop/pull/3144#discussion_r658822885

## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/statistics/ITestS3AFileSystemStatistic.java
## @@ -0,0 +1,66 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.statistics;
+
+import java.io.IOException;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.AbstractS3ATestBase;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+
+public class ITestS3AFileSystemStatistic extends AbstractS3ATestBase {
+
+  private static final int ONE_MB = 1024 * 1024;
+  private static final int TWO_MB = 2 * 1024 * 1024;
+
+  /**
+   * Verify the fs statistic bytesRead after reading from 2 different
+   * InputStreams for the same filesystem instance.
+   */
+  @Test
+  public void testBytesReadWithStream() throws IOException {
+    S3AFileSystem fs = getFileSystem();
+    Path filePath = path(getMethodName());
+    byte[] oneMbBuf = new byte[ONE_MB];
+
+    // Writing 1MB in a file.
+    FSDataOutputStream out = fs.create(filePath);

Review comment: use try with resources for guaranteed cleanup

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
-------------------
Worklog Id: (was: 615040)
Time Spent: 0.5h (was: 20m)

> bytesRead FS statistic showing twice the correct value in S3A
> --------------------------------------------------------------
>
> Key: HADOOP-17774
> URL: https://issues.apache.org/jira/browse/HADOOP-17774
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Mehakmeet Singh
> Assignee: Mehakmeet Singh
> Priority: Major
> Labels: pull-request-available
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> S3A "bytes read" statistic is being incremented twice. Firstly while reading in S3AInputStream and then in merge() of S3AInstrumentation when S3AInputStream is closed.
> This makes "bytes read" statistic equal to sum of stream_read_bytes and stream_read_total_bytes.
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
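A minimal sketch of the write with the try-with-resources suggestion applied, reusing the helpers visible in the diff (`getFileSystem()`, `path(getMethodName())`, `ONE_MB`); the `out.write(oneMbBuf)` call is assumed from context, since the quoted diff is cut off just after the stream is created:

```java
// Try-with-resources guarantees the stream is closed, and its statistics
// merged into the filesystem instrumentation, even if write() throws.
S3AFileSystem fs = getFileSystem();
Path filePath = path(getMethodName());
byte[] oneMbBuf = new byte[ONE_MB];

try (FSDataOutputStream out = fs.create(filePath)) {
  out.write(oneMbBuf); // assumed write; the diff truncates before this line
}
```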
[jira] [Work logged] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-17139?focusedWorklogId=615039&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615039 ]

ASF GitHub Bot logged work on HADOOP-17139:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 25/Jun/21 14:45
Start Date: 25/Jun/21 14:45
Worklog Time Spent: 10m
Work Description: steveloughran commented on a change in pull request #3101: URL: https://github.com/apache/hadoop/pull/3101#discussion_r658819030
[GitHub] [hadoop] steveloughran commented on a change in pull request #3101: HADOOP-17139 Re-enable optimized copyFromLocal implementation in S3AFileSystem
steveloughran commented on a change in pull request #3101: URL: https://github.com/apache/hadoop/pull/3101#discussion_r658819030

## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ACopyFromLocalFile.java
## @@ -120,15 +117,48 @@ public void testCopyMissingFile() throws Throwable {
         () -> upload(file, true));
   }

+  /*
+   * The following path is being created on disk and copied over:
+   * /parent/ (trailing slash to make it clear it's a directory)
+   * /parent/test1.txt
+   * /parent/child/test.txt
+   */
   @Test
-  @Ignore("HADOOP-15932")
-  public void testCopyDirectoryFile() throws Throwable {
-    file = File.createTempFile("test", ".txt");
-    // first upload to create
-    intercept(FileNotFoundException.class, "Not a file",
-        () -> upload(file.getParentFile(), true));
+  public void testCopyTreeDirectoryWithoutDelete() throws Throwable {
+    java.nio.file.Path srcDir = Files.createTempDirectory("parent");
+    java.nio.file.Path childDir = Files.createTempDirectory(srcDir, "child");
+    java.nio.file.Path parentFile = Files.createTempFile(srcDir, "test1", ".txt");
+    java.nio.file.Path childFile = Files.createTempFile(childDir, "test2", ".txt");
+
+    Path src = new Path(srcDir.toUri());
+    Path dst = path(srcDir.getFileName().toString());
+    getFileSystem().copyFromLocalFile(false, true, src, dst);
+
+    java.nio.file.Path parent = srcDir.getParent();
+
+    assertPathExists("Parent directory", srcDir, parent);
+    assertPathExists("Child directory", childDir, parent);
+    assertPathExists("Parent file", parentFile, parent);
+    assertPathExists("Child file", childFile, parent);
+
+    if (!Files.exists(srcDir)) {

Review comment: just use an assert

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/CopyFromLocalOperation.java
## @@ -0,0 +1,241 @@
+package org.apache.hadoop.fs.s3a.impl;
+
+import org.apache.commons.collections.comparators.ReverseComparator;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.LocatedFileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.PathExistsException;
+import org.apache.hadoop.fs.RemoteIterator;
+import org.apache.hadoop.fs.s3a.Retries;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.Closeable;
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Optional;
+import java.util.Set;
+
+/**
+ * TODO list:
+ * - Improve implementation to use Completable Futures
+ * - Better error handling
+ * - Add abstract class + tests for LocalFS
+ * - Add tests for this class
+ * - Add documentation
+ *   - This class
+ *   - `filesystem.md`
+ * - Clean old `innerCopyFromLocalFile` code up
+ */
+public class CopyFromLocalOperation extends ExecutingStoreOperation<Void> {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      CopyFromLocalOperation.class);
+
+  private final CopyFromLocalOperationCallbacks callbacks;
+  private final boolean deleteSource;
+  private final boolean overwrite;
+  private final Path source;
+  private final Path destination;
+
+  private FileStatus dstStatus;
+
+  public CopyFromLocalOperation(
+      final StoreContext storeContext,
+      Path source,
+      Path destination,
+      boolean deleteSource,
+      boolean overwrite,
+      CopyFromLocalOperationCallbacks callbacks) {
+    super(storeContext);
+    this.callbacks = callbacks;
+    this.deleteSource = deleteSource;
+    this.overwrite = overwrite;
+    this.source = source;
+    this.destination = destination;
+  }
+
+  @Override
+  @Retries.RetryTranslated
+  public Void execute()
+      throws IOException, PathExistsException {
+    LOG.debug("Copying local file from {} to {}", source, destination);
+    File sourceFile = callbacks.pathToFile(source);
+    try {
+      dstStatus = callbacks.getFileStatus(destination);
+    } catch (FileNotFoundException e) {
+      dstStatus = null;
+    }
+
+    checkSource(sourceFile);
+    prepareDestination(destination, sourceFile, overwrite);
+    uploadSourceFromFS();
+
+    if (deleteSource) {
+      callbacks.delete(source, true);
+    }
+
+    return null;
+  }
+
+  private void uploadSourceFromFS()
+      throws IOException, PathExistsException {
+    RemoteIterator<LocatedFileStatus> localFiles = callbacks
+        .listStatusIterator(source, true);
+
+    // After all files are traversed, this set will contain only emptyDirs
+    Set<Path> emptyDirs = new HashSet<>();
+    List<UploadEntry> entries = new ArrayList<>();
+    while (localFiles.hasNext())
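A minimal sketch of what "just use an assert" could look like at that point in the test, using the plain JUnit 4 assertion already available to it; the message text is illustrative:

```java
import static org.junit.Assert.assertTrue;

// Instead of guarding with if (!Files.exists(srcDir)) { ... },
// fail the test loudly when the source directory has unexpectedly vanished.
assertTrue("source directory " + srcDir + " should still exist after a copy without delete",
    Files.exists(srcDir));
```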
[jira] [Work logged] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-17139?focusedWorklogId=615033&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615033 ]

ASF GitHub Bot logged work on HADOOP-17139:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 25/Jun/21 14:29
Start Date: 25/Jun/21 14:29
Worklog Time Spent: 10m
Work Description: hadoop-yetus removed a comment on pull request #3101: URL: https://github.com/apache/hadoop/pull/3101#issuecomment-859721627
[jira] [Work logged] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-17139?focusedWorklogId=615031&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615031 ]

ASF GitHub Bot logged work on HADOOP-17139:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 25/Jun/21 14:29
Start Date: 25/Jun/21 14:29
Worklog Time Spent: 10m
Work Description: steveloughran commented on pull request #3101: URL: https://github.com/apache/hadoop/pull/3101#issuecomment-868540988

> listFilesAndDirs a new RemoteIterator similar to listFiles that includes LocatedFileStatus for directories too. It's handy when we want to detect empty directories;

-1 to that change. Making FS changes is a big thing with more trauma and planning. See the comments at the top of FileSystem.java. Any new list operation should

* support multiple dirs (for faster partition scanning)
* offer a builder API for any specific options
* return a list of Future<>s to make clear that list can be slow & return dirs out of order
* have a high-performance impl for HDFS/webHDFS as well as the S3A and ABFS object stores (could just relay to BatchListingOperations & so existing results)
* plus all the spec/contract work

See HADOOP-16898 for discussion there. It's not trivial - we need to think about "what is the best list model for the future?".

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
-------------------
Worklog Id: (was: 615031)
Time Spent: 0.5h (was: 20m)

> Re-enable optimized copyFromLocal implementation in S3AFileSystem
> ------------------------------------------------------------------
>
> Key: HADOOP-17139
> URL: https://issues.apache.org/jira/browse/HADOOP-17139
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.3.0, 3.2.1
> Reporter: Sahil Takiar
> Assignee: Bogdan Stolojan
> Priority: Minor
> Labels: pull-request-available
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> It looks like HADOOP-15932 disabled the optimized copyFromLocal implementation in S3A for correctness reasons. innerCopyFromLocalFile should be fixed and re-enabled. The current implementation uses FileSystem.copyFromLocal, which will open an input stream from the local fs and an output stream to the destination fs, and then call IOUtils.copyBytes. With default configs, this will cause S3A to read the file into memory, write it back to a file on the local fs, and then, when the file is closed, upload it to S3.
> The optimized version of copyFromLocal in innerCopyFromLocalFile directly creates a PutObjectRequest with the local file as the input.

-- This message was sent by Atlassian Jira (v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
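To make the upload path described in the issue concrete, here is a minimal sketch of the "optimized" idea using the AWS SDK v1 `PutObjectRequest(bucket, key, file)` constructor: the SDK streams the file from disk instead of the whole object being buffered through a stream copy. The bucket, key, and class names are placeholders for illustration, not the actual S3AFileSystem code:

```java
import java.io.File;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.PutObjectRequest;

/** Sketch only: hand the local File straight to the SDK. */
class DirectPutExample {
  // "example-bucket" and "dest/key" are placeholder values.
  static void putLocalFile(AmazonS3 s3, File localFile) {
    // The SDK reads the file from disk as it uploads; no intermediate
    // in-memory buffering or local re-write of the data is needed.
    s3.putObject(new PutObjectRequest("example-bucket", "dest/key", localFile));
  }
}
```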
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3101: HADOOP-17139 Re-enable optimized copyFromLocal implementation in S3AFileSystem
hadoop-yetus removed a comment on pull request #3101: URL: https://github.com/apache/hadoop/pull/3101#issuecomment-859721627

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 57s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 30m 49s | | trunk passed |
| +1 :green_heart: | compile | 0m 45s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 0m 40s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 0m 32s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 46s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 28s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 10s | | trunk passed |
| +1 :green_heart: | shadedclient | 14m 36s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 37s | | the patch passed |
| +1 :green_heart: | compile | 0m 38s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 0m 38s | | the patch passed |
| +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 0m 31s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 20s | [/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt) | hadoop-tools/hadoop-aws: The patch generated 10 new + 9 unchanged - 0 fixed = 19 total (was 9) |
| +1 :green_heart: | mvnsite | 0m 36s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 17s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 26s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 9s | | the patch passed |
| +1 :green_heart: | shadedclient | 14m 4s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 27s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. |
| | | | 74m 3s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3101 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux d241585b39e6 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / cd7e17e5d00466ad78531017cee9df19dd8286ad |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/1/testReport/ |
| Max. process+thread count | 642 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go
[jira] [Work logged] (HADOOP-17764) S3AInputStream read does not re-open the input stream on the second read retry attempt
[ https://issues.apache.org/jira/browse/HADOOP-17764?focusedWorklogId=615027&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615027 ]

ASF GitHub Bot logged work on HADOOP-17764:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 25/Jun/21 14:19
Start Date: 25/Jun/21 14:19
Worklog Time Spent: 10m
Work Description: steveloughran commented on pull request #3109: URL: https://github.com/apache/hadoop/pull/3109#issuecomment-868534727

ok, just fix those line-length checkstyles and we are good to merge. As these are just formatting, no need to rerun the tests.

Regarding the second failure - I've updated the JIRA to "lets just cut it"; it's part of the fault injection of inconsistencies we needed to test S3Guard. Now S3 is consistent, it's just a needless failure.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
-------------------
Worklog Id: (was: 615027)
Time Spent: 7.5h (was: 7h 20m)

> S3AInputStream read does not re-open the input stream on the second read retry attempt
> ----------------------------------------------------------------------------------------
>
> Key: HADOOP-17764
> URL: https://issues.apache.org/jira/browse/HADOOP-17764
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 3.3.1
> Reporter: Zamil Majdy
> Priority: Major
> Labels: pull-request-available
> Time Spent: 7.5h
> Remaining Estimate: 0h
>
> *Bug description:*
> The read method in S3AInputStream has the following behaviour when an IOException happens during the read:
> * {{reopen and read quickly}}: the client, after failing in the first attempt of {{read}}, will reopen the stream and try reading again without {{sleep}}.
> * {{reopen and wait for fixed duration}}: the client, after a failed attempt of {{read}}, will reopen the stream, sleep for {{fs.s3a.retry.interval}} milliseconds (defaults to 500 ms), and then try reading from the stream.
> While doing the {{reopen and read quickly}} process, the subsequent read will be retried without reopening the input stream if a second failure happens. This leads to some of the bytes read being skipped, which results in corrupted/less data than required.
>
> *Scenario to reproduce:*
> * Execute S3AInputStream `read()` or `read(b, off, len)`.
> * The read fails and throws a `Connection Reset` exception after reading some data.
> * The InputStream is re-opened and another `read()` or `read(b, off, len)` is executed.
> * The read fails for the second time and throws a `Connection Reset` exception after reading some data.
> * The InputStream is not re-opened and another `read()` or `read(b, off, len)` is executed after the sleep.
> * The read succeeds, but it skips the first few bytes that had already been read on the second failure.
>
> *Proposed fix:*
> [https://github.com/apache/hadoop/pull/3109]
> Added the test that reproduces the issue along with the fix.

-- This message was sent by Atlassian Jira (v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
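As an illustration of the behaviour the issue calls for, here is a minimal, self-contained sketch of a read-retry loop that re-opens the source before every retry attempt, not just the first. This is a simplification for illustration, not the actual S3AInputStream logic; the retry count and the `reopen` hook are placeholders:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.function.Supplier;

/** Sketch only: the point of HADOOP-17764 is to re-open on EVERY failed attempt. */
class RetryingReader {
  private final Supplier<InputStream> reopen; // placeholder "reopen at current position" hook
  private InputStream in;

  RetryingReader(Supplier<InputStream> reopen) {
    this.reopen = reopen;
    this.in = reopen.get();
  }

  int read(byte[] buf, int off, int len) throws IOException {
    int attempts = 3; // placeholder retry policy
    while (true) {
      try {
        return in.read(buf, off, len);
      } catch (IOException e) {
        if (--attempts == 0) {
          throw e; // retries exhausted
        }
        // Without this re-open, a second failure would resume mid-stream
        // and silently skip bytes consumed before the failure.
        in = reopen.get();
      }
    }
  }
}
```

The sketch omits position tracking; a real implementation would re-open at the recorded read position so no bytes are lost or repeated.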
[jira] [Commented] (HADOOP-17457) intermittent ITestS3AInconsistency.testGetFileStatus failure.
[ https://issues.apache.org/jira/browse/HADOOP-17457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369495#comment-17369495 ]

Steve Loughran commented on HADOOP-17457:
-----------------------------------------

Let's just cut this test; it's obsolete

> intermittent ITestS3AInconsistency.testGetFileStatus failure.
> --------------------------------------------------------------
>
> Key: HADOOP-17457
> URL: https://issues.apache.org/jira/browse/HADOOP-17457
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3, test
> Affects Versions: 3.3.1
> Reporter: Mukund Thakur
> Priority: Major
>
> {code}
> [*ERROR*] *Tests* *run: 3*, *Failures: 1*, Errors: 0, Skipped: 0, Time elapsed: 30.944 s *<<< FAILURE!* - in org.apache.hadoop.fs.s3a.*ITestS3AInconsistency*
> [*ERROR*] testGetFileStatus(org.apache.hadoop.fs.s3a.ITestS3AInconsistency) Time elapsed: 6.471 s <<< FAILURE!
> java.lang.AssertionError: getFileStatus should fail due to delayed visibility.
> at org.junit.Assert.fail(Assert.java:88)
> at org.apache.hadoop.fs.s3a.ITestS3AInconsistency.testGetFileStatus(ITestS3AInconsistency.java:114)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> {code}

-- This message was sent by Atlassian Jira (v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17457) intermittent ITestS3AInconsistency.testGetFileStatus failure.
[ https://issues.apache.org/jira/browse/HADOOP-17457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran updated HADOOP-17457:
------------------------------------
Summary: intermittent ITestS3AInconsistency.testGetFileStatus failure.  (was: Seeing test ITestS3AInconsistency.testGetFileStatus failure.)

> intermittent ITestS3AInconsistency.testGetFileStatus failure.
> --------------------------------------------------------------
>
> Key: HADOOP-17457
> URL: https://issues.apache.org/jira/browse/HADOOP-17457
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3, test
> Affects Versions: 3.3.1
> Reporter: Mukund Thakur
> Priority: Major
>
> {code}
> [*ERROR*] *Tests* *run: 3*, *Failures: 1*, Errors: 0, Skipped: 0, Time elapsed: 30.944 s *<<< FAILURE!* - in org.apache.hadoop.fs.s3a.*ITestS3AInconsistency*
> [*ERROR*] testGetFileStatus(org.apache.hadoop.fs.s3a.ITestS3AInconsistency) Time elapsed: 6.471 s <<< FAILURE!
> java.lang.AssertionError: getFileStatus should fail due to delayed visibility.
> at org.junit.Assert.fail(Assert.java:88)
> at org.apache.hadoop.fs.s3a.ITestS3AInconsistency.testGetFileStatus(ITestS3AInconsistency.java:114)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> {code}

-- This message was sent by Atlassian Jira (v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-17755) EOF reached error reading ORC file on S3A
[ https://issues.apache.org/jira/browse/HADOOP-17755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369485#comment-17369485 ]

Steve Loughran edited comment on HADOOP-17755 at 6/25/21, 2:10 PM:
-------------------------------------------------------------------
well, try with a later version of 3.2.x; 3.2.2 has the fix

was (Author: ste...@apache.org):
well, try with a later version of 3.2.x

> EOF reached error reading ORC file on S3A
> ------------------------------------------
>
> Key: HADOOP-17755
> URL: https://issues.apache.org/jira/browse/HADOOP-17755
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Environment: Hadoop 3.2.0
> Affects Versions: 3.2.0
> Reporter: Arghya Saha
> Priority: Major
>
> Hi, I am trying to do some transformation using Spark 3.1.1-Hadoop 3.2 on K8s and using s3a.
> I have around 700 GB of data to read and around 200 executors (5 vCore and 30G each).
> It's able to read most of the files in the problematic stage (Scan orc => Filter => Project) but is failing with a few files at the end with the below error. The size of the file mentioned in the error is around 140 MB and all other files are of similar size.
> I am able to read and rewrite the specific file mentioned, which suggests the file is not corrupted.
> Let me know if further information is required.
>
> {code:java}
> java.io.IOException: Error reading file: s3a:///part-1-5e22a873-82a5-4781-9eb9-473b483396bd.c000.zlib.orc
> at org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1331)
> at org.apache.orc.mapreduce.OrcMapreduceRecordReader.ensureBatch(OrcMapreduceRecordReader.java:78)
> at org.apache.orc.mapreduce.OrcMapreduceRecordReader.nextKeyValue(OrcMapreduceRecordReader.java:96)
> at org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:37)
> at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
> at org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93)
> at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
> at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:511)
> at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
> at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
> at org.apache.spark.shuffle.sort.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:177)
> at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
> at org.apache.spark.scheduler.Task.run(Task.scala:131)
> at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497)
> at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500)
> at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.base/java.lang.Thread.run(Unknown Source)
> Caused by: java.io.EOFException: End of file reached before reading fully.
> at org.apache.hadoop.fs.s3a.S3AInputStream.readFully(S3AInputStream.java:702)
> at org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:111)
> at org.apache.orc.impl.RecordReaderUtils.readDiskRanges(RecordReaderUtils.java:566)
> at org.apache.orc.impl.RecordReaderUtils$DefaultDataReader.readFileData(RecordReaderUtils.java:285)
> at org.apache.orc.impl.RecordReaderImpl.readPartialDataStreams(RecordReaderImpl.java:1237)
> at org.apache.orc.impl.RecordReaderImpl.readStripe(RecordReaderImpl.java:1105)
> at org.apache.orc.impl.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:1256)
> at org.apache.orc.impl.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:1291)
> at org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1327)
> ... 20 more
> {code}

-- This message was sent by Atlassian Jira (v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17755) EOF reached error reading ORC file on S3A
[ https://issues.apache.org/jira/browse/HADOOP-17755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369485#comment-17369485 ] Steve Loughran commented on HADOOP-17755: - well, try with a later version of 3.2.x > EOF reached error reading ORC file on S3A > - > > Key: HADOOP-17755 > URL: https://issues.apache.org/jira/browse/HADOOP-17755 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.2.0 > Environment: Hadoop 3.2.0 >Reporter: Arghya Saha >Priority: Major > > Hi, I am trying to do some transformations using Spark 3.1.1-Hadoop 3.2 on K8s > and using s3a. > I have around 700 GB of data to read and around 200 executors (5 vCore and > 30G each). > It's able to read most of the files in the problematic stage (Scan orc => Filter > => Project) but it is failing on a few files at the end with the below error. The > size of the file mentioned in the error is around 140 MB and all other files are > of similar size. > I am able to read and rewrite the specific file mentioned, which suggests the > file is not corrupted. > Let me know if further information is required. > > {code:java} > java.io.IOException: Error reading file: > s3a:///part-1-5e22a873-82a5-4781-9eb9-473b483396bd.c000.zlib.orcjava.io.IOException: > Error reading file: > s3a:///part-1-5e22a873-82a5-4781-9eb9-473b483396bd.c000.zlib.orc > at > org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1331) at > org.apache.orc.mapreduce.OrcMapreduceRecordReader.ensureBatch(OrcMapreduceRecordReader.java:78) > at > org.apache.orc.mapreduce.OrcMapreduceRecordReader.nextKeyValue(OrcMapreduceRecordReader.java:96) > at > org.apache.spark.sql.execution.datasources.RecordReaderIterator.hasNext(RecordReaderIterator.scala:37) > at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) at > org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:93) > at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) at > scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:511) at > scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) at > scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458) at > org.apache.spark.shuffle.sort.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:177) > at > org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59) > at > org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99) at > org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52) at > org.apache.spark.scheduler.Task.run(Task.scala:131) at > org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:497) > at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439) at > org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:500) at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) > at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown > Source) at java.base/java.lang.Thread.run(Unknown Source)Caused by: > java.io.EOFException: End of file reached before reading fully. 
at > org.apache.hadoop.fs.s3a.S3AInputStream.readFully(S3AInputStream.java:702) at > org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:111) > at > org.apache.orc.impl.RecordReaderUtils.readDiskRanges(RecordReaderUtils.java:566) > at > org.apache.orc.impl.RecordReaderUtils$DefaultDataReader.readFileData(RecordReaderUtils.java:285) > at > org.apache.orc.impl.RecordReaderImpl.readPartialDataStreams(RecordReaderImpl.java:1237) > at > org.apache.orc.impl.RecordReaderImpl.readStripe(RecordReaderImpl.java:1105) > at > org.apache.orc.impl.RecordReaderImpl.advanceStripe(RecordReaderImpl.java:1256) > at > org.apache.orc.impl.RecordReaderImpl.advanceToNextRow(RecordReaderImpl.java:1291) > at > org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1327) > ... 20 more > {code} > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] akshatb1 commented on pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
akshatb1 commented on pull request #3135: URL: https://github.com/apache/hadoop/pull/3135#issuecomment-868522907 @goiri I have addressed the additional comments. Please take a look whenever you get a chance. Thanks. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] akshatb1 commented on a change in pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
akshatb1 commented on a change in pull request #3135: URL: https://github.com/apache/hadoop/pull/3135#discussion_r658787886 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java ## @@ -184,6 +184,8 @@ public void initializeMemberVariables() { configurationPrefixToSkipCompare .add(YarnConfiguration.ROUTER_CLIENTRM_SUBMIT_RETRY); +configurationPrefixToSkipCompare +.add(YarnConfiguration.ROUTER_CLIENTRM_PARTIAL_RESULTS_ENABLED); Review comment: Fixed. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] akshatb1 commented on a change in pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
akshatb1 commented on a change in pull request #3135: URL: https://github.com/apache/hadoop/pull/3135#discussion_r658787636 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java ## @@ -599,10 +604,45 @@ public GetApplicationReportResponse getApplicationReport( return response; } + /** + * The Yarn Router will forward the request to all the Yarn RMs in parallel, + * after that it will group all the ApplicationReports by the ApplicationId. + * + * Possible failure: + * + * Client: identical behavior as {@code ClientRMService}. + * + * Router: the Client will timeout and resubmit the request. + * + * ResourceManager: the Router calls each Yarn RM in parallel. In case a + * Yarn RM fails, a single call will timeout. However the Router will + * merge the ApplicationReports it got, and provides a partial list to + * the client. + * + * State Store: the Router will timeout and it will retry depending on the + * FederationFacade settings - if the failure happened before the select + * operation. + */ @Override public GetApplicationsResponse getApplications(GetApplicationsRequest request) throws YarnException, IOException { -throw new NotImplementedException("Code is not implemented"); +if (request == null) { + RouterServerUtil.logAndThrowException( + "Missing getApplications request.", + null); +} +Map subclusters = +federationFacade.getSubClusters(true); +ClientMethod remoteMethod = new ClientMethod("getApplications", +new Class[] {GetApplicationsRequest.class}, new Object[] {request}); +ArrayList clusterIds = new ArrayList<>(subclusters.keySet()); Review comment: Overloaded the `invokeConcurrent` method to take `Collection`. Will update the other usage in a follow up PR once this is merged. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
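For readers following along, here is a sketch of what the call site above looks like once `invokeConcurrent` accepts a `Collection`, as the reply describes. The generic type parameters were stripped by the mail archive and are restored here as assumptions; the exact signature in `FederationClientInterceptor` may differ:

```java
// Generics restored as an assumption (the archive stripped them); the
// Class<R> response-type parameter is inferred from the call pattern.
Map<SubClusterId, SubClusterInfo> subclusters =
    federationFacade.getSubClusters(true);
ClientMethod remoteMethod = new ClientMethod("getApplications",
    new Class[] {GetApplicationsRequest.class}, new Object[] {request});
// With the Collection overload there is no need to copy the key set
// into an intermediate ArrayList first:
Map<SubClusterId, GetApplicationsResponse> applications =
    invokeConcurrent(subclusters.keySet(), remoteMethod,
        GetApplicationsResponse.class);
```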
[GitHub] [hadoop] akshatb1 commented on a change in pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
akshatb1 commented on a change in pull request #3135: URL: https://github.com/apache/hadoop/pull/3135#discussion_r658786758 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/FederationClientInterceptor.java ## @@ -599,10 +604,45 @@ public GetApplicationReportResponse getApplicationReport( return response; } + /** + * The Yarn Router will forward the request to all the Yarn RMs in parallel, + * after that it will group all the ApplicationReports by the ApplicationId. + * + * Possible failure: + * + * Client: identical behavior as {@code ClientRMService}. + * + * Router: the Client will timeout and resubmit the request. + * + * ResourceManager: the Router calls each Yarn RM in parallel. In case a + * Yarn RM fails, a single call will timeout. However the Router will + * merge the ApplicationReports it got, and provides a partial list to + * the client. + * + * State Store: the Router will timeout and it will retry depending on the + * FederationFacade settings - if the failure happened before the select + * operation. + */ @Override public GetApplicationsResponse getApplications(GetApplicationsRequest request) throws YarnException, IOException { -throw new NotImplementedException("Code is not implemented"); +if (request == null) { + RouterServerUtil.logAndThrowException( + "Missing getApplications request.", + null); +} +Map subclusters = +federationFacade.getSubClusters(true); +ClientMethod remoteMethod = new ClientMethod("getApplications", +new Class[] {GetApplicationsRequest.class}, new Object[] {request}); +ArrayList clusterIds = new ArrayList<>(subclusters.keySet()); +Map applications = +invokeConcurrent(clusterIds, remoteMethod, Review comment: After using Collection directly, it does not fit in a single line. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] akshatb1 commented on a change in pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
akshatb1 commented on a change in pull request #3135: URL: https://github.com/apache/hadoop/pull/3135#discussion_r658786234 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterYarnClientUtils.java ## @@ -52,4 +63,126 @@ public static GetClusterMetricsResponse merge( } return GetClusterMetricsResponse.newInstance(tmp); } + + /** + * Merges a list of ApplicationReports grouping by ApplicationId. + * Our current policy is to merge the application reports from the reachable + * SubClusters. + * @param responses a list of ApplicationResponse to merge + * @param returnPartialResult if the merge ApplicationReports should contain + * partial result or not + * @return the merged ApplicationsResponse + */ + public static GetApplicationsResponse mergeApplications( Review comment: This is used in `FederationClientInterceptor` to merge Applications. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
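A minimal usage sketch of the helper being discussed; the signature comes from the diff above, with generics (stripped by the archive) re-added as an assumption, and both response variables purely hypothetical:

```java
// Per-subcluster results gathered by the interceptor (hypothetical):
List<GetApplicationsResponse> responses =
    Arrays.asList(responseFromSubCluster1, responseFromSubCluster2);
// false: per the javadoc, do not include partial (unmanaged-only)
// reports in the merged result.
GetApplicationsResponse merged =
    RouterYarnClientUtils.mergeApplications(responses, false);
```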
[GitHub] [hadoop] akshatb1 commented on a change in pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
akshatb1 commented on a change in pull request #3135: URL: https://github.com/apache/hadoop/pull/3135#discussion_r658785656 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterYarnClientUtils.java ## @@ -52,4 +63,126 @@ public static GetClusterMetricsResponse merge( } return GetClusterMetricsResponse.newInstance(tmp); } + + /** + * Merges a list of ApplicationReports grouping by ApplicationId. + * Our current policy is to merge the application reports from the reachable + * SubClusters. + * @param responses a list of ApplicationResponse to merge + * @param returnPartialResult if the merge ApplicationReports should contain + * partial result or not + * @return the merged ApplicationsResponse + */ + public static GetApplicationsResponse mergeApplications( + Collection responses, + boolean returnPartialResult){ +Map federationAM = new HashMap<>(); +Map federationUAMSum = new HashMap<>(); + +for (GetApplicationsResponse appResponse : responses){ + for (ApplicationReport appReport : appResponse.getApplicationList()){ +ApplicationId appId = appReport.getApplicationId(); +// Check if this ApplicationReport is an AM +if (appReport.getHost() != null) { + // Insert in the list of AM + federationAM.put(appId, appReport); + // Check if there are any UAM found before + if (federationUAMSum.containsKey(appId)) { +// Merge the current AM with the found UAM +mergeAMWithUAM(appReport, federationUAMSum.get(appId)); +// Remove the sum of the UAMs +federationUAMSum.remove(appId); + } + // This ApplicationReport is an UAM +} else if (federationAM.containsKey(appId)) { + // Merge the current UAM with its own AM + mergeAMWithUAM(federationAM.get(appId), appReport); +} else if (federationUAMSum.containsKey(appId)) { + // Merge the current UAM with its own UAM and update the list of UAM + federationUAMSum.put(appId, + mergeUAMWithUAM(federationUAMSum.get(appId), appReport)); +} else { + // Insert in the list of UAM + federationUAMSum.put(appId, appReport); +} + } +} +// Check the remaining UAMs are depending or not from federation +for (ApplicationReport appReport : federationUAMSum.values()) { + if (mergeUamToReport(appReport.getName(), returnPartialResult)) { +federationAM.put(appReport.getApplicationId(), appReport); + } +} + +return GetApplicationsResponse.newInstance(federationAM.values()); + } + + private static ApplicationReport mergeUAMWithUAM(ApplicationReport uam1, + ApplicationReport uam2){ +uam1.setName(PARTIAL_REPORT + uam1.getApplicationId()); Review comment: Added test for checking UAM merge. ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterYarnClientUtils.java ## @@ -52,4 +63,126 @@ public static GetClusterMetricsResponse merge( } return GetClusterMetricsResponse.newInstance(tmp); } + + /** + * Merges a list of ApplicationReports grouping by ApplicationId. + * Our current policy is to merge the application reports from the reachable + * SubClusters. 
+ * @param responses a list of ApplicationResponse to merge + * @param returnPartialResult if the merge ApplicationReports should contain + * partial result or not + * @return the merged ApplicationsResponse + */ + public static GetApplicationsResponse mergeApplications( + Collection responses, + boolean returnPartialResult){ +Map federationAM = new HashMap<>(); +Map federationUAMSum = new HashMap<>(); + +for (GetApplicationsResponse appResponse : responses){ + for (ApplicationReport appReport : appResponse.getApplicationList()){ +ApplicationId appId = appReport.getApplicationId(); +// Check if this ApplicationReport is an AM +if (appReport.getHost() != null) { + // Insert in the list of AM + federationAM.put(appId, appReport); + // Check if there are any UAM found before + if (federationUAMSum.containsKey(appId)) { +// Merge the current AM with the found UAM +mergeAMWithUAM(appReport, federationUAMSum.get(appId)); +// Remove the sum of the UAMs +federationUAMSum.remove(appId); + } + // This ApplicationReport is an UAM +} else if (federationAM.containsKey(appId)) { + // Merge the current UAM with its own AM + mergeAMWithUAM(federationAM.get(appId), appReport); +} else if (federationUAMSum.containsKey(appId)) { + // Merge the current UAM with its own UAM and upd
[GitHub] [hadoop] akshatb1 commented on a change in pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
akshatb1 commented on a change in pull request #3135: URL: https://github.com/apache/hadoop/pull/3135#discussion_r658785424 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterYarnClientUtils.java ## @@ -52,4 +63,126 @@ public static GetClusterMetricsResponse merge( } return GetClusterMetricsResponse.newInstance(tmp); } + + /** + * Merges a list of ApplicationReports grouping by ApplicationId. + * Our current policy is to merge the application reports from the reachable + * SubClusters. + * @param responses a list of ApplicationResponse to merge + * @param returnPartialResult if the merge ApplicationReports should contain + * partial result or not + * @return the merged ApplicationsResponse + */ + public static GetApplicationsResponse mergeApplications( + Collection responses, + boolean returnPartialResult){ +Map federationAM = new HashMap<>(); +Map federationUAMSum = new HashMap<>(); + +for (GetApplicationsResponse appResponse : responses){ + for (ApplicationReport appReport : appResponse.getApplicationList()){ +ApplicationId appId = appReport.getApplicationId(); +// Check if this ApplicationReport is an AM +if (appReport.getHost() != null) { + // Insert in the list of AM + federationAM.put(appId, appReport); + // Check if there are any UAM found before + if (federationUAMSum.containsKey(appId)) { +// Merge the current AM with the found UAM +mergeAMWithUAM(appReport, federationUAMSum.get(appId)); +// Remove the sum of the UAMs +federationUAMSum.remove(appId); + } + // This ApplicationReport is an UAM +} else if (federationAM.containsKey(appId)) { + // Merge the current UAM with its own AM + mergeAMWithUAM(federationAM.get(appId), appReport); +} else if (federationUAMSum.containsKey(appId)) { + // Merge the current UAM with its own UAM and update the list of UAM + federationUAMSum.put(appId, + mergeUAMWithUAM(federationUAMSum.get(appId), appReport)); +} else { + // Insert in the list of UAM + federationUAMSum.put(appId, appReport); +} + } +} +// Check the remaining UAMs are depending or not from federation +for (ApplicationReport appReport : federationUAMSum.values()) { + if (mergeUamToReport(appReport.getName(), returnPartialResult)) { +federationAM.put(appReport.getApplicationId(), appReport); + } +} + +return GetApplicationsResponse.newInstance(federationAM.values()); + } + + private static ApplicationReport mergeUAMWithUAM(ApplicationReport uam1, + ApplicationReport uam2){ +uam1.setName(PARTIAL_REPORT + uam1.getApplicationId()); +mergeAMWithUAM(uam1, uam1); +mergeAMWithUAM(uam1, uam2); +return uam1; + } + + private static void mergeAMWithUAM(ApplicationReport am, + ApplicationReport uam){ +ApplicationResourceUsageReport amResourceReport = +am.getApplicationResourceUsageReport(); + +ApplicationResourceUsageReport uamResourceReport = +uam.getApplicationResourceUsageReport(); + +amResourceReport.setNumUsedContainers( +amResourceReport.getNumUsedContainers() + +uamResourceReport.getNumUsedContainers()); + +amResourceReport.setNumReservedContainers( +amResourceReport.getNumReservedContainers() + +uamResourceReport.getNumReservedContainers()); + +amResourceReport.setUsedResources(Resources.add( +amResourceReport.getUsedResources(), +uamResourceReport.getUsedResources())); + +amResourceReport.setReservedResources(Resources.add( +amResourceReport.getReservedResources(), +uamResourceReport.getReservedResources())); + +amResourceReport.setNeededResources(Resources.add( 
+amResourceReport.getNeededResources(), +uamResourceReport.getNeededResources())); + +amResourceReport.setMemorySeconds( +amResourceReport.getMemorySeconds() + +uamResourceReport.getMemorySeconds()); + +amResourceReport.setVcoreSeconds( +amResourceReport.getVcoreSeconds() + +uamResourceReport.getVcoreSeconds()); + +amResourceReport.setQueueUsagePercentage( +amResourceReport.getQueueUsagePercentage() + +uamResourceReport.getQueueUsagePercentage()); + +amResourceReport.setClusterUsagePercentage( +amResourceReport.getClusterUsagePercentage() + +uamResourceReport.getClusterUsagePercentage()); + +am.getApplicationTags().addAll(uam.getApplicationTags()); + } + + /** + * Returns whether or not to add an unmanaged application to the report. + * @param appName Application Name + * @param returnPartialResult if the merge ApplicationReports should c
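The summations in `mergeAMWithUAM` are plain element-wise additions; a small self-contained sketch of the `Resources.add` step it relies on, with invented values:

```java
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.Resources;

public class MergeArithmeticSketch {
  public static void main(String[] args) {
    // Invented values: an AM using 2048 MB / 2 vcores merged with a
    // UAM using 1024 MB / 1 vcore.
    Resource amUsed = Resource.newInstance(2048, 2);
    Resource uamUsed = Resource.newInstance(1024, 1);
    // Resources.add returns a new Resource with both dimensions summed,
    // which is what the setUsedResources call above builds on.
    System.out.println(Resources.add(amUsed, uamUsed)); // <memory:3072, vCores:3>
  }
}
```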
[GitHub] [hadoop] akshatb1 commented on a change in pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
akshatb1 commented on a change in pull request #3135: URL: https://github.com/apache/hadoop/pull/3135#discussion_r658785208 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptor.java ## @@ -531,4 +536,107 @@ public void testGetClusterMetricsRequest() throws YarnException, IOException { GetClusterMetricsResponse.class); Assert.assertEquals(true, clusterMetrics.isEmpty()); } + + /** + * This test validates the correctness of + * GetApplicationsResponse in case the + * application exists in the cluster. + */ + @Test + public void testGetApplicationsResponse() + throws YarnException, IOException, InterruptedException { +LOG.info("Test FederationClientInterceptor: " + Review comment: Done. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] akshatb1 commented on a change in pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
akshatb1 commented on a change in pull request #3135: URL: https://github.com/apache/hadoop/pull/3135#discussion_r658784947 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptor.java ## @@ -531,4 +536,110 @@ public void testGetClusterMetricsRequest() throws YarnException, IOException { GetClusterMetricsResponse.class); Assert.assertEquals(true, clusterMetrics.isEmpty()); } + + /** + * This test validates the correctness of + * GetApplicationsResponse in case the + * application exists in the cluster. + */ + @Test + public void testGetApplicationsResponse() + throws YarnException, IOException, InterruptedException { +LOG.info("Test FederationClientInterceptor: " + +"Get Applications Response"); +ApplicationId appId = +ApplicationId.newInstance(System.currentTimeMillis(), 1); + +SubmitApplicationRequest request = mockSubmitApplicationRequest(appId); +SubmitApplicationResponse response = interceptor.submitApplication(request); + +Assert.assertNotNull(response); +Assert.assertNotNull(stateStoreUtil.queryApplicationHomeSC(appId)); + +Set appTypes = Collections.singleton("MockApp"); +GetApplicationsRequest requestGet = +GetApplicationsRequest.newInstance(appTypes); + +GetApplicationsResponse responseGet = +interceptor.getApplications(requestGet); + +Assert.assertNotNull(responseGet); + } + + /** + * This test validates + * the correctness of GetApplicationsResponse in case of + * empty request. + */ + @Test + public void testGetApplicationsNullRequest() throws Exception { +LOG.info("Test FederationClientInterceptor : Get Applications request"); Review comment: Done. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] akshatb1 commented on a change in pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
akshatb1 commented on a change in pull request #3135: URL: https://github.com/apache/hadoop/pull/3135#discussion_r658784725 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptor.java ## @@ -531,4 +536,110 @@ public void testGetClusterMetricsRequest() throws YarnException, IOException { GetClusterMetricsResponse.class); Assert.assertEquals(true, clusterMetrics.isEmpty()); } + + /** + * This test validates the correctness of + * GetApplicationsResponse in case the + * application exists in the cluster. + */ + @Test + public void testGetApplicationsResponse() + throws YarnException, IOException, InterruptedException { +LOG.info("Test FederationClientInterceptor: " + +"Get Applications Response"); +ApplicationId appId = +ApplicationId.newInstance(System.currentTimeMillis(), 1); + +SubmitApplicationRequest request = mockSubmitApplicationRequest(appId); +SubmitApplicationResponse response = interceptor.submitApplication(request); + +Assert.assertNotNull(response); +Assert.assertNotNull(stateStoreUtil.queryApplicationHomeSC(appId)); + +Set appTypes = Collections.singleton("MockApp"); +GetApplicationsRequest requestGet = +GetApplicationsRequest.newInstance(appTypes); + +GetApplicationsResponse responseGet = +interceptor.getApplications(requestGet); + +Assert.assertNotNull(responseGet); + } + + /** + * This test validates + * the correctness of GetApplicationsResponse in case of + * empty request. + */ + @Test + public void testGetApplicationsNullRequest() throws Exception { +LOG.info("Test FederationClientInterceptor : Get Applications request"); +LambdaTestUtils.intercept(YarnException.class, +"Missing getApplications request.", +() -> interceptor.getApplications(null)); + } + + /** + * This test validates + * the correctness of GetApplicationsResponse in case applications + * with given type does not exist. + */ + @Test + public void testGetApplicationsApplicationTypeNotExists() throws Exception{ +LOG.info("Test FederationClientInterceptor :" + Review comment: Fixed space before ":". Could not move it to single line since it exceeds 80 characters. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] akshatb1 commented on a change in pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
akshatb1 commented on a change in pull request #3135: URL: https://github.com/apache/hadoop/pull/3135#discussion_r658784084 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptor.java ## @@ -531,4 +536,110 @@ public void testGetClusterMetricsRequest() throws YarnException, IOException { GetClusterMetricsResponse.class); Assert.assertEquals(true, clusterMetrics.isEmpty()); } + + /** + * This test validates the correctness of + * GetApplicationsResponse in case the + * application exists in the cluster. + */ + @Test + public void testGetApplicationsResponse() + throws YarnException, IOException, InterruptedException { +LOG.info("Test FederationClientInterceptor: " + +"Get Applications Response"); +ApplicationId appId = +ApplicationId.newInstance(System.currentTimeMillis(), 1); + +SubmitApplicationRequest request = mockSubmitApplicationRequest(appId); +SubmitApplicationResponse response = interceptor.submitApplication(request); + +Assert.assertNotNull(response); +Assert.assertNotNull(stateStoreUtil.queryApplicationHomeSC(appId)); + +Set appTypes = Collections.singleton("MockApp"); +GetApplicationsRequest requestGet = +GetApplicationsRequest.newInstance(appTypes); + +GetApplicationsResponse responseGet = +interceptor.getApplications(requestGet); + +Assert.assertNotNull(responseGet); + } + + /** + * This test validates + * the correctness of GetApplicationsResponse in case of + * empty request. + */ + @Test + public void testGetApplicationsNullRequest() throws Exception { +LOG.info("Test FederationClientInterceptor : Get Applications request"); +LambdaTestUtils.intercept(YarnException.class, +"Missing getApplications request.", +() -> interceptor.getApplications(null)); + } + + /** + * This test validates + * the correctness of GetApplicationsResponse in case applications + * with given type does not exist. + */ + @Test + public void testGetApplicationsApplicationTypeNotExists() throws Exception{ +LOG.info("Test FederationClientInterceptor :" + +" Application with type does not exist"); + +ApplicationId appId = +ApplicationId.newInstance(System.currentTimeMillis(), 1); + +SubmitApplicationRequest request = mockSubmitApplicationRequest(appId); +SubmitApplicationResponse response = interceptor.submitApplication(request); + +Assert.assertNotNull(response); +Assert.assertNotNull(stateStoreUtil.queryApplicationHomeSC(appId)); + +Set appTypes = Collections.singleton("SPARK"); + +GetApplicationsRequest requestGet = +GetApplicationsRequest.newInstance(appTypes); + +GetApplicationsResponse responseGet = +interceptor.getApplications(requestGet); + +Assert.assertNotNull(responseGet); +Assert.assertTrue(responseGet.getApplicationList().isEmpty()); + } + + /** + * This test validates + * the correctness of GetApplicationsResponse in case applications + * with given YarnApplicationState does not exist. + */ + @Test + public void testGetApplicationsApplicationStateNotExists() throws Exception{ +LOG.info("Test FederationClientInterceptor :" + +" Application with state does not exist"); + +ApplicationId appId = +ApplicationId.newInstance(System.currentTimeMillis(), 1); Review comment: Fixed here and at the other places in the same method. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] akshatb1 commented on a change in pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
akshatb1 commented on a change in pull request #3135: URL: https://github.com/apache/hadoop/pull/3135#discussion_r658783566 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestFederationClientInterceptor.java ## @@ -531,4 +536,110 @@ public void testGetClusterMetricsRequest() throws YarnException, IOException { GetClusterMetricsResponse.class); Assert.assertEquals(true, clusterMetrics.isEmpty()); } + + /** + * This test validates the correctness of + * GetApplicationsResponse in case the + * application exists in the cluster. + */ + @Test + public void testGetApplicationsResponse() + throws YarnException, IOException, InterruptedException { +LOG.info("Test FederationClientInterceptor: " + +"Get Applications Response"); +ApplicationId appId = +ApplicationId.newInstance(System.currentTimeMillis(), 1); + +SubmitApplicationRequest request = mockSubmitApplicationRequest(appId); +SubmitApplicationResponse response = interceptor.submitApplication(request); + +Assert.assertNotNull(response); +Assert.assertNotNull(stateStoreUtil.queryApplicationHomeSC(appId)); + +Set appTypes = Collections.singleton("MockApp"); +GetApplicationsRequest requestGet = +GetApplicationsRequest.newInstance(appTypes); + +GetApplicationsResponse responseGet = +interceptor.getApplications(requestGet); + +Assert.assertNotNull(responseGet); + } + + /** + * This test validates + * the correctness of GetApplicationsResponse in case of + * empty request. + */ + @Test + public void testGetApplicationsNullRequest() throws Exception { +LOG.info("Test FederationClientInterceptor : Get Applications request"); +LambdaTestUtils.intercept(YarnException.class, +"Missing getApplications request.", +() -> interceptor.getApplications(null)); + } + + /** + * This test validates + * the correctness of GetApplicationsResponse in case applications + * with given type does not exist. + */ + @Test + public void testGetApplicationsApplicationTypeNotExists() throws Exception{ +LOG.info("Test FederationClientInterceptor :" + +" Application with type does not exist"); + +ApplicationId appId = +ApplicationId.newInstance(System.currentTimeMillis(), 1); + +SubmitApplicationRequest request = mockSubmitApplicationRequest(appId); +SubmitApplicationResponse response = interceptor.submitApplication(request); + +Assert.assertNotNull(response); +Assert.assertNotNull(stateStoreUtil.queryApplicationHomeSC(appId)); + +Set appTypes = Collections.singleton("SPARK"); + +GetApplicationsRequest requestGet = +GetApplicationsRequest.newInstance(appTypes); + +GetApplicationsResponse responseGet = +interceptor.getApplications(requestGet); + +Assert.assertNotNull(responseGet); +Assert.assertTrue(responseGet.getApplicationList().isEmpty()); + } + + /** + * This test validates + * the correctness of GetApplicationsResponse in case applications + * with given YarnApplicationState does not exist. 
+ */ + @Test + public void testGetApplicationsApplicationStateNotExists() throws Exception{ +LOG.info("Test FederationClientInterceptor :" + +" Application with state does not exist"); + +ApplicationId appId = +ApplicationId.newInstance(System.currentTimeMillis(), 1); + +SubmitApplicationRequest request = mockSubmitApplicationRequest(appId); +SubmitApplicationResponse response = interceptor.submitApplication(request); + +Assert.assertNotNull(response); +Assert.assertNotNull(stateStoreUtil.queryApplicationHomeSC(appId)); + +EnumSet applicationStates = EnumSet.noneOf( +YarnApplicationState.class); +applicationStates.add(YarnApplicationState.KILLED); + +GetApplicationsRequest requestGet = +GetApplicationsRequest.newInstance(applicationStates); + +GetApplicationsResponse responseGet = +interceptor.getApplications(requestGet); + +Assert.assertNotNull(responseGet); +Assert.assertEquals(0, responseGet.getApplicationList().size()); Review comment: Done. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16109) Parquet reading S3AFileSystem causes EOF
[ https://issues.apache.org/jira/browse/HADOOP-16109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16109: Fix Version/s: 3.2.2 > Parquet reading S3AFileSystem causes EOF > > > Key: HADOOP-16109 > URL: https://issues.apache.org/jira/browse/HADOOP-16109 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.2, 2.8.5, 3.3.0, 3.1.2 >Reporter: Dave Christianson >Assignee: Steve Loughran >Priority: Blocker > Fix For: 2.10.0, 3.0.4, 3.3.0, 2.8.6, 3.2.1, 2.9.3, 3.1.3, 3.1.4, > 3.2.2 > > Attachments: HADOOP-16109-branch-3.1-003.patch > > > When using S3AFileSystem to read Parquet files, a specific set of > circumstances causes an EOFException that is not thrown when reading the > same file from local disk. > Note this has only been observed under specific circumstances: > - when the reader is doing a projection (will cause it to do a seek > backwards and put the filesystem into random mode) > - when the file is larger than the readahead buffer size > - when the seek behavior of the Parquet reader causes the reader to seek > towards the end of the current input stream without reopening, such that the > next read on the currently open stream will read past the end of the > currently open stream. > Exception from Parquet reader is as follows: > {code} > Caused by: java.io.EOFException: Reached the end of stream with 51 bytes left > to read > at > org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:104) > at > org.apache.parquet.io.DelegatingSeekableInputStream.readFullyHeapBuffer(DelegatingSeekableInputStream.java:127) > at > org.apache.parquet.io.DelegatingSeekableInputStream.readFully(DelegatingSeekableInputStream.java:91) > at > org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:1174) > at > org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:805) > at > org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:127) > at > org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222) > at > org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:207) > at > org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.fetchNext(HadoopInputFormatBase.java:206) > at > org.apache.flink.api.java.hadoop.mapreduce.HadoopInputFormatBase.reachedEnd(HadoopInputFormatBase.java:199) > at > org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:190) > at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711) > at java.lang.Thread.run(Thread.java:748) > {code} > The following example program generates the same underlying behavior (sans finding a > Parquet file that happens to trigger this condition) by purposely reading > past the already active readahead range on any file >= 1029 bytes in size. 
> {code:java} > final Configuration conf = new Configuration(); > conf.set("fs.s3a.readahead.range", "1K"); > conf.set("fs.s3a.experimental.input.fadvise", "random"); > final FileSystem fs = FileSystem.get(path.toUri(), conf); > // forward seek reading across readahead boundary > try (FSDataInputStream in = fs.open(path)) { > final byte[] temp = new byte[5]; > in.readByte(); > in.readFully(1023, temp); // <-- works > } > // forward seek reading from end of readahead boundary > try (FSDataInputStream in = fs.open(path)) { > final byte[] temp = new byte[5]; > in.readByte(); > in.readFully(1024, temp); // <-- throws EOFException > } > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
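Until a build containing the fix is in place, note that the preconditions listed above include random-mode I/O and a file larger than the readahead buffer, so removing either condition should avoid the failure. A hedged mitigation sketch using the same configuration keys as the reproduction; the values and the s3a URI are illustrative, not a recommendation from the issue itself:
{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ReadaheadMitigationSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Keep the stream out of random mode...
    conf.set("fs.s3a.experimental.input.fadvise", "sequential");
    // ...or widen the readahead window well past the reader's seek pattern.
    conf.set("fs.s3a.readahead.range", "1M");
    FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
  }
}
{code}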
[GitHub] [hadoop] akshatb1 commented on a change in pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
akshatb1 commented on a change in pull request #3135: URL: https://github.com/apache/hadoop/pull/3135#discussion_r658783221 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestRouterYarnClientUtils.java ## @@ -54,4 +64,69 @@ public GetClusterMetricsResponse getClusterMetricsResponse(int value) { metrics.setNumNodeManagers(value); return GetClusterMetricsResponse.newInstance(metrics); } + + /** + * This test validates the correctness of + * RouterYarnClientUtils#mergeApplications. + */ + @Test + public void testMergeApplications() { +ArrayList responses = new ArrayList<>(); +responses.add(getApplicationsResponse(1)); +responses.add(getApplicationsResponse(2)); +GetApplicationsResponse result = RouterYarnClientUtils. +mergeApplications(responses, false); +Assert.assertNotNull(result); +Assert.assertEquals(2, result.getApplicationList().size()); + } + + /** + * This generates a GetApplicationsResponse with 2 applications with + * same ApplicationId. One of them is added with host value equal to + * null to validate unmanaged application merge with managed application. + * @param value Used as Id in ApplicationId + * @return GetApplicationsResponse + */ + private GetApplicationsResponse getApplicationsResponse(int value) { +List applications = new ArrayList<>(); + +//Add managed application to list +ApplicationId appId = ApplicationId.newInstance(1234, Review comment: Done. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] akshatb1 commented on a change in pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
akshatb1 commented on a change in pull request #3135: URL: https://github.com/apache/hadoop/pull/3135#discussion_r658782875 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestRouterYarnClientUtils.java ## @@ -54,4 +64,69 @@ public GetClusterMetricsResponse getClusterMetricsResponse(int value) { metrics.setNumNodeManagers(value); return GetClusterMetricsResponse.newInstance(metrics); } + + /** + * This test validates the correctness of + * RouterYarnClientUtils#mergeApplications. + */ + @Test + public void testMergeApplications() { +ArrayList responses = new ArrayList<>(); +responses.add(getApplicationsResponse(1)); +responses.add(getApplicationsResponse(2)); +GetApplicationsResponse result = RouterYarnClientUtils. +mergeApplications(responses, false); +Assert.assertNotNull(result); +Assert.assertEquals(2, result.getApplicationList().size()); + } + + /** + * This generates a GetApplicationsResponse with 2 applications with + * same ApplicationId. One of them is added with host value equal to + * null to validate unmanaged application merge with managed application. + * @param value Used as Id in ApplicationId + * @return GetApplicationsResponse + */ + private GetApplicationsResponse getApplicationsResponse(int value) { +List applications = new ArrayList<>(); + +//Add managed application to list +ApplicationId appId = ApplicationId.newInstance(1234, +value); +Resource resource = Resource.newInstance(1024, 1); +ApplicationResourceUsageReport appResourceUsageReport = +ApplicationResourceUsageReport.newInstance( +1, 2, resource, resource, +resource, null, 0.1f, +0.1f, null); + +ApplicationReport appReport = ApplicationReport.newInstance( +appId, ApplicationAttemptId.newInstance(appId, +1), Review comment: Done. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] akshatb1 commented on a change in pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor
akshatb1 commented on a change in pull request #3135: URL: https://github.com/apache/hadoop/pull/3135#discussion_r658782458 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestRouterYarnClientUtils.java ## @@ -54,4 +64,69 @@ public GetClusterMetricsResponse getClusterMetricsResponse(int value) { metrics.setNumNodeManagers(value); return GetClusterMetricsResponse.newInstance(metrics); } + + /** + * This test validates the correctness of + * RouterYarnClientUtils#mergeApplications. + */ + @Test + public void testMergeApplications() { +ArrayList responses = new ArrayList<>(); +responses.add(getApplicationsResponse(1)); +responses.add(getApplicationsResponse(2)); +GetApplicationsResponse result = RouterYarnClientUtils. +mergeApplications(responses, false); +Assert.assertNotNull(result); +Assert.assertEquals(2, result.getApplicationList().size()); + } + + /** + * This generates a GetApplicationsResponse with 2 applications with + * same ApplicationId. One of them is added with host value equal to + * null to validate unmanaged application merge with managed application. + * @param value Used as Id in ApplicationId + * @return GetApplicationsResponse + */ + private GetApplicationsResponse getApplicationsResponse(int value) { +List applications = new ArrayList<>(); + +//Add managed application to list +ApplicationId appId = ApplicationId.newInstance(1234, +value); +Resource resource = Resource.newInstance(1024, 1); +ApplicationResourceUsageReport appResourceUsageReport = +ApplicationResourceUsageReport.newInstance( +1, 2, resource, resource, +resource, null, 0.1f, +0.1f, null); + +ApplicationReport appReport = ApplicationReport.newInstance( +appId, ApplicationAttemptId.newInstance(appId, +1), +"user", "queue", "appname", "host", +124, null, YarnApplicationState.RUNNING, +"diagnostics", "url", 0, 0, +0, FinalApplicationStatus.SUCCEEDED, +appResourceUsageReport, +"N/A", 0.53789f, "YARN", +null); + +//Add unmanaged application to list +ApplicationId appId2 = ApplicationId.newInstance(1234, +value); +ApplicationReport appReport2 = ApplicationReport.newInstance( +appId2, ApplicationAttemptId.newInstance(appId2, Review comment: Done. ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/clientrm/TestRouterYarnClientUtils.java ## @@ -54,4 +64,69 @@ public GetClusterMetricsResponse getClusterMetricsResponse(int value) { metrics.setNumNodeManagers(value); return GetClusterMetricsResponse.newInstance(metrics); } + + /** + * This test validates the correctness of + * RouterYarnClientUtils#mergeApplications. + */ + @Test + public void testMergeApplications() { +ArrayList responses = new ArrayList<>(); +responses.add(getApplicationsResponse(1)); +responses.add(getApplicationsResponse(2)); +GetApplicationsResponse result = RouterYarnClientUtils. +mergeApplications(responses, false); +Assert.assertNotNull(result); +Assert.assertEquals(2, result.getApplicationList().size()); + } + + /** + * This generates a GetApplicationsResponse with 2 applications with + * same ApplicationId. One of them is added with host value equal to + * null to validate unmanaged application merge with managed application. 
+ * @param value Used as Id in ApplicationId + * @return GetApplicationsResponse + */ + private GetApplicationsResponse getApplicationsResponse(int value) { +List applications = new ArrayList<>(); + +//Add managed application to list +ApplicationId appId = ApplicationId.newInstance(1234, +value); +Resource resource = Resource.newInstance(1024, 1); +ApplicationResourceUsageReport appResourceUsageReport = +ApplicationResourceUsageReport.newInstance( +1, 2, resource, resource, +resource, null, 0.1f, +0.1f, null); + +ApplicationReport appReport = ApplicationReport.newInstance( +appId, ApplicationAttemptId.newInstance(appId, +1), +"user", "queue", "appname", "host", +124, null, YarnApplicationState.RUNNING, +"diagnostics", "url", 0, 0, +0, FinalApplicationStatus.SUCCEEDED, +appResourceUsageReport, +"N/A", 0.53789f, "YARN", +null); + +//Add unmanaged application to list +ApplicationId appId2 = ApplicationId.newInstance(1234, Review comment: Done. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe
[jira] [Work logged] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-17139?focusedWorklogId=615002&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615002 ] ASF GitHub Bot logged work on HADOOP-17139: --- Author: ASF GitHub Bot Created on: 25/Jun/21 13:49 Start Date: 25/Jun/21 13:49 Worklog Time Spent: 10m Work Description: bogthe commented on pull request #3101: URL: https://github.com/apache/hadoop/pull/3101#issuecomment-868513275 **This PR is still in development** Alright, finally got some time to implement some of the suggested changes. What's new up until now: - `listFilesAndDirs` a new `RemoteIterator` similar to `listFiles` that includes `LocatedFileStatus` for directories too. It's handy when we want to detect empty directories; - The new `CopyFromLocalOperation` in S3a which "borrows" ideas from [the Cloudup project](https://github.com/steveloughran/cloudstore/blob/trunk/src/main/java/org/apache/hadoop/fs/tools/cloudup/Cloudup.java); What's left : - Write up the test cases for an `AbstractContractCopyFromLocalTest` class as described above; - Update / Add documentation; - Do one final "polish" of rough edges; What was surprising: - `trackDurationAndSpan(stat, path, new CopyFromLocalOperation(...))` did create a valid span however the operation class "didn't have access to it" (i.e. any span from inside of `CopyFromLocalOperation` was inactive) hence the `() -> new CopyFromLocalOperation(...).execute()` call. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 615002) Remaining Estimate: 0h Time Spent: 10m > Re-enable optimized copyFromLocal implementation in S3AFileSystem > - > > Key: HADOOP-17139 > URL: https://issues.apache.org/jira/browse/HADOOP-17139 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0, 3.2.1 >Reporter: Sahil Takiar >Assignee: Bogdan Stolojan >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > It looks like HADOOP-15932 disabled the optimized copyFromLocal > implementation in S3A for correctness reasons. innerCopyFromLocalFile should > be fixed and re-enabled. The current implementation uses > FileSystem.copyFromLocal which will open an input stream from the local fs > and an output stream to the destination fs, and then call IOUtils.copyBytes. > With default configs, this will cause S3A to read the file into memory, write > it back to a file on the local fs, and then when the file is closed, upload > it to S3. > The optimized version of copyFromLocal in innerCopyFromLocalFile, directly > creates a PutObjectRequest request with the local file as the input. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
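A sketch of how the proposed `listFilesAndDirs` iterator might be consumed, assuming a signature parallel to `FileSystem#listFiles(Path, boolean)`; the PR is still in development, so the final shape may differ:
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListFilesAndDirsSketch {
  // listFilesAndDirs is the method proposed in this PR, assumed to
  // mirror listFiles(Path, boolean); it is not a released API.
  static void walk(FileSystem fs, Path source) throws IOException {
    RemoteIterator<LocatedFileStatus> it = fs.listFilesAndDirs(source, true);
    while (it.hasNext()) {
      LocatedFileStatus status = it.next();
      if (status.isDirectory()) {
        // Unlike listFiles, directories surface here too, so an empty
        // directory can be detected and recreated at the destination.
      }
    }
  }
}
{code}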
[jira] [Work logged] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-17139?focusedWorklogId=615003&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-615003 ] ASF GitHub Bot logged work on HADOOP-17139: --- Author: ASF GitHub Bot Created on: 25/Jun/21 13:49 Start Date: 25/Jun/21 13:49 Worklog Time Spent: 10m Work Description: bogthe edited a comment on pull request #3101: URL: https://github.com/apache/hadoop/pull/3101#issuecomment-868513275 **This PR is still in development** Alright, finally got some time to implement some of the suggested changes. What's new up until now: - `listFilesAndDirs` a new `RemoteIterator` similar to `listFiles` that includes `LocatedFileStatus` for directories too. It's handy when we want to detect empty directories; - The new `CopyFromLocalOperation` in S3a which "borrows" ideas from [the Cloudup project](https://github.com/steveloughran/cloudstore/blob/trunk/src/main/java/org/apache/hadoop/fs/tools/cloudup/Cloudup.java); What's left : - Write up the test cases for an `AbstractContractCopyFromLocalTest` class as described above; - Update / Add documentation; - Do one final "polish" of rough edges; What was surprising: - `trackDurationAndSpan(stat, path, new CopyFromLocalOperation(...))` did create a valid span however the operation class "didn't have access to it" (i.e. any span from inside of `CopyFromLocalOperation` was inactive) hence the `() -> new CopyFromLocalOperation(...).execute()` call. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 615003) Time Spent: 20m (was: 10m) > Re-enable optimized copyFromLocal implementation in S3AFileSystem > - > > Key: HADOOP-17139 > URL: https://issues.apache.org/jira/browse/HADOOP-17139 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0, 3.2.1 >Reporter: Sahil Takiar >Assignee: Bogdan Stolojan >Priority: Minor > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > It looks like HADOOP-15932 disabled the optimized copyFromLocal > implementation in S3A for correctness reasons. innerCopyFromLocalFile should > be fixed and re-enabled. The current implementation uses > FileSystem.copyFromLocal which will open an input stream from the local fs > and an output stream to the destination fs, and then call IOUtils.copyBytes. > With default configs, this will cause S3A to read the file into memory, write > it back to a file on the local fs, and then when the file is closed, upload > it to S3. > The optimized version of copyFromLocal in innerCopyFromLocalFile, directly > creates a PutObjectRequest request with the local file as the input. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-17139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-17139: Labels: pull-request-available (was: ) > Re-enable optimized copyFromLocal implementation in S3AFileSystem > - > > Key: HADOOP-17139 > URL: https://issues.apache.org/jira/browse/HADOOP-17139 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0, 3.2.1 >Reporter: Sahil Takiar >Assignee: Bogdan Stolojan >Priority: Minor > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > It looks like HADOOP-15932 disabled the optimized copyFromLocal > implementation in S3A for correctness reasons. innerCopyFromLocalFile should > be fixed and re-enabled. The current implementation uses > FileSystem.copyFromLocal which will open an input stream from the local fs > and an output stream to the destination fs, and then call IOUtils.copyBytes. > With default configs, this will cause S3A to read the file into memory, write > it back to a file on the local fs, and then when the file is closed, upload > it to S3. > The optimized version of copyFromLocal in innerCopyFromLocalFile, directly > creates a PutObjectRequest request with the local file as the input. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
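The contrast drawn in the description, in miniature: the optimized route hands the local file straight to the S3 client instead of pumping bytes through buffered streams. AWS SDK v1 types; the bucket and key names are invented for illustration:
{code:java}
import java.io.File;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.PutObjectRequest;

public class DirectPutSketch {
  // Sketch of the optimized path the issue describes: the SDK streams
  // the file from disk itself, with no local re-buffering round trip.
  static void putDirect(AmazonS3 s3, File localFile) {
    s3.putObject(new PutObjectRequest(
        "example-bucket", "dest/key", localFile));
  }
}
{code}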
[GitHub] [hadoop] bogthe edited a comment on pull request #3101: HADOOP-17139 Re-enable optimized copyFromLocal implementation in S3AFileSystem
bogthe edited a comment on pull request #3101: URL: https://github.com/apache/hadoop/pull/3101#issuecomment-868513275 **This PR is still in development** Alright, I finally got some time to implement some of the suggested changes. What's new so far: - `listFilesAndDirs`: a new `RemoteIterator`, similar to `listFiles`, that includes `LocatedFileStatus` for directories too. It's handy when we want to detect empty directories; - The new `CopyFromLocalOperation` in S3A, which "borrows" ideas from [the Cloudup project](https://github.com/steveloughran/cloudstore/blob/trunk/src/main/java/org/apache/hadoop/fs/tools/cloudup/Cloudup.java); What's left: - Write up the test cases for an `AbstractContractCopyFromLocalTest` class as described above; - Update/add documentation; - Do one final "polish" of rough edges; What was surprising: - `trackDurationAndSpan(stat, path, new CopyFromLocalOperation(...))` did create a valid span; however, the operation class "didn't have access to it" (i.e., any span from inside `CopyFromLocalOperation` was inactive), hence the `() -> new CopyFromLocalOperation(...).execute()` call. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
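The span behaviour described above is worth illustrating. Below is a minimal, hypothetical sketch — plain `ThreadLocal` stand-ins, not the real S3A audit classes — of why the lambda form matters: if the operation object is constructed before `trackDurationAndSpan` activates the span, anything it looks up at construction time sees no active span, whereas the deferred `() -> new CopyFromLocalOperation(...).execute()` form builds and runs the operation inside the span.

```java
// Hypothetical stand-ins, not the actual S3A/auditing classes; this only
// demonstrates eager vs. deferred construction relative to span activation.
import java.util.concurrent.Callable;

public class SpanSketch {
  // Stand-in for the thread-local "active span" an audit framework keeps.
  static final ThreadLocal<String> ACTIVE_SPAN = new ThreadLocal<>();

  // Stand-in for trackDurationAndSpan: activate a span, then run the work.
  static <T> T trackDurationAndSpan(String span, Callable<T> work) throws Exception {
    ACTIVE_SPAN.set(span);
    try {
      return work.call();
    } finally {
      ACTIVE_SPAN.remove();
    }
  }

  static class CopyFromLocalOperation {
    // Captured at construction time; null if built outside any span.
    private final String spanAtConstruction = ACTIVE_SPAN.get();

    String execute() {
      return "constructed in: " + spanAtConstruction
          + ", executed in: " + ACTIVE_SPAN.get();
    }
  }

  public static void main(String[] args) throws Exception {
    // Deferred: prints "constructed in: copyFromLocal, executed in: copyFromLocal".
    System.out.println(trackDurationAndSpan("copyFromLocal",
        () -> new CopyFromLocalOperation().execute()));

    // Eager: the operation is built before the span is active, so it prints
    // "constructed in: null, ...", mirroring the inactive-span issue above.
    CopyFromLocalOperation eager = new CopyFromLocalOperation();
    System.out.println(trackDurationAndSpan("copyFromLocal", eager::execute));
  }
}
```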
[jira] [Work logged] (HADOOP-17774) bytesRead FS statistic showing twice the correct value in S3A
[ https://issues.apache.org/jira/browse/HADOOP-17774?focusedWorklogId=614985&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614985 ] ASF GitHub Bot logged work on HADOOP-17774: --- Author: ASF GitHub Bot Created on: 25/Jun/21 13:17 Start Date: 25/Jun/21 13:17 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3144: URL: https://github.com/apache/hadoop/pull/3144#issuecomment-868492867 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 7s | | trunk passed | | +1 :green_heart: | compile | 0m 45s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 29s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 48s | | trunk passed | | +1 :green_heart: | javadoc | 0m 25s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 33s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 11s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 20s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 37s | | the patch passed | | +1 :green_heart: | compile | 0m 43s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 43s | | the patch passed | | +1 :green_heart: | compile | 0m 33s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 33s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 20s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 38s | | the patch passed | | +1 :green_heart: | javadoc | 0m 16s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 28s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 19s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 10s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 2m 42s | [/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3144/1/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. 
| | | | 76m 3s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.s3a.commit.staging.TestStagingCommitter | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3144/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3144 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 920b270e6c56 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / bf70d8da9bbc4732ed61264fa84ee825937f856c | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3144/1/testReport/ | | Max. process+thread count | 543 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoo
[GitHub] [hadoop] hadoop-yetus commented on pull request #3144: HADOOP-17774. bytesRead FS statistic showing twice the correct value in S3A
hadoop-yetus commented on pull request #3144: URL: https://github.com/apache/hadoop/pull/3144#issuecomment-868492867 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 7s | | trunk passed | | +1 :green_heart: | compile | 0m 45s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 39s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 29s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 48s | | trunk passed | | +1 :green_heart: | javadoc | 0m 25s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 33s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 11s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 20s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 37s | | the patch passed | | +1 :green_heart: | compile | 0m 43s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 43s | | the patch passed | | +1 :green_heart: | compile | 0m 33s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 33s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 20s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 38s | | the patch passed | | +1 :green_heart: | javadoc | 0m 16s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 28s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 19s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 10s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 2m 42s | [/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3144/1/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt) | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. 
| | | | 76m 3s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.s3a.commit.staging.TestStagingCommitter | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3144/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3144 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 920b270e6c56 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / bf70d8da9bbc4732ed61264fa84ee825937f856c | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3144/1/testReport/ | | Max. process+thread count | 543 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3144/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use
[jira] [Work logged] (HADOOP-17250) ABFS: Random read perf improvement
[ https://issues.apache.org/jira/browse/HADOOP-17250?focusedWorklogId=614982&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614982 ] ASF GitHub Bot logged work on HADOOP-17250: --- Author: ASF GitHub Bot Created on: 25/Jun/21 13:07 Start Date: 25/Jun/21 13:07 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #3110: URL: https://github.com/apache/hadoop/pull/3110#issuecomment-868487078 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 47s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 53s | | trunk passed | | +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 32s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 24s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 35s | | trunk passed | | +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 0m 59s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 39s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 28s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | +1 :green_heart: | compile | 0m 25s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 25s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 17s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3110/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) | | +1 :green_heart: | mvnsite | 0m 29s | | the patch passed | | +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 2s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 31s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 57s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 29s | | The patch does not generate ASF License warnings. 
| | | | 78m 14s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3110/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3110 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 67b4de0b8b94 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 88cde3f423c27a72f4079b915c073bc14794b62e | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3110/2/testReport/ | | Max. process+thread count | 521 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azu
[GitHub] [hadoop] hadoop-yetus commented on pull request #3110: HADOOP-17250 Lot of short reads can be merged with readahead.
hadoop-yetus commented on pull request #3110: URL: https://github.com/apache/hadoop/pull/3110#issuecomment-868487078 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 47s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 53s | | trunk passed | | +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 32s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 24s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 35s | | trunk passed | | +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 0m 59s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 39s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 28s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | +1 :green_heart: | compile | 0m 25s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 25s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 17s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3110/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) | | +1 :green_heart: | mvnsite | 0m 29s | | the patch passed | | +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 2s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 31s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 57s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 29s | | The patch does not generate ASF License warnings. 
| | | | 78m 14s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3110/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3110 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 67b4de0b8b94 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 88cde3f423c27a72f4079b915c073bc14794b62e | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3110/2/testReport/ | | Max. process+thread count | 521 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3110/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above t
[GitHub] [hadoop] zhuxiangyi commented on a change in pull request #3063: HDFS-16043. HDFS: Delete performance optimization
zhuxiangyi commented on a change in pull request #3063: URL: https://github.com/apache/hadoop/pull/3063#discussion_r658741452 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java ## @@ -3344,7 +3344,8 @@ boolean delete(String src, boolean recursive, boolean logRetryCache) getEditLog().logSync(); logAuditEvent(ret, operationName, src); if (toRemovedBlocks != null) { - removeBlocks(toRemovedBlocks); // Incremental deletion of blocks + blockManager.getMarkedDeleteQueue().add( + toRemovedBlocks.getToDeleteList()); Review comment: Thank you for the reminder; I will revise them. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] zhuxiangyi commented on pull request #3063: HDFS-16043. HDFS: Delete performance optimization
zhuxiangyi commented on pull request #3063: URL: https://github.com/apache/hadoop/pull/3063#issuecomment-868476838 > Thanks for the work and sharing the flame graph, which makes it easy to validate the change. > > However, I am still not able to understand why the change improves delete performance. The delete op is done in two steps: step 1, acquire lock, collect blocks, release lock; step 2, acquire lock, delete blocks, release lock. > > The change essentially moves step 2 to another thread. IMO, this approach reduces client-perceived latency, which is good. But deleting the blocks still requires holding the namespace lock. Why does it avoid NN unresponsiveness? > > Is it because, instead of releasing the lock after a specified number of blocks, it releases the lock after an absolute time? I can imagine the absolute time is a better metric because deleting a block takes a variable duration of time, not a constant one. > > A few minor comment changes requested: @jojochuang Thanks for your comment and review. As you noted, the current modification only deletes the blocks asynchronously. The QuotaCount calculation optimization described in the JIRA can reduce the time needed to collect blocks; I plan to open a new issue to address it. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] zhuxiangyi commented on a change in pull request #3063: HDFS-16043. HDFS: Delete performance optimization
zhuxiangyi commented on a change in pull request #3063: URL: https://github.com/apache/hadoop/pull/3063#discussion_r658737120 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java ## @@ -3344,7 +3344,8 @@ boolean delete(String src, boolean recursive, boolean logRetryCache) getEditLog().logSync(); logAuditEvent(ret, operationName, src); if (toRemovedBlocks != null) { - removeBlocks(toRemovedBlocks); // Incremental deletion of blocks + blockManager.getMarkedDeleteQueue().add( + toRemovedBlocks.getToDeleteList()); Review comment: > Thanks for the work and sharing the flame graph, which makes it easy to validate the change. > > However, I am still not able to understand why the change improves delete performance. The delete op is done in two steps: step 1, acquire lock, collect blocks, release lock; step 2, acquire lock, delete blocks, release lock. > > The change essentially moves step 2 to another thread. IMO, this approach reduces client-perceived latency, which is good. But deleting the blocks still requires holding the namespace lock. Why does it avoid NN unresponsiveness? > > Is it because, instead of releasing the lock after a specified number of blocks, it releases the lock after an absolute time? I can imagine the absolute time is a better metric because deleting a block takes a variable duration of time, not a constant one. > > A few minor comment changes requested: Thanks for your comment. As you noted, the current modification just deletes the blocks asynchronously. The QuotaCount calculation optimization described in the JIRA can reduce the time needed to collect blocks; I plan to open a new issue to address it. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
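To make the pattern under discussion concrete, here is a minimal sketch of the asynchronous delete with a time-budgeted lock release, as debated in the thread. All names (`AsyncBlockDeleter`, `markedDeleteQueue`, `LOCK_BUDGET_MS`, `removeBlock`) are illustrative assumptions, not the actual NameNode code: the delete RPC enqueues the collected block list and returns, and a background thread drains the queue while repeatedly yielding the lock after a fixed time budget instead of after a fixed block count.

```java
// Hypothetical sketch of async block deletion with a time-bounded lock hold.
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.locks.ReentrantLock;

public class AsyncBlockDeleter implements Runnable {
  private final BlockingQueue<List<Long>> markedDeleteQueue = new LinkedBlockingQueue<>();
  private final ReentrantLock namespaceLock = new ReentrantLock();
  private static final long LOCK_BUDGET_MS = 50; // release the lock after this long

  // Called from the delete RPC after collecting blocks: O(1), no lock held,
  // so the client-perceived latency no longer includes block removal.
  public void enqueue(List<Long> toDeleteBlockIds) {
    markedDeleteQueue.add(toDeleteBlockIds);
  }

  @Override
  public void run() {
    while (!Thread.currentThread().isInterrupted()) {
      try {
        List<Long> blocks = markedDeleteQueue.take();
        int i = 0;
        while (i < blocks.size()) {
          namespaceLock.lock();
          long start = System.currentTimeMillis();
          try {
            // Delete blocks until the time budget expires, then release the
            // lock so other namespace operations can make progress.
            while (i < blocks.size()
                && System.currentTimeMillis() - start < LOCK_BUDGET_MS) {
              removeBlock(blocks.get(i++));
            }
          } finally {
            namespaceLock.unlock();
          }
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }
  }

  private void removeBlock(long blockId) { /* invalidate replicas, etc. */ }
}
```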
[jira] [Work logged] (HADOOP-17774) bytesRead FS statistic showing twice the correct value in S3A
[ https://issues.apache.org/jira/browse/HADOOP-17774?focusedWorklogId=614953&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614953 ] ASF GitHub Bot logged work on HADOOP-17774: --- Author: ASF GitHub Bot Created on: 25/Jun/21 11:59 Start Date: 25/Jun/21 11:59 Worklog Time Spent: 10m Work Description: mehakmeet opened a new pull request #3144: URL: https://github.com/apache/hadoop/pull/3144 Test command: ```mvn clean verify -Dparallel-tests -DtestsThreadCount=4 -Dscale``` Region: ap-south-1 ``` [INFO] Results: [INFO] [WARNING] Tests run: 568, Failures: 0, Errors: 0, Skipped: 5 ``` ``` [INFO] Results: [INFO] [ERROR] Errors: [ERROR] ITestS3AMiscOperationCost.testGetContentSummaryRoot:96->AbstractS3ACostTest.verifyMetrics:376->lambda$testGetContentSummaryRoot$1:96->getContentSummary:140 » TestTimedOut [ERROR] ITestS3AMiscOperationCost.testGetContentSummaryRoot:96->AbstractS3ACostTest.verifyMetrics:376->lambda$testGetContentSummaryRoot$1:96->getContentSummary:140 » TestTimedOut [INFO] [ERROR] Tests run: 1460, Failures: 0, Errors: 2, Skipped: 462 ``` ``` [ERROR] Errors: [ERROR] ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRecursiveRootListing:267 » TestTimedOut [INFO] [ERROR] Tests run: 151, Failures: 2, Errors: 1, Skipped: 28 ``` Seeing these errors: ``` [ERROR] Failures: [ERROR] ITestS3AContractRootDir.testListEmptyRootDirectory:82->AbstractContractRootDirectoryTest.testListEmptyRootDirectory:196->Assert.fail:89 Deleted file: unexpectedly found s3a://mehakmeet-singh-data/user as S3AFileStatus{path=s3a://mehakmeet-singh-data/user; isDirectory=true; modification_time=0; access_time=0; owner=mehakmeet.singh; group=mehakmeet.singh; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; isEncrypted=true; isErasureCoded=false} isEmptyDirectory=FALSE eTag=null versionId=null [ERROR] ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRmEmptyRootDirNonRecursive:101->Assert.fail:89 After 20 attempts: listing after rm /* not empty final [00] S3AFileStatus{path=s3a://mehakmeet-singh-data/user; isDirectory=true; modification_time=0; access_time=0; owner=mehakmeet.singh; group=mehakmeet.singh; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; isEncrypted=true; isErasureCoded=false} isEmptyDirectory=FALSE eTag=null versionId=null ``` I have seen these errors intermittently, due to some issue with the DynamoDB table. 
@steveloughran suggested ```hadoop org.apache.hadoop.fs.s3a.s3guard.PurgeS3GuardDynamoTable -force s3a://example-bucket/```, but that fails with an error: ``` 2021-06-25 16:18:43,464 INFO service.AbstractService: Service PurgeS3GuardDynamoTable failed in state STARTED -1: Filesystem has no metadata store: s3a://mehakmeet-singh-data at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardDynamoDBDiagnostic.failure(AbstractS3GuardDynamoDBDiagnostic.java:115) at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardDynamoDBDiagnostic.require(AbstractS3GuardDynamoDBDiagnostic.java:94) at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardDynamoDBDiagnostic.bindStore(AbstractS3GuardDynamoDBDiagnostic.java:157) at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardDynamoDBDiagnostic.bindFromCLI(AbstractS3GuardDynamoDBDiagnostic.java:147) at org.apache.hadoop.fs.s3a.s3guard.PurgeS3GuardDynamoTable.serviceStart(PurgeS3GuardDynamoTable.java:123) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194) at org.apache.hadoop.service.launcher.ServiceLauncher.coreServiceLaunch(ServiceLauncher.java:619) at org.apache.hadoop.service.launcher.ServiceLauncher.launchService(ServiceLauncher.java:494) at org.apache.hadoop.fs.s3a.s3guard.DumpS3GuardDynamoTable.serviceMain(DumpS3GuardDynamoTable.java:517) at org.apache.hadoop.fs.s3a.s3guard.PurgeS3GuardDynamoTable.main(PurgeS3GuardDynamoTable.java:205) 2021-06-25 16:18:43,467 INFO util.ExitUtil: Exiting with status -1: Filesystem has no metadata store: s3a://mehakmeet-singh-data ``` I would like the reviewers to also run the AWS test suite once in their setup while reviewing. CC: @steveloughran @mukund-thakur @bogthe -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 614953) Remaining Estimate: 0h Time Spent: 10m > bytesRead FS statistic showing twice the correct value in S3A > - > > Key: HADOOP
[jira] [Updated] (HADOOP-17774) bytesRead FS statistic showing twice the correct value in S3A
[ https://issues.apache.org/jira/browse/HADOOP-17774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-17774: Labels: pull-request-available (was: ) > bytesRead FS statistic showing twice the correct value in S3A > - > > Key: HADOOP-17774 > URL: https://issues.apache.org/jira/browse/HADOOP-17774 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Mehakmeet Singh >Assignee: Mehakmeet Singh >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > The S3A "bytes read" statistic is being incremented twice: first while reading > in S3AInputStream, and then in merge() of S3AInstrumentation when the > S3AInputStream is closed. > This makes the "bytes read" statistic equal to the sum of stream_read_bytes and > stream_read_total_bytes. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
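The shape of the double-counting bug described in this issue is easy to reproduce in miniature. The sketch below is hypothetical — plain counters, not the real S3AInputStream/S3AInstrumentation classes — but shows the mechanism: a shared counter is incremented during every read and then the per-stream total is merged into the same counter again at close(), so the aggregate lands at exactly twice the true value.

```java
// Hypothetical miniature of the double-count: not the actual S3A code.
import java.util.concurrent.atomic.AtomicLong;

public class DoubleCountSketch {
  static final AtomicLong fsBytesRead = new AtomicLong(); // filesystem-wide counter

  static class Stream {
    long streamBytesRead; // per-stream statistic

    void read(int n) {
      streamBytesRead += n;
      fsBytesRead.addAndGet(n); // first increment, during the read
    }

    void close() {
      // Bug: merging the per-stream total into a counter that was already
      // updated incrementally counts every byte a second time.
      fsBytesRead.addAndGet(streamBytesRead);
    }
  }

  public static void main(String[] args) {
    Stream s = new Stream();
    s.read(100);
    s.close();
    System.out.println(fsBytesRead.get()); // prints 200, not 100
  }
}
```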
[GitHub] [hadoop] mehakmeet opened a new pull request #3144: HADOOP-17774. bytesRead FS statistic showing twice the correct value in S3A
mehakmeet opened a new pull request #3144: URL: https://github.com/apache/hadoop/pull/3144 Test command: ```mvn clean verify -Dparallel-tests -DtestsThreadCount=4 -Dscale``` Region: ap-south-1 ``` [INFO] Results: [INFO] [WARNING] Tests run: 568, Failures: 0, Errors: 0, Skipped: 5 ``` ``` [INFO] Results: [INFO] [ERROR] Errors: [ERROR] ITestS3AMiscOperationCost.testGetContentSummaryRoot:96->AbstractS3ACostTest.verifyMetrics:376->lambda$testGetContentSummaryRoot$1:96->getContentSummary:140 » TestTimedOut [ERROR] ITestS3AMiscOperationCost.testGetContentSummaryRoot:96->AbstractS3ACostTest.verifyMetrics:376->lambda$testGetContentSummaryRoot$1:96->getContentSummary:140 » TestTimedOut [INFO] [ERROR] Tests run: 1460, Failures: 0, Errors: 2, Skipped: 462 ``` ``` [ERROR] Errors: [ERROR] ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRecursiveRootListing:267 » TestTimedOut [INFO] [ERROR] Tests run: 151, Failures: 2, Errors: 1, Skipped: 28 ``` Seeing these errors: ``` [ERROR] Failures: [ERROR] ITestS3AContractRootDir.testListEmptyRootDirectory:82->AbstractContractRootDirectoryTest.testListEmptyRootDirectory:196->Assert.fail:89 Deleted file: unexpectedly found s3a://mehakmeet-singh-data/user as S3AFileStatus{path=s3a://mehakmeet-singh-data/user; isDirectory=true; modification_time=0; access_time=0; owner=mehakmeet.singh; group=mehakmeet.singh; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; isEncrypted=true; isErasureCoded=false} isEmptyDirectory=FALSE eTag=null versionId=null [ERROR] ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRmEmptyRootDirNonRecursive:101->Assert.fail:89 After 20 attempts: listing after rm /* not empty final [00] S3AFileStatus{path=s3a://mehakmeet-singh-data/user; isDirectory=true; modification_time=0; access_time=0; owner=mehakmeet.singh; group=mehakmeet.singh; permission=rwxrwxrwx; isSymlink=false; hasAcl=false; isEncrypted=true; isErasureCoded=false} isEmptyDirectory=FALSE eTag=null versionId=null ``` I have seen these errors intermittently, due to some issue with the DynamoDB table. 
@steveloughran suggested ```hadoop org.apache.hadoop.fs.s3a.s3guard.PurgeS3GuardDynamoTable -force s3a://example-bucket/```, but that fails with an error: ``` 2021-06-25 16:18:43,464 INFO service.AbstractService: Service PurgeS3GuardDynamoTable failed in state STARTED -1: Filesystem has no metadata store: s3a://mehakmeet-singh-data at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardDynamoDBDiagnostic.failure(AbstractS3GuardDynamoDBDiagnostic.java:115) at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardDynamoDBDiagnostic.require(AbstractS3GuardDynamoDBDiagnostic.java:94) at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardDynamoDBDiagnostic.bindStore(AbstractS3GuardDynamoDBDiagnostic.java:157) at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardDynamoDBDiagnostic.bindFromCLI(AbstractS3GuardDynamoDBDiagnostic.java:147) at org.apache.hadoop.fs.s3a.s3guard.PurgeS3GuardDynamoTable.serviceStart(PurgeS3GuardDynamoTable.java:123) at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194) at org.apache.hadoop.service.launcher.ServiceLauncher.coreServiceLaunch(ServiceLauncher.java:619) at org.apache.hadoop.service.launcher.ServiceLauncher.launchService(ServiceLauncher.java:494) at org.apache.hadoop.fs.s3a.s3guard.DumpS3GuardDynamoTable.serviceMain(DumpS3GuardDynamoTable.java:517) at org.apache.hadoop.fs.s3a.s3guard.PurgeS3GuardDynamoTable.main(PurgeS3GuardDynamoTable.java:205) 2021-06-25 16:18:43,467 INFO util.ExitUtil: Exiting with status -1: Filesystem has no metadata store: s3a://mehakmeet-singh-data ``` I would like the reviewers to also run the AWS test suite once in their setup while reviewing. CC: @steveloughran @mukund-thakur @bogthe -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17290) ABFS: Add Identifiers to Client Request Header
[ https://issues.apache.org/jira/browse/HADOOP-17290?focusedWorklogId=614952&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614952 ] ASF GitHub Bot logged work on HADOOP-17290: --- Author: ASF GitHub Bot Created on: 25/Jun/21 11:57 Start Date: 25/Jun/21 11:57 Worklog Time Spent: 10m Work Description: anoopsjohn commented on a change in pull request #2520: URL: https://github.com/apache/hadoop/pull/2520#discussion_r658613610 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsLease.java ## @@ -114,13 +119,15 @@ public AbfsLease(AbfsClient client, String path, int acquireMaxRetries, LOG.debug("Acquired lease {} on {}", leaseID, path); } - private void acquireLease(RetryPolicy retryPolicy, int numRetries, int retryInterval, long delay) + private void acquireLease(RetryPolicy retryPolicy, int numRetries, + int retryInterval, long delay) throws LeaseException { LOG.debug("Attempting to acquire lease on {}, retry {}", path, numRetries); if (future != null && !future.isDone()) { throw new LeaseException(ERR_LEASE_FUTURE_EXISTS); } -future = client.schedule(() -> client.acquireLease(path, INFINITE_LEASE_DURATION), +future = client.schedule(() -> client.acquireLease(path, +INFINITE_LEASE_DURATION, new TracingContext(tracingContext)), Review comment: Is this clone of the context needed here? What gets changed? ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java ## @@ -160,6 +170,14 @@ public AbfsOutputStream( if (outputStreamStatistics != null) { this.ioStatistics = outputStreamStatistics.getIOStatistics(); } +this.outputStreamId = getOutputStreamId(); +this.tracingContext = new TracingContext(tracingContext); +this.tracingContext.setStreamID(outputStreamId); +this.tracingContext.setOperation(FSOperationType.WRITE); + } + + private String getOutputStreamId() { Review comment: getOutputStreamId() and getStreamID() -> both create some confusion. Normally a getter just returns an already-available value; getStreamID() makes sense. Could you use createOutputStreamId() instead? ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java ## @@ -385,7 +412,9 @@ private void writeAppendBlobCurrentBufferToService() throws IOException { "writeCurrentBufferToService", "append")) { AppendRequestParameters reqParams = new AppendRequestParameters(offset, 0, bytesLength, APPEND_MODE, true, leaseId); - AbfsRestOperation op = client.append(path, bytes, reqParams, cachedSasToken.get()); + AbfsRestOperation op = client + .append(path, bytes, reqParams, cachedSasToken.get(), + new TracingContext(tracingContext)); Review comment: OK, the context being passed here might get changed, at least w.r.t. the retryCount; is that why it is cloned here? ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java ## @@ -202,9 +206,10 @@ private void completeExecute() throws AzureBlobFileSystemException { retryCount = 0; LOG.debug("First execution of REST operation - {}", operationType); -while (!executeHttpOperation(retryCount)) { +while (!executeHttpOperation(retryCount, tracingContext)) { try { ++retryCount; +tracingContext.setRetryCount(retryCount); Review comment: Yes, here. 
## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java ## @@ -160,6 +170,14 @@ public AbfsOutputStream( if (outputStreamStatistics != null) { this.ioStatistics = outputStreamStatistics.getIOStatistics(); } +this.outputStreamId = getOutputStreamId(); +this.tracingContext = new TracingContext(tracingContext); +this.tracingContext.setStreamID(outputStreamId); +this.tracingContext.setOperation(FSOperationType.WRITE); + } + + private String getOutputStreamId() { Review comment: The same applies to AbfsInputStream as well. ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/TracingContext.java ## @@ -0,0 +1,170 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.a
[GitHub] [hadoop] anoopsjohn commented on a change in pull request #2520: HADOOP-17290. ABFS: Add Identifiers to Client Request Header
anoopsjohn commented on a change in pull request #2520: URL: https://github.com/apache/hadoop/pull/2520#discussion_r658613610 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsLease.java ## @@ -114,13 +119,15 @@ public AbfsLease(AbfsClient client, String path, int acquireMaxRetries, LOG.debug("Acquired lease {} on {}", leaseID, path); } - private void acquireLease(RetryPolicy retryPolicy, int numRetries, int retryInterval, long delay) + private void acquireLease(RetryPolicy retryPolicy, int numRetries, + int retryInterval, long delay) throws LeaseException { LOG.debug("Attempting to acquire lease on {}, retry {}", path, numRetries); if (future != null && !future.isDone()) { throw new LeaseException(ERR_LEASE_FUTURE_EXISTS); } -future = client.schedule(() -> client.acquireLease(path, INFINITE_LEASE_DURATION), +future = client.schedule(() -> client.acquireLease(path, +INFINITE_LEASE_DURATION, new TracingContext(tracingContext)), Review comment: Is this clone of the context needed here? What gets changed? ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java ## @@ -160,6 +170,14 @@ public AbfsOutputStream( if (outputStreamStatistics != null) { this.ioStatistics = outputStreamStatistics.getIOStatistics(); } +this.outputStreamId = getOutputStreamId(); +this.tracingContext = new TracingContext(tracingContext); +this.tracingContext.setStreamID(outputStreamId); +this.tracingContext.setOperation(FSOperationType.WRITE); + } + + private String getOutputStreamId() { Review comment: getOutputStreamId() and getStreamID() -> both create some confusion. Normally a getter just returns an already-available value; getStreamID() makes sense. Could you use createOutputStreamId() instead? ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java ## @@ -385,7 +412,9 @@ private void writeAppendBlobCurrentBufferToService() throws IOException { "writeCurrentBufferToService", "append")) { AppendRequestParameters reqParams = new AppendRequestParameters(offset, 0, bytesLength, APPEND_MODE, true, leaseId); - AbfsRestOperation op = client.append(path, bytes, reqParams, cachedSasToken.get()); + AbfsRestOperation op = client + .append(path, bytes, reqParams, cachedSasToken.get(), + new TracingContext(tracingContext)); Review comment: OK, the context being passed here might get changed, at least w.r.t. the retryCount; is that why it is cloned here? ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java ## @@ -202,9 +206,10 @@ private void completeExecute() throws AzureBlobFileSystemException { retryCount = 0; LOG.debug("First execution of REST operation - {}", operationType); -while (!executeHttpOperation(retryCount)) { +while (!executeHttpOperation(retryCount, tracingContext)) { try { ++retryCount; +tracingContext.setRetryCount(retryCount); Review comment: Yes, here. 
## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java ## @@ -160,6 +170,14 @@ public AbfsOutputStream( if (outputStreamStatistics != null) { this.ioStatistics = outputStreamStatistics.getIOStatistics(); } +this.outputStreamId = getOutputStreamId(); +this.tracingContext = new TracingContext(tracingContext); +this.tracingContext.setStreamID(outputStreamId); +this.tracingContext.setOperation(FSOperationType.WRITE); + } + + private String getOutputStreamId() { Review comment: The same applies to AbfsInputStream as well. ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/TracingContext.java ## @@ -0,0 +1,170 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.azurebfs.utils; + +import java.uti
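The clone-versus-share question in this review thread comes down to mutability: the retry loop mutates the context's retryCount, so sharing one mutable context across operations would let their retry counts clobber each other. A rough sketch of that reasoning follows; the names are illustrative stand-ins, not the actual ABFS classes.

```java
// Hypothetical sketch of why a per-operation copy of the tracing context matters.
public class TracingContextSketch {
  static class TracingContext {
    private int retryCount;
    private final String streamId;

    TracingContext(String streamId) {
      this.streamId = streamId;
    }

    // Copy constructor: each REST operation gets its own mutable counter.
    TracingContext(TracingContext other) {
      this.streamId = other.streamId;
      this.retryCount = other.retryCount;
    }

    void setRetryCount(int retryCount) { this.retryCount = retryCount; }

    String header() { return streamId + ":" + retryCount; }
  }

  static void restOperation(TracingContext shared) {
    // Clone at the operation boundary; retries mutate only the copy.
    TracingContext ctx = new TracingContext(shared);
    for (int retry = 0; retry < 3; retry++) {
      ctx.setRetryCount(retry);
      // ... attempt the HTTP request, sending ctx.header() ...
    }
  }

  public static void main(String[] args) {
    TracingContext streamContext = new TracingContext("stream-1");
    restOperation(streamContext);
    restOperation(streamContext); // unaffected by the first call's retries
  }
}
```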
[jira] [Work logged] (HADOOP-17250) ABFS: Random read perf improvement
[ https://issues.apache.org/jira/browse/HADOOP-17250?focusedWorklogId=614951&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614951 ] ASF GitHub Bot logged work on HADOOP-17250: --- Author: ASF GitHub Bot Created on: 25/Jun/21 11:49 Start Date: 25/Jun/21 11:49 Worklog Time Spent: 10m Work Description: hadoop-yetus removed a comment on pull request #3110: URL: https://github.com/apache/hadoop/pull/3110#issuecomment-862423671 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 50s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 6s | | trunk passed | | +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 23s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 36s | | trunk passed | | +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 1s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 35s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 28s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | +1 :green_heart: | compile | 0m 25s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 25s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3110/1/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 0m 16s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3110/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 5 new + 2 unchanged - 0 fixed = 7 total (was 2) | | +1 :green_heart: | mvnsite | 0m 28s | | the patch passed | | +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 1s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 20s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 57s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 29s | | The patch does not generate ASF License warnings. 
| | | | 78m 4s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3110/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3110 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux b406f51affe9 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 9debcbb3586c7076ea852da7398baaf8d27cde4c | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3110: HADOOP-17250 Lot of short reads can be merged with readahead.
hadoop-yetus removed a comment on pull request #3110: URL: https://github.com/apache/hadoop/pull/3110#issuecomment-862423671 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 50s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 4 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 6s | | trunk passed | | +1 :green_heart: | compile | 0m 36s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 23s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 36s | | trunk passed | | +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 1s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 35s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 28s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | +1 :green_heart: | compile | 0m 25s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 25s | | the patch passed | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3110/1/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 0m 16s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3110/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 5 new + 2 unchanged - 0 fixed = 7 total (was 2) | | +1 :green_heart: | mvnsite | 0m 28s | | the patch passed | | +1 :green_heart: | javadoc | 0m 22s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 1s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 20s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 57s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 29s | | The patch does not generate ASF License warnings. 
| | | | 78m 4s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3110/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3110 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux b406f51affe9 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 9debcbb3586c7076ea852da7398baaf8d27cde4c | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3110/1/testReport/ | | Max. process+thread count | 524 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3110/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
[GitHub] [hadoop] tomscut commented on pull request #3140: HDFS-16088. Standby NameNode process getLiveDatanodeStorageReport req…
tomscut commented on pull request #3140: URL: https://github.com/apache/hadoop/pull/3140#issuecomment-868404825 All of these UTs work fine locally. Hi @Hexiaoqiao @tasanuma @jojochuang @aajisaka @ayushtkn, please help to review the code when you have time. Thanks a lot. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] PrabhuJoseph commented on pull request #3142: YARN-10820: Handle yarn node list synchronization issue
PrabhuJoseph commented on pull request #3142: URL: https://github.com/apache/hadoop/pull/3142#issuecomment-868404713 Thanks @swathic95 for the patch. It looks good, +1. Will commit it shortly. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #3143: YARN-10832. Aggregation failure, but the local log on nodemanager is also deleted
hadoop-yetus commented on pull request #3143: URL: https://github.com/apache/hadoop/pull/3143#issuecomment-868392892 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 36s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 17s | | trunk passed | | +1 :green_heart: | compile | 1m 41s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 31s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 36s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 51s | | trunk passed | | +1 :green_heart: | javadoc | 0m 40s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 38s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 32s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 45s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 17m 5s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 42s | | the patch passed | | +1 :green_heart: | compile | 1m 25s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 25s | | the patch passed | | +1 :green_heart: | compile | 1m 25s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 25s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 26s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 38s | | the patch passed | | +1 :green_heart: | javadoc | 0m 32s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 29s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 29s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 47s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 23m 19s | | hadoop-yarn-server-nodemanager in the patch passed. | | +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. 
| | | | 105m 13s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3143/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3143 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux cfedaa3fe961 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 55a7a475ebe4ddc1c678d22d7f2e63213c7e3c44 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3143/2/testReport/ | | Max. process+thread count | 544 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3143/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
[jira] [Work logged] (HADOOP-17250) ABFS: Random read perf improvement
[ https://issues.apache.org/jira/browse/HADOOP-17250?focusedWorklogId=614922&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614922 ] ASF GitHub Bot logged work on HADOOP-17250: --- Author: ASF GitHub Bot Created on: 25/Jun/21 10:11 Start Date: 25/Jun/21 10:11 Worklog Time Spent: 10m Work Description: mukund-thakur commented on a change in pull request #3110: URL: https://github.com/apache/hadoop/pull/3110#discussion_r658653973 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java ## @@ -696,6 +713,11 @@ public boolean hasCapability(String capability) { return buffer; } + @VisibleForTesting + public int getReadAheadRange() { Review comment: We are not adding accessors for the other configs, so it doesn't make sense to add one for just this one. Issue Time Tracking --- Worklog Id: (was: 614922) Time Spent: 3h 40m (was: 3.5h) > ABFS: Random read perf improvement > -- > > Key: HADOOP-17250 > URL: https://issues.apache.org/jira/browse/HADOOP-17250 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Sneha Vijayarajan >Assignee: Mukund Thakur >Priority: Major > Labels: abfsactive, pull-request-available > Time Spent: 3h 40m > Remaining Estimate: 0h > > Random reads with a marginal read-ahead were seen to improve perf for a TPCH > query. > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hadoop] mukund-thakur commented on a change in pull request #3110: HADOOP-17250 Lots of short reads can be merged with readahead.
mukund-thakur commented on a change in pull request #3110: URL: https://github.com/apache/hadoop/pull/3110#discussion_r658653973 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java ## @@ -696,6 +713,11 @@ public boolean hasCapability(String capability) { return buffer; } + @VisibleForTesting + public int getReadAheadRange() { Review comment: We are not adding accessors for the other configs, so it doesn't make sense to add one for just this one.
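Editor's note on the pattern under review: annotating a getter with @VisibleForTesting exposes an internal setting to unit tests without promising it as public API. A minimal Java sketch of the idea, using Guava's annotation and illustrative names (this is not the actual AbfsInputStream code, whose readahead logic is far more involved):

```java
// Hypothetical class for illustration; only the accessor pattern mirrors the diff above.
import com.google.common.annotations.VisibleForTesting;

public class ReadAheadStream {
  // Read-ahead range in bytes, resolved from configuration at construction time.
  private final int readAheadRange;

  public ReadAheadStream(int readAheadRange) {
    this.readAheadRange = readAheadRange;
  }

  // Test-only accessor: lets tests assert on the effective configuration
  // without widening the class's supported public surface.
  @VisibleForTesting
  public int getReadAheadRange() {
    return readAheadRange;
  }
}
```

The reviewer's position rests on consistency: if no other configuration value gets such an accessor, adding one only for the readahead range would be an odd one-off.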
[GitHub] [hadoop] hadoop-yetus commented on pull request #3140: HDFS-16088. Standby NameNode process getLiveDatanodeStorageReport req…
hadoop-yetus commented on pull request #3140: URL: https://github.com/apache/hadoop/pull/3140#issuecomment-868392333 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 47s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 11s | | trunk passed | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 19s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 9s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 34s | | trunk passed | | +1 :green_heart: | javadoc | 0m 58s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 32s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 37s | | trunk passed | | +1 :green_heart: | shadedclient | 19m 24s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 24s | | the patch passed | | +1 :green_heart: | compile | 1m 21s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 21s | | the patch passed | | +1 :green_heart: | compile | 1m 11s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 11s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 52s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 14s | | the patch passed | | +1 :green_heart: | javadoc | 0m 48s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 24s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 24s | | the patch passed | | +1 :green_heart: | shadedclient | 4m 34s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 348m 13s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3140/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. 
| | | | 426m 56s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.viewfs.TestViewFileSystemHdfs | | | hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot | | | hadoop.fs.viewfs.TestViewFsHdfs | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.TestListFilesInDFS | | | hadoop.fs.TestHDFSFileContextMainOperations | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.fs.viewfs.TestViewFileSystemLinkFallback | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.TestListFilesInFileContext | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus | | | hadoop.hdfs.TestViewDistributedFileSystemContract | | | hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory | | | hadoop.fs.viewfs.TestViewFileSystemLinkRegex | | | hadoop.fs.viewfs.TestViewFsAtHdfsRoot | | | hadoop.fs.viewfs.TestViewFileSystemLinkMergeSlash | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3140/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3140 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
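Editor's note for context on HDFS-16088: getDatanodeStorageReport is the ClientProtocol RPC that, per the patch title, the Standby NameNode should be able to serve. A hedged sketch of invoking it through the stock HDFS client API (standard calls as of Hadoop 3.x, not code from this patch; DistributedFileSystem#getClient is a private-audience method used here only for brevity):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;
import org.apache.hadoop.hdfs.server.protocol.DatanodeStorageReport;

public class LiveStorageReport {
  public static void main(String[] args) throws Exception {
    // Assumes fs.defaultFS in core-site.xml points at an HDFS cluster or HA nameservice.
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.get(conf)) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      // LIVE restricts the report to datanodes the NameNode currently considers alive.
      DatanodeStorageReport[] reports =
          dfs.getClient().getDatanodeStorageReport(DatanodeReportType.LIVE);
      for (DatanodeStorageReport r : reports) {
        System.out.println(r.getDatanodeInfo().getHostName());
      }
    }
  }
}
```

The same report is more commonly obtained via `hdfs dfsadmin -report`; the patch concerns letting a Standby NameNode answer the request directly instead of forcing it to the active.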
[GitHub] [hadoop] hadoop-yetus commented on pull request #3143: YARN-10832. Aggregation failure, but the local log on the nodemanager is also deleted
hadoop-yetus commented on pull request #3143: URL: https://github.com/apache/hadoop/pull/3143#issuecomment-868391764 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 36s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 29m 56s | | trunk passed | | +1 :green_heart: | compile | 1m 29s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 25s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 35s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 47s | | trunk passed | | +1 :green_heart: | javadoc | 0m 41s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 24s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 34s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 14m 54s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 36s | | the patch passed | | +1 :green_heart: | compile | 1m 21s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 21s | | the patch passed | | +1 :green_heart: | compile | 1m 16s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 16s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 26s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 37s | | the patch passed | | +1 :green_heart: | javadoc | 0m 31s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 28s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 25s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 49s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 22m 48s | | hadoop-yarn-server-nodemanager in the patch passed. | | +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. 
| | | | 97m 26s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3143/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3143 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 9d8796ecf914 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 6c9a6812ae6a3f122a5c5bdf129e731d9e16b3e2 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3143/3/testReport/ | | Max. process+thread count | 622 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3143/3/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #3141: HDFS-16087. Fix stuck issue in rbfbalance tool.
hadoop-yetus commented on pull request #3141: URL: https://github.com/apache/hadoop/pull/3141#issuecomment-868362493 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 34s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 11m 37s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 30s | | trunk passed | | +1 :green_heart: | compile | 22m 23s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 19m 47s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 3m 46s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 34s | | trunk passed | | +1 :green_heart: | javadoc | 1m 28s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 44s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 2m 30s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 47s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 0s | | the patch passed | | +1 :green_heart: | compile | 20m 54s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 20m 54s | | the patch passed | | +1 :green_heart: | compile | 18m 59s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 18m 59s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 4m 55s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 37s | | the patch passed | | +1 :green_heart: | javadoc | 1m 38s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 50s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 2m 50s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 5s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 6m 43s | | hadoop-federation-balance in the patch passed. | | -1 :x: | unit | 20m 13s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3141/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch failed. | | +1 :green_heart: | asflicense | 0m 59s | | The patch does not generate ASF License warnings. 
| | | | 202m 6s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.contract.router.TestRouterHDFSContractRootDirectory | | | hadoop.fs.contract.router.TestRouterHDFSContractGetFileStatusSecure | | | hadoop.fs.contract.router.TestRouterHDFSContractRootDirectorySecure | | | hadoop.fs.contract.router.TestRouterHDFSContractGetFileStatus | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3141/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3141 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 98f9b81508cd 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / fd44678bd7141a0d049c166d76e9cbd29b2974e5 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
[GitHub] [hadoop] hadoop-yetus commented on pull request #3143: YARN-10832. Aggregation failure, but the local log on the nodemanager is also deleted
hadoop-yetus commented on pull request #3143: URL: https://github.com/apache/hadoop/pull/3143#issuecomment-868319978 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 33s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 30m 7s | | trunk passed | | +1 :green_heart: | compile | 1m 31s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 35s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 47s | | trunk passed | | +1 :green_heart: | javadoc | 0m 39s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 36s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 22s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 39s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 37s | | the patch passed | | +1 :green_heart: | compile | 1m 23s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 23s | | the patch passed | | +1 :green_heart: | compile | 1m 19s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 19s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 25s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 37s | | the patch passed | | +1 :green_heart: | javadoc | 0m 30s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 28s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 24s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 22s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 22m 57s | | hadoop-yarn-server-nodemanager in the patch passed. | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. 
| | | | 97m 25s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3143/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3143 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux dc3c457637e7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 5b406f27919503c7b041d31d9d69349b217a8fc1 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3143/1/testReport/ | | Max. process+thread count | 544 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3143/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #3142: YARN-10820: Handle yarn node list synchronization issue
hadoop-yetus commented on pull request #3142: URL: https://github.com/apache/hadoop/pull/3142#issuecomment-868295855 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 33s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 47s | | trunk passed | | +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 0m 33s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 47s | | trunk passed | | +1 :green_heart: | javadoc | 0m 44s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 44s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 41s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 2s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 40s | | the patch passed | | +1 :green_heart: | compile | 0m 43s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 0m 43s | | the patch passed | | +1 :green_heart: | compile | 0m 39s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 0m 39s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 25s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3142/1/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt) | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: The patch generated 1 new + 6 unchanged - 1 fixed = 7 total (was 7) | | +1 :green_heart: | mvnsite | 0m 40s | | the patch passed | | +1 :green_heart: | javadoc | 0m 37s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 0m 37s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 1m 45s | | the patch passed | | +1 :green_heart: | shadedclient | 14m 21s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 4m 46s | | hadoop-yarn-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. 
| | | | 79m 50s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3142/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3142 | | JIRA Issue | YARN-10820 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 5b05dc69630b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 96dc7386b665310f01cc21b71fb678eec39d9177 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3142/1/testReport/ | | Max. process+thread count | 661 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3142/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
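Editor's note on the bug class behind YARN-10820: a node list that one thread iterates while another mutates it. The standard remedies are a synchronized wrapper plus explicit locking around iteration, or handing out snapshot copies. A generic Java sketch of the pattern (illustrative only, not the actual YarnClient/ResourceManager code the patch touches):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class NodeTracker {
  // Shared, mutable node list; individual operations are synchronized by the wrapper.
  private final List<String> nodes = Collections.synchronizedList(new ArrayList<>());

  public void add(String nodeId) {
    nodes.add(nodeId);
  }

  // Iteration over a synchronizedList still needs an explicit lock:
  // without it, a concurrent add/remove can throw ConcurrentModificationException.
  public List<String> snapshot() {
    synchronized (nodes) {
      return new ArrayList<>(nodes);
    }
  }
}
```

Returning a snapshot keeps readers (for example, queries in the style of `yarn node -list`) safe from concurrent mutation, at the cost of one copy per call.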