[GitHub] [hadoop] LeonGao91 opened a new pull request #2704: HDFS-15781. Add metrics for how blocks are moved in replaceBlock.
LeonGao91 opened a new pull request #2704:
URL: https://github.com/apache/hadoop/pull/2704

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17527) ABFS: Fix boundary conditions in InputStream seek and skip
[ https://issues.apache.org/jira/browse/HADOOP-17527?focusedWorklogId=552837&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552837 ]

ASF GitHub Bot logged work on HADOOP-17527:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 16/Feb/21 06:58
            Start Date: 16/Feb/21 06:58
    Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on pull request #2698:
URL: https://github.com/apache/hadoop/pull/2698#issuecomment-779630975

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 33s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 2 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 32m 16s | | trunk passed |
| +1 :green_heart: | compile | 0m 38s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 0m 35s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 0m 30s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 40s | | trunk passed |
| +1 :green_heart: | shadedclient | 14m 26s | | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 34s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 29s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +0 :ok: | spotbugs | 1m 2s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 0m 59s | | trunk passed |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 29s | | the patch passed |
| +1 :green_heart: | compile | 0m 31s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 0m 31s | | the patch passed |
| +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 0m 26s | | the patch passed |
| +1 :green_heart: | checkstyle | 0m 18s | | the patch passed |
| +1 :green_heart: | mvnsite | 0m 28s | | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 12m 37s | | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 27s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 0m 25s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | findbugs | 0m 59s | | the patch passed |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 53s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 33s | | The patch does not generate ASF License warnings. |
| | | 73m 1s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2698/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2698 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 5931251a307d 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 07a4220cd27 |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2698/3/testReport/ |
| Max. process+thread count | 670 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2698/3/console |
| versions | git=2.25.1 maven=3.6.3 findbugs=4.0.6 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
[GitHub] [hadoop] aajisaka commented on pull request #2702: HDFS-15836. RBF: Fix contract tests after HADOOP-13327
aajisaka commented on pull request #2702:
URL: https://github.com/apache/hadoop/pull/2702#issuecomment-779599419

Merged. Thank you @ayushtkn
[GitHub] [hadoop] aajisaka merged pull request #2702: HDFS-15836. RBF: Fix contract tests after HADOOP-13327
aajisaka merged pull request #2702:
URL: https://github.com/apache/hadoop/pull/2702
[jira] [Updated] (HADOOP-17527) ABFS: Fix boundary conditions in InputStream seek and skip
[ https://issues.apache.org/jira/browse/HADOOP-17527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sumangala Patki updated HADOOP-17527:
-------------------------------------
    Summary: ABFS: Fix boundary conditions in InputStream seek and skip  (was: ABFS: Fix condition in InputStream seek)

> ABFS: Fix boundary conditions in InputStream seek and skip
> ----------------------------------------------------------
>
>                 Key: HADOOP-17527
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17527
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.0
>            Reporter: Sumangala Patki
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Fix bug in condition for validating position in AbfsInputStream seek method

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
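The bug tracked here is a boundary condition in position validation. As a hedged illustration only (this is not the actual AbfsInputStream code; the function name and the assumption that valid seek targets are 0 <= pos <= contentLength, with seek-to-EOF allowed, are mine), the kind of check involved looks like:

```python
def validate_seek_position(pos, content_length):
    """Hypothetical boundary check for a seekable stream.

    Assumed valid range: 0 <= pos <= content_length. Note the comparison is
    '>', not '>=': seeking exactly to end-of-file is legal, and a subsequent
    read simply returns EOF. Using '>=' here is the classic off-by-one bug.
    """
    if pos < 0:
        raise ValueError("attempted to seek to a negative offset")
    if pos > content_length:
        raise ValueError("attempted to seek past the end of the stream")
    return pos
```

`skip(n)` is typically implemented on top of the same check, clamping `n` so the resulting position stays inside the valid range.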
[GitHub] [hadoop] aajisaka commented on pull request #2702: HDFS-15836. RBF: Fix contract tests after HADOOP-13327
aajisaka commented on pull request #2702:
URL: https://github.com/apache/hadoop/pull/2702#issuecomment-779576047

Thank you @fengnanli
[jira] [Work logged] (HADOOP-17038) Support disabling buffered reads in ABFS positional reads
[ https://issues.apache.org/jira/browse/HADOOP-17038?focusedWorklogId=552791&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552791 ]

ASF GitHub Bot logged work on HADOOP-17038:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 16/Feb/21 03:37
            Start Date: 16/Feb/21 03:37
    Worklog Time Spent: 10m

Work Description: anoopsjohn commented on pull request #2646:
URL: https://github.com/apache/hadoop/pull/2646#issuecomment-779559570

Thanks @steveloughran. Making it static actually helps; I saw that. Some comments are still left because other numbers like 'byteToRead' show up. Again, the question is whether we need to worry about this for tests; in production code it makes sense. Anyway, as you said, it is better to have it as a checkpoint. I did not fix it, as it does not help much in these tests. Thanks for all the valuable suggestions.

Issue Time Tracking
-------------------
    Worklog Id: (was: 552791)
    Time Spent: 8h 40m  (was: 8.5h)

> Support disabling buffered reads in ABFS positional reads
> ---------------------------------------------------------
>
>                 Key: HADOOP-17038
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17038
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Anoop Sam John
>            Assignee: Anoop Sam John
>            Priority: Major
>              Labels: HBase, abfsactive, pull-request-available
>        Attachments: HBase Perf Test Report.xlsx, screenshot-1.png
>
>          Time Spent: 8h 40m
>  Remaining Estimate: 0h
>
> Right now it does a seek to the position, a read, and then a seek back to the old position (as per the implementation in the superclass). In HBase-style workloads we rely mostly on short preads (64 KB by default), so it would be ideal to support a pure positional-read API that does not even keep the data in a buffer, but reads only the data the caller asked for (no read-ahead beyond the requested size per the read-size config).
> Allow an optional boolean config to be specified while opening a file for read, with which buffered pread can be disabled.
> FutureDataInputStreamBuilder openFile(Path path)

--
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] tomscut edited a comment on pull request #2668: HDFS-15808. Add metrics for FSNamesystem read/write lock hold long time
tomscut edited a comment on pull request #2668:
URL: https://github.com/apache/hadoop/pull/2668#issuecomment-779345021

Failed junit tests:
- hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped
- hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
- hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap

Those failed unit tests are unrelated to the change.
[GitHub] [hadoop] fengnanli commented on pull request #2605: HDFS-15423 RBF: WebHDFS create shouldn't choose DN from all sub-clusters
fengnanli commented on pull request #2605:
URL: https://github.com/apache/hadoop/pull/2605#issuecomment-779521144

The test failures are related to the change in [HADOOP-13327](https://issues.apache.org/jira/browse/HADOOP-13327) and are being fixed in [HDFS-15836](https://issues.apache.org/jira/browse/HDFS-15836).
[jira] [Work logged] (HADOOP-16748) Support Python 3 in dev-support scripts
[ https://issues.apache.org/jira/browse/HADOOP-16748?focusedWorklogId=552778&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552778 ]

ASF GitHub Bot logged work on HADOOP-16748:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 16/Feb/21 00:46
            Start Date: 16/Feb/21 00:46
    Worklog Time Spent: 10m

Work Description: aajisaka commented on a change in pull request #1738:
URL: https://github.com/apache/hadoop/pull/1738#discussion_r576486221

## File path: dev-support/determine-flaky-tests-hadoop.py
@@ -35,22 +35,8 @@
 # at the failed test for the specific run is necessary.
 #
 import sys
-import platform
-sysversion = sys.hexversion
-onward30 = False
-if sysversion < 0x020600F0:
-  sys.exit("Minimum supported python version is 2.6, the current version is " +
-           "Python" + platform.python_version())
-
-if sysversion == 0x03F0:
-  sys.exit("There is a known bug with Python" + platform.python_version() +
-           ", please try a different version");
-
-if sysversion < 0x0300:
-  import urllib2
-else:
-  onward30 = True
-  import urllib.request
+

Review comment: I think it is not used. Now developers can check the age from the Jenkins web UI (https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/418/testReport/) instead of running the script.
![image](https://user-images.githubusercontent.com/3403122/108005235-83b52400-703b-11eb-97f9-cd45de8d4947.png)

Issue Time Tracking
-------------------
    Worklog Id: (was: 552778)
    Time Spent: 4h  (was: 3h 50m)

> Support Python 3 in dev-support scripts
> ---------------------------------------
>
>                 Key: HADOOP-16748
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16748
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Akira Ajisaka
>            Assignee: Akira Ajisaka
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 4h
>  Remaining Estimate: 0h
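The shim removed in the diff above compared `sys.hexversion` against magic hex constants to pick between `urllib2` and `urllib.request`. The modern idiom compares `sys.version_info` tuples; a sketch, assuming the dev-support scripts now target Python 3.2+ (the minimum version is my assumption, not stated in the patch):

```python
import sys

MINIMUM = (3, 2)  # assumed floor; pick whatever the scripts actually need

def check_python_version(version_info=None, minimum=MINIMUM):
    """Return True if the interpreter meets the minimum (major, minor).

    Tuple comparison is lexicographic, so (2, 7) < (3, 2) < (3, 8) holds
    naturally, with none of the 0x020600F0-style hex arithmetic.
    """
    if version_info is None:
        version_info = sys.version_info
    return tuple(version_info)[:2] >= tuple(minimum)
```

A script would call `check_python_version()` at startup and `sys.exit(...)` with a clear message on failure; the conditional `urllib2`/`urllib.request` import disappears entirely once only Python 3 is supported.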
[jira] [Work logged] (HADOOP-17511) Add an Audit plugin point for S3A auditing/context
[ https://issues.apache.org/jira/browse/HADOOP-17511?focusedWorklogId=552728&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552728 ]

ASF GitHub Bot logged work on HADOOP-17511:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 15/Feb/21 21:06
            Start Date: 15/Feb/21 21:06
    Worklog Time Spent: 10m

Work Description: steveloughran commented on pull request #2675:
URL: https://github.com/apache/hadoop/pull/2675#issuecomment-779447171

@bgaborg: thanks for the comments, will take them on. I've done another iteration of this which won't currently compile (it pulls in the request factory from my refactoring, so that prepareRequest() is done there rather than throughout the source). Also, I'm trying to make the context for the logs really useful by wiring up Spark/MR jobs and including that info. We will be able to see which job is doing the IO, rather than just the user. Sweet, eh?

Issue Time Tracking
-------------------
    Worklog Id: (was: 552728)
    Time Spent: 5h 50m  (was: 5h 40m)

> Add an Audit plugin point for S3A auditing/context
> --------------------------------------------------
>
>                 Key: HADOOP-17511
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17511
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: 3.3.1
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Add a way for auditing tools to correlate S3 object calls with Hadoop FS API calls. Initially just to log/forward to an auditing service. Later: let us attach them as parameters in S3 requests, such as OpenTracing headers or (my initial idea) the HTTP referrer header, where it will get into the log.
> Challenges:
> * ensuring the audit span is created for every public entry point; that will have to include those used in s3guard tools and some de facto public APIs
> * not re-entering active spans: S3A code must not call back into the FS API points
> * propagation across worker threads

--
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
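The span-activation and thread-propagation challenges listed in the issue can be illustrated with a thread-local sketch. This is hypothetical Python, not the S3A plugin API; the names are mine. Each FS API entry point activates a span, the request-preparation hook reads the active one, and nothing propagates automatically to worker threads — which is precisely why propagation is called out as a challenge:

```python
import threading
from contextlib import contextmanager

# Per-thread storage: a freshly spawned worker thread sees no active span,
# so spans must be captured and re-activated explicitly across threads.
_local = threading.local()

@contextmanager
def audit_span(operation, path):
    """Activate a span for the duration of one FS API entry point."""
    previous = getattr(_local, "span", None)
    _local.span = {"operation": operation, "path": path}
    try:
        yield _local.span
    finally:
        _local.span = previous  # restore, so re-entrant calls see the outer span

def current_span():
    """What a request-preparation hook would consult, e.g. to attach a
    referrer-style audit header to each outgoing object-store request."""
    return getattr(_local, "span", None)
```

Restoring `previous` in the `finally` block is what keeps nested FS calls from clobbering the outer span, mirroring the "must not re-enter active spans" constraint.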
[jira] [Work logged] (HADOOP-17511) Add an Audit plugin point for S3A auditing/context
[ https://issues.apache.org/jira/browse/HADOOP-17511?focusedWorklogId=552726&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552726 ]

ASF GitHub Bot logged work on HADOOP-17511:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 15/Feb/21 21:04
            Start Date: 15/Feb/21 21:04
    Worklog Time Spent: 10m

Work Description: steveloughran commented on a change in pull request #2675:
URL: https://github.com/apache/hadoop/pull/2675#discussion_r576421020

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/AuditConstants.java
@@ -0,0 +1,22 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+public class AuditConstants {

Review comment: not empty in my current source tree

Issue Time Tracking
-------------------
    Worklog Id: (was: 552726)
    Time Spent: 5h 40m  (was: 5.5h)
[jira] [Work logged] (HADOOP-17038) Support disabling buffered reads in ABFS positional reads
[ https://issues.apache.org/jira/browse/HADOOP-17038?focusedWorklogId=552718&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552718 ] ASF GitHub Bot logged work on HADOOP-17038: --- Author: ASF GitHub Bot Created on: 15/Feb/21 20:52 Start Date: 15/Feb/21 20:52 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2646: URL: https://github.com/apache/hadoop/pull/2646#issuecomment-779442776 > Corrected. These checkstyle should be applied for test code also? In tests these usage looks normal. Anyways handled by making static final. it's annoying, because its only tests, but it is nice to keep checkstyle under control. Looking at the tests though: I don't see how making it static would help?. +1 from me. @surendralilhore : commit at will This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 552718) Time Spent: 8.5h (was: 8h 20m) > Support disabling buffered reads in ABFS positional reads > - > > Key: HADOOP-17038 > URL: https://issues.apache.org/jira/browse/HADOOP-17038 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Major > Labels: HBase, abfsactive, pull-request-available > Attachments: HBase Perf Test Report.xlsx, screenshot-1.png > > Time Spent: 8.5h > Remaining Estimate: 0h > > Right now it will do a seek to the position , read and then seek back to the > old position. (As per the impl in the super class) > In HBase kind of workloads we rely mostly on short preads. (like 64 KB size > by default). So would be ideal to support a pure pos read API which will not > even keep the data in a buffer but will only read the required data as what > is asked for by the caller. 
(Not reading ahead more data as per the read size > config) > Allow an optional boolean config to be specified while opening file for read > using which buffered pread can be disabled. > FutureDataInputStreamBuilder openFile(Path path) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
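The JIRA above asks for a pure positional read: fetch exactly the bytes requested, without moving the stream position and without buffering read-ahead data. At the JDK level that contract is what `FileChannel`'s positional read provides; the sketch below is illustrative only (a local file standing in for the remote store, not the ABFS implementation):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class PReadSketch {
    // A pure positional read: exactly len bytes at offset, no seek/read/seek-back
    // and no buffering beyond what the caller asked for.
    public static byte[] pread(FileChannel ch, long offset, int len) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(len);
        while (buf.hasRemaining()) {
            int n = ch.read(buf, offset + buf.position()); // positional read, channel position untouched
            if (n < 0) {
                break; // EOF before len bytes were available
            }
        }
        buf.flip();
        byte[] out = new byte[buf.remaining()];
        buf.get(out);
        return out;
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("pread", ".txt");
        Files.write(p, "hello world".getBytes(StandardCharsets.UTF_8));
        try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ)) {
            byte[] got = pread(ch, 6, 5);
            System.out.println(new String(got, StandardCharsets.UTF_8)); // world
            System.out.println(ch.position()); // 0 -- position untouched
        }
    }
}
```

This is the behaviour the issue contrasts with the superclass default, which seeks to the position, reads, then seeks back.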
[GitHub] [hadoop] hadoop-yetus commented on pull request #2605: HDFS-15423 RBF: WebHDFS create shouldn't choose DN from all sub-clusters
hadoop-yetus commented on pull request #2605: URL: https://github.com/apache/hadoop/pull/2605#issuecomment-779441844 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 4s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 3 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 25s | | trunk passed | | +1 :green_heart: | compile | 0m 37s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 31s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 22s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 37s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 31s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 36s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 53s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +0 :ok: | spotbugs | 1m 14s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 1m 12s | | trunk passed | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 32s | | the patch passed | | +1 :green_heart: | compile | 0m 32s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 32s | | the patch passed | | +1 :green_heart: | compile | 0m 26s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 0m 26s | | the patch passed | | +1 :green_heart: | checkstyle | 0m 15s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 29s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 33s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 32s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 49s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | findbugs | 1m 18s | | the patch passed | _ Other Tests _ | | -1 :x: | unit | 17m 13s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2605/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. | | +1 :green_heart: | asflicense | 0m 28s | | The patch does not generate ASF License warnings. 
| | | | 97m 49s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.contract.router.TestRouterHDFSContractCreate | | | hadoop.fs.contract.router.TestRouterHDFSContractCreateSecure | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2605/10/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2605 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux c72969321a27 4.15.0-126-generic #129-Ubuntu SMP Mon Nov 23 18:53:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 07a4220cd27 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2605/10/testReport/ | | Max. process+thread count | 2225 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2605/10/console | | versions | git=2.25.1 maven=3.6.3 findbugs=4.0.6 | | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[jira] [Work logged] (HADOOP-17038) Support disabling buffered reads in ABFS positional reads
[ https://issues.apache.org/jira/browse/HADOOP-17038?focusedWorklogId=552714&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552714 ] ASF GitHub Bot logged work on HADOOP-17038: --- Author: ASF GitHub Bot Created on: 15/Feb/21 20:48 Start Date: 15/Feb/21 20:48 Worklog Time Spent: 10m Work Description: steveloughran commented on a change in pull request #2646: URL: https://github.com/apache/hadoop/pull/2646#discussion_r576415754 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java ## @@ -634,12 +638,15 @@ public AbfsInputStream openFileForRead(final Path path, final FileSystem.Statist // Add statistics for InputStream return new AbfsInputStream(client, statistics, relativePath, contentLength, - populateAbfsInputStreamContext(), + populateAbfsInputStreamContext(options), eTag); } } - private AbfsInputStreamContext populateAbfsInputStreamContext() { + private AbfsInputStreamContext populateAbfsInputStreamContext( + Optional options) { +boolean bufferedPreadDisabled = options.isPresent() Review comment: sometimes that optional works. Sometimes, well, it sucks. This is one of those times where it seems clean This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 552714) Time Spent: 8h 20m (was: 8h 10m) > Support disabling buffered reads in ABFS positional reads > - > > Key: HADOOP-17038 > URL: https://issues.apache.org/jira/browse/HADOOP-17038 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Anoop Sam John >Assignee: Anoop Sam John >Priority: Major > Labels: HBase, abfsactive, pull-request-available > Attachments: HBase Perf Test Report.xlsx, screenshot-1.png > > Time Spent: 8h 20m > Remaining Estimate: 0h > > Right now it will do a seek to the position , read and then seek back to the > old position. (As per the impl in the super class) > In HBase kind of workloads we rely mostly on short preads. (like 64 KB size > by default). So would be ideal to support a pure pos read API which will not > even keep the data in a buffer but will only read the required data as what > is asked for by the caller. (Not reading ahead more data as per the read size > config) > Allow an optional boolean config to be specified while opening file for read > using which buffered pread can be disabled. > FutureDataInputStreamBuilder openFile(Path path) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
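The `Optional` handling discussed in the review above can be shown in isolation. The sketch below mirrors the reviewed pattern with a plain `Map` standing in for Hadoop's `OpenFileParameters`; the option key and method names are illustrative, not the patch's actual API:

```java
import java.util.Map;
import java.util.Optional;

public class OptionSketch {
    // Stand-in for the real option key; HADOOP-17038 adds a config of this
    // shape, but treat the exact name here as an assumption.
    static final String DISABLE_BUFFERED_PREAD = "fs.azure.buffered.pread.disable";

    // Mirrors the reviewed pattern: consult the options only if present,
    // defaulting to buffered preads (false) otherwise.
    public static boolean bufferedPreadDisabled(Optional<Map<String, String>> options) {
        return options
                .map(o -> Boolean.parseBoolean(o.getOrDefault(DISABLE_BUFFERED_PREAD, "false")))
                .orElse(false);
    }

    public static void main(String[] args) {
        System.out.println(bufferedPreadDisabled(Optional.empty())); // false
        System.out.println(bufferedPreadDisabled(
                Optional.of(Map.of(DISABLE_BUFFERED_PREAD, "true")))); // true
    }
}
```

As the review comment notes, `Optional` can be clumsy, but here it keeps the "no options supplied" path explicit instead of threading a nullable parameter through.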
[GitHub] [hadoop] fengnanli commented on pull request #2605: HDFS-15423 RBF: WebHDFS create shouldn't choose DN from all sub-clusters
fengnanli commented on pull request #2605: URL: https://github.com/apache/hadoop/pull/2605#issuecomment-779405236 Rebase on latest trunk and force push.
[jira] [Work logged] (HADOOP-17109) Replace Guava base64Url and base64 with Java8+ base64
[ https://issues.apache.org/jira/browse/HADOOP-17109?focusedWorklogId=552649&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552649 ] ASF GitHub Bot logged work on HADOOP-17109: --- Author: ASF GitHub Bot Created on: 15/Feb/21 18:02 Start Date: 15/Feb/21 18:02 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2703: URL: https://github.com/apache/hadoop/pull/2703#issuecomment-779378107 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 40s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 12s | | trunk passed | | +1 :green_heart: | compile | 0m 20s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 21s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | mvnsite | 0m 20s | | trunk passed | | +1 :green_heart: | shadedclient | 48m 33s | | branch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 0m 26s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 24s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 15s | | the patch passed | | +1 :green_heart: | compile | 0m 15s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 15s | | the patch passed | | +1 :green_heart: | compile | 0m 14s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 0m 14s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 13s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | shadedclient | 12m 39s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 18s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | _ Other Tests _ | | +1 :green_heart: | unit | 0m 20s | | hadoop-build-tools in the patch passed. | | +1 :green_heart: | asflicense | 0m 34s | | The patch does not generate ASF License warnings. 
| | | | 68m 18s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2703/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2703 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux f455a51d93c9 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / bad6038a487 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2703/1/testReport/ | | Max. process+thread count | 537 (vs. ulimit of 5500) | | modules | C: hadoop-build-tools U: hadoop-build-tools | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2703/1/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2703: HADOOP-17109. add guava BaseEncoding to illegalClasses
[jira] [Work logged] (HADOOP-16810) Increase entropy to improve cryptographic randomness on precommit Linux VMs
[ https://issues.apache.org/jira/browse/HADOOP-16810?focusedWorklogId=552629&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552629 ] ASF GitHub Bot logged work on HADOOP-16810: --- Author: ASF GitHub Bot Created on: 15/Feb/21 17:34 Start Date: 15/Feb/21 17:34 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2697: URL: https://github.com/apache/hadoop/pull/2697#issuecomment-779365745 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 40s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 46s | | trunk passed | | +1 :green_heart: | mvnsite | 24m 43s | | trunk passed | | +1 :green_heart: | shadedclient | 13m 26s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 56s | | the patch passed | | +1 :green_heart: | mvnsite | 20m 54s | | the patch passed | | -1 :x: | shellcheck | 0m 1s | [/diff-patch-shellcheck.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2697/2/artifact/out/diff-patch-shellcheck.txt) | The patch generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) | | +1 :green_heart: | shelldocs | 0m 17s | | There were no new shelldocs issues. | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | -1 :x: | shadedclient | 1m 44s | | patch has errors when building and testing our client artifacts. 
| _ Other Tests _ | | -1 :x: | unit | 5m 48s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2697/2/artifact/out/patch-unit-root.txt) | root in the patch failed. | | +1 :green_heart: | asflicense | 0m 53s | | The patch does not generate ASF License warnings. | | | | 123m 20s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2697/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2697 | | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs | | uname | Linux ee2aea590a8b 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / bad6038a487 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2697/2/testReport/ | | Max. process+thread count | 535 (vs. ulimit of 5500) | | modules | C: . U: . | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2697/2/console | | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 | | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 552629) Time Spent: 1h 10m (was: 1h) > Increase entropy to improve cryptographic randomness on precommit Linux VMs > --- > > Key: HADOOP-16810 > URL: https://issues.apache.org/jira/browse/HADOOP-16810 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Blocker > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > > I was investigating a JUnit test (MAPREDUCE-7079 > :TestMRIntermediateDataEncryption is failing in precommit builds) that was > consistently hanging on Linux VMs and failing Mapreduce pre-builds. > I found that the test hangs slows or hangs indefinitely whenever Java reads > the random file. > I explored two dif
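For context on why low entropy stalls Java tests as described in this JIRA: `SecureRandom` instances seeded from the kernel's blocking pool wait until entropy is available. The JIRA's fix is OS-side (feeding the pool on the precommit VMs); the sketch below shows the JVM-side distinction, stated as general JDK behaviour rather than anything this patch changes:

```java
import java.security.SecureRandom;

public class EntropyDemo {
    // The platform-default SecureRandom (NativePRNG on Linux) reads
    // /dev/urandom and does not block, so this returns promptly even on an
    // entropy-starved VM.
    public static byte[] randomBytes(int n) {
        byte[] buf = new byte[n];
        new SecureRandom().nextBytes(buf);
        return buf;
    }

    public static void main(String[] args) {
        System.out.println(randomBytes(16).length); // 16
        // By contrast, SecureRandom.getInstanceStrong() maps to the blocking
        // pool on Linux (NativePRNGBlocking) and is the kind of call that can
        // hang when the kernel's entropy estimate is near zero -- deliberately
        // left commented out here:
        // SecureRandom strong = SecureRandom.getInstanceStrong();
        // A common JVM-wide workaround for test runs:
        //   -Djava.security.egd=file:/dev/./urandom
    }
}
```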
[GitHub] [hadoop] jbrennan333 merged pull request #2690: HDFS-15821. Add metrics for in-service datanodes
jbrennan333 merged pull request #2690: URL: https://github.com/apache/hadoop/pull/2690
[GitHub] [hadoop] jbrennan333 commented on pull request #2690: HDFS-15821. Add metrics for in-service datanodes
jbrennan333 commented on pull request #2690: URL: https://github.com/apache/hadoop/pull/2690#issuecomment-779355481 @zehaoc2 verified that if he builds hadoop before running the failed hadoop-hdfs-rbf tests, it does not fail.
[jira] [Commented] (HADOOP-17109) Replace Guava base64Url and base64 with Java8+ base64
[ https://issues.apache.org/jira/browse/HADOOP-17109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17284838#comment-17284838 ] Ahmed Hussein commented on HADOOP-17109: After revisiting this Jira, I do not think {{org.apache.commons.}} Base64 should be replaced. PR [#2703|https://github.com/apache/hadoop/pull/2703] is a straightforward change to prevent importing guava.base64 in future commits. The hadoop source code relies on {{org.apache.commons.}} for Base64. This PR is to add the {{com.google.common.io.BaseEncoding}} to illegal classes in order to prevent using the guava import in future commits. * This PR only touches the checkstyle configuration. * There are no occurrences of {{com.google.common.io.BaseEncoding}} in the code. > Replace Guava base64Url and base64 with Java8+ base64 > - > > Key: HADOOP-17109 > URL: https://issues.apache.org/jira/browse/HADOOP-17109 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > One important thing to not here as pointed out by [~jeagles] in [his comment > on the parent > task|https://issues.apache.org/jira/browse/HADOOP-17098?focusedCommentId=17147935&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17147935] > {quote}One note to be careful about is that base64 translation is not a > standard, so the two implementations could produce different results.
This > might matter in the case of serialization, persistence, or client server > different versions.{quote} > *Base64Url:* > {code:java} > Targets > Occurrences of 'base64Url' in project with mask '*.java' > Found Occurrences (6 usages found) > org.apache.hadoop.mapreduce (3 usages found) > CryptoUtils.java (3 usages found) > wrapIfNecessary(Configuration, FSDataOutputStream, boolean) (1 > usage found) > 138 + Base64.encodeBase64URLSafeString(iv) + "]"); > wrapIfNecessary(Configuration, InputStream, long) (1 usage found) > 183 + Base64.encodeBase64URLSafeString(iv) + "]"); > wrapIfNecessary(Configuration, FSDataInputStream) (1 usage found) > 218 + Base64.encodeBase64URLSafeString(iv) + "]"); > org.apache.hadoop.util (2 usages found) > KMSUtil.java (2 usages found) > toJSON(KeyVersion) (1 usage found) > 104 Base64.encodeBase64URLSafeString( > toJSON(EncryptedKeyVersion) (1 usage found) > 117 > .encodeBase64URLSafeString(encryptedKeyVersion.getEncryptedKeyIv())); > org.apache.hadoop.yarn.server.resourcemanager.webapp (1 usage found) > TestRMWebServicesAppsModification.java (1 usage found) > testAppSubmit(String, String) (1 usage found) > 837 .put("test", > Base64.encodeBase64URLSafeString("value12".getBytes("UTF8"))); > {code} > *Base64:* > {code:java} > Targets > Occurrences of 'base64;' in project with mask '*.java' > Found Occurrences (51 usages found) > org.apache.hadoop.crypto.key.kms (1 usage found) > KMSClientProvider.java (1 usage found) > 20 import org.apache.commons.codec.binary.Base64; > org.apache.hadoop.crypto.key.kms.server (1 usage found) > KMS.java (1 usage found) > 22 import org.apache.commons.codec.binary.Base64; > org.apache.hadoop.fs (2 usages found) > XAttrCodec.java (2 usages found) > 23 import org.apache.commons.codec.binary.Base64; > 56 BASE64; > org.apache.hadoop.fs.azure (3 usages found) > AzureBlobStorageTestAccount.java (1 usage found) > 23 import com.microsoft.azure.storage.core.Base64; > BlockBlobAppendStream.java (1 usage found) > 50 
import org.apache.commons.codec.binary.Base64; > ITestBlobDataValidation.java (1 usage found) > 50 import com.microsoft.azure.storage.core.Base64; > org.apache.hadoop.fs.azurebfs (2 usages found) > AzureBlobFileSystemStore.java (1 usage found) > 99 import org.apache.hadoop.fs.azurebfs.utils.Base64; > TestAbfsConfigurationFieldsValidation.java (1 usage found) > 34 import org.apache.hadoop.fs.azurebfs.utils.Base64; > org.apache.hadoop.fs.azurebfs.diagnostics (2 usages found) > Base64StringConfigurationBasicValidator.java (1 usage found) > 26 import org.apache.hadoop.fs.azurebfs.utils.Base64; > TestConfigurationValidators.java (1 usage found) > 25 import org.apache.hadoop.fs.azurebfs.utils.Base64; > org.apache.hadoop.fs.azurebfs.extensions (2 usages
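For context, the migration the Jira proposes maps onto the JDK's built-in `java.util.Base64` (Java 8+). A minimal sketch (class name is illustrative) of JDK equivalents for the call sites listed above — note that commons-codec's `encodeBase64URLSafeString` omits `=` padding, so the JDK encoder needs `withoutPadding()` to produce matching strings; this is exactly the kind of implementation difference the quoted warning is about:

```java
import java.util.Base64;

// Sketch of JDK replacements for the commons-codec Base64 call sites
// flagged in the occurrence list above (not committed Hadoop code).
final class Base64Migration {
    private Base64Migration() {}

    // Equivalent of commons-codec Base64.encodeBase64URLSafeString(bytes):
    // URL-safe alphabet ('-' and '_' instead of '+' and '/'), no '=' padding.
    static String urlSafe(byte[] bytes) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Plain RFC 4648 Base64, as used by the KMS/XAttr call sites.
    static String standard(byte[] bytes) {
        return Base64.getEncoder().encodeToString(bytes);
    }

    // The JDK URL decoder accepts unpadded input, so round-tripping works.
    static byte[] decodeUrlSafe(String s) {
        return Base64.getUrlDecoder().decode(s);
    }
}
```

Keeping the URL-safe and standard variants separate matters because the two alphabets are not interchangeable when the encoded value ends up in a URL or JSON key, as in `KMSUtil.toJSON`.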
[GitHub] [hadoop] tomscut commented on pull request #2668: HDFS-15808. Add metrics for FSNamesystem read/write lock hold long time
tomscut commented on pull request #2668: URL: https://github.com/apache/hadoop/pull/2668#issuecomment-779345021 Failed junit tests hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap Those failed unit tests were unrelated to the change. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17109) Replace Guava base64Url and base64 with Java8+ base64
[ https://issues.apache.org/jira/browse/HADOOP-17109?focusedWorklogId=552614&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552614 ] ASF GitHub Bot logged work on HADOOP-17109: --- Author: ASF GitHub Bot Created on: 15/Feb/21 16:53 Start Date: 15/Feb/21 16:53 Worklog Time Spent: 10m Work Description: amahussein opened a new pull request #2703: URL: https://github.com/apache/hadoop/pull/2703 [HADOOP-17109: Replace Guava base64Url and base64 with Java8+ base64](https://issues.apache.org/jira/browse/HADOOP-17109) The hadoop source code relies on `org.apache.commons.` for Base64. This PR is to add the `com.google.common.io.BaseEncoding` to illegal classes in order to prevent using the guava import in future commits. - This PR only touches the checkstyle configuration. - There are no occurrences of `com.google.common.io.BaseEncoding` in the code. Issue Time Tracking --- Worklog Id: (was: 552614) Time Spent: 0.5h (was: 20m)
[GitHub] [hadoop] amahussein opened a new pull request #2703: HADOOP-17109. add guava BaseEncoding to illegalClasses
amahussein opened a new pull request #2703: URL: https://github.com/apache/hadoop/pull/2703 [HADOOP-17109: Replace Guava base64Url and base64 with Java8+ base64](https://issues.apache.org/jira/browse/HADOOP-17109) The hadoop source code relies on `org.apache.commons.` for Base64. This PR adds `com.google.common.io.BaseEncoding` to the illegal classes to prevent use of the guava import in future commits. - This PR only touches the checkstyle configuration. - There are no occurrences of `com.google.common.io.BaseEncoding` in the code.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2668: HDFS-15808. Add metrics for FSNamesystem read/write lock hold long time
hadoop-yetus commented on pull request #2668: URL: https://github.com/apache/hadoop/pull/2668#issuecomment-779330421 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 39s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 39s | | trunk passed | | +1 :green_heart: | compile | 1m 20s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 1m 13s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 1m 3s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 19s | | trunk passed | | +1 :green_heart: | shadedclient | 15m 56s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 27s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +0 :ok: | spotbugs | 3m 6s | | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 3m 4s | | trunk passed | | -0 :warning: | patch | 3m 22s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 12s | | the patch passed | | +1 :green_heart: | compile | 1m 16s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 1m 16s | | the patch passed | | +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 1m 7s | | the patch passed | | +1 :green_heart: | checkstyle | 0m 58s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 14s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 4s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 50s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 1m 20s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | findbugs | 3m 9s | | the patch passed | _ Other Tests _ | | -1 :x: | unit | 194m 59s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2668/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 44s | | The patch does not generate ASF License warnings. 
| | | | 281m 25s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2668/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2668 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 0c1c25c99215 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / c3134ab3a99 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2668/4/testReport/ | | Max. process+thread count | 3175 (vs. ulimit of 5
[jira] [Work logged] (HADOOP-17126) implement non-guava Precondition checkNotNull
[ https://issues.apache.org/jira/browse/HADOOP-17126?focusedWorklogId=552603&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552603 ] ASF GitHub Bot logged work on HADOOP-17126: --- Author: ASF GitHub Bot Created on: 15/Feb/21 15:51 Start Date: 15/Feb/21 15:51 Worklog Time Spent: 10m Work Description: amahussein commented on pull request #2143: URL: https://github.com/apache/hadoop/pull/2143#issuecomment-779308586 @steveloughran and @daryn-sharp, Are you guys okay with the current version? Issue Time Tracking --- Worklog Id: (was: 552603) Time Spent: 50m (was: 40m) > implement non-guava Precondition checkNotNull > - > > Key: HADOOP-17126 > URL: https://issues.apache.org/jira/browse/HADOOP-17126 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Major > Labels: pull-request-available > Attachments: HADOOP-17126.001.patch, HADOOP-17126.002.patch > > Time Spent: 50m > Remaining Estimate: 0h > > In order to replace Guava Preconditions, we need to implement our own > versions of the API. > This Jira is to create {{checkNotNull}} in a new package dubbed {{unguava}}. > +The plan is as follows+ > * create a new {{package org.apache.hadoop.util.unguava;}} > * create class {{Validate}} > * implement {{org.apache.hadoop.util.unguava.Validate}} with the > following interface: > ** {{checkNotNull(final T obj)}} > ** {{checkNotNull(final T reference, final Object errorMessage)}} > ** {{checkNotNull(final T obj, final String message, final Object... values)}} > ** {{checkNotNull(final T obj, final Supplier msgSupplier)}} > * Guava Preconditions used {{String.lenientFormat}}, which suppresses > exceptions caused by string formatting of the exception message. So, in > order to avoid changing the behavior, the implementation catches exceptions > triggered by building the message (IllegalFormat, InsufficientArg, > NullPointer, etc.) > * After merging the new class, we can replace {{guava.Preconditions.checkNotNull}} > by {{unguava.Validate.checkNotNull}} > * We need the change to go into trunk, 3.1, 3.2, and 3.3 > > Similar Jiras will be created to implement checkState, checkArgument, > checkIndex -- This message was sent by Atlassian Jira (v8.3.4#803005)
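The plan quoted above is concrete enough to sketch. The following is only an illustration of the described behavior — the class name follows the Jira's proposal, the `Supplier` overload is given a `String` type parameter here, and the lenient-format behavior is approximated by catching formatting failures; it is not the code that was eventually merged:

```java
import java.util.function.Supplier;

// Sketch of the checkNotNull interface listed in HADOOP-17126.
final class Validate {
    private Validate() {}

    static <T> T checkNotNull(final T obj) {
        if (obj == null) {
            throw new NullPointerException();
        }
        return obj;
    }

    static <T> T checkNotNull(final T reference, final Object errorMessage) {
        if (reference == null) {
            throw new NullPointerException(String.valueOf(errorMessage));
        }
        return reference;
    }

    // Like Guava's lenient formatting: a bad format string or missing
    // argument must not replace the NPE with an IllegalFormatException.
    static <T> T checkNotNull(final T obj, final String message,
                              final Object... values) {
        if (obj == null) {
            String msg;
            try {
                msg = String.format(message, values);
            } catch (RuntimeException e) { // IllegalFormat, NullPointer, ...
                msg = message + " " + java.util.Arrays.toString(values);
            }
            throw new NullPointerException(msg);
        }
        return obj;
    }

    // Lazy message construction; supplier failures are swallowed too.
    static <T> T checkNotNull(final T obj, final Supplier<String> msgSupplier) {
        if (obj == null) {
            String msg;
            try {
                msg = msgSupplier.get();
            } catch (RuntimeException e) {
                msg = null;
            }
            throw new NullPointerException(msg);
        }
        return obj;
    }
}
```

The try/catch around message construction is the point of the exercise: with plain `String.format`, an invalid format string would throw `IllegalFormatException` and mask the null check that the caller actually cares about.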
[GitHub] [hadoop] amahussein commented on pull request #2143: HADOOP-17126. implement un-guava Precondition checkNotNull
amahussein commented on pull request #2143: URL: https://github.com/apache/hadoop/pull/2143#issuecomment-779308586 @steveloughran and @daryn-sharp, Are you guys okay with the current version?
[jira] [Comment Edited] (HADOOP-16810) Increase entropy to improve cryptographic randomness on precommit Linux VMs
[ https://issues.apache.org/jira/browse/HADOOP-16810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17284037#comment-17284037 ] Ahmed Hussein edited comment on HADOOP-16810 at 2/15/21, 3:47 PM: -- [~aajisaka] I remembered you made some changes to Yetus/hadoop in the past, so I thought to get your feedback on the changes in the PR. In [my comment on MAPREDUCE-7079|https://issues.apache.org/jira/browse/MAPREDUCE-7079?focusedCommentId=17013234&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17013234] {quote}This test case has been failing forever. - When it times out, MRAppMaster and some YarnChild processes remain running in the background. Therefore, the JVM running the tests fails due to OOM. No one notices that this unit test has failed because the QA reports the unit tests that failed, but not those that timed out. - It works on Mac OS X, but never on Linux running in VirtualBox. It only works on the latter by disabling MRJobConfig.MR_ENCRYPTED_INTERMEDIATE_DATA.{quote} In this PR: - the {{DOCKER_EXTRAARGS}} are added to {{hadoop.sh}} to pass the random mount - -the version 0.10.0 is not on the [release page|https://yetus.apache.org/downloads/]. So, this is upgrading the Yetus to a released version 0.13.0.- - adding the mount parameter to {{start-build-env.sh}} Resources: * [Yetus Advanced Precommit - important-variables|https://yetus.apache.org/documentation/0.11.1/precommit-advanced/#important-variables] * [DOCKER_EXTRAARGS usage in Yetus code|https://github.com/apache/yetus/search?q=DOCKER_EXTRAARGS] We can try the new changes anyway as we are still dealing with the entropy problem. CC: [~ebadger] [~ste...@apache.org] > Increase entropy to improve cryptographic randomness on precommit Linux VMs > --- > > Key: HADOOP-16810 > URL: https://issues.apache.org/jira/browse/HADOOP-16810 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Blocker > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > I was investigating a JUnit test (MAPREDUCE-7079: > TestMRIntermediateDataEncryption is failing in precommit builds) that was > consistently hanging on Linux VMs and failing Mapreduce pre-builds. > I found that the test slows or hangs indefinitely whenever Java reads > the random file. 
> I explored two different ways to get that test case to work properly on my > local Linux VM running rel7: > # To install "haveged" and "rng-tools" on the virtual machine running Rel7. > Then, start rngd service {{sudo service rngd start}} . This will fix the > problem for all the components on the image including java, native and any > other component. > # Change java configuration to load urandom > {code:bash} > sudo vim $JAVA_HOME/jre/lib/security/java.security > ## Change the line “securerandom.source=file:/dev/random” to read: > securerandom.source=file:/dev/./urandom > {code} > The first solution is better because this will fix the problem for everything > that requires SSL/TLS or other services that depend upon encryption. > Since the precommit build runs on Docker, then it would be best
[jira] [Work logged] (HADOOP-16810) Increase entropy to improve cryptographic randomness on precommit Linux VMs
[ https://issues.apache.org/jira/browse/HADOOP-16810?focusedWorklogId=552599&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552599 ] ASF GitHub Bot logged work on HADOOP-16810: --- Author: ASF GitHub Bot Created on: 15/Feb/21 15:45 Start Date: 15/Feb/21 15:45 Worklog Time Spent: 10m Work Description: amahussein commented on pull request #2697: URL: https://github.com/apache/hadoop/pull/2697#issuecomment-779304525 > Let's upgrade Yetus 0.13.0 in the Jenkinsfile instead of yetus-wrapper to use Yetus 0.13.0 in the pre-commit job. > Note that Yetus 0.13.0 dropped Python 2 support and I want to merge #1738 first. Thanks @aajisaka ! I removed the version change. Hopefully, the `DOCKER_EXTRAARGS` will be picked correctly by yetus. Issue Time Tracking --- Worklog Id: (was: 552599) Time Spent: 1h (was: 50m) > Increase entropy to improve cryptographic randomness on precommit Linux VMs > --- > > Key: HADOOP-16810 > URL: https://issues.apache.org/jira/browse/HADOOP-16810 > Project: Hadoop Common > Issue Type: Bug >Reporter: Ahmed Hussein >Assignee: Ahmed Hussein >Priority: Blocker > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > I was investigating a JUnit test (MAPREDUCE-7079: > TestMRIntermediateDataEncryption is failing in precommit builds) that was > consistently hanging on Linux VMs and failing Mapreduce pre-builds. > I found that the test slows or hangs indefinitely whenever Java reads > the random file. > I explored two different ways to get that test case to work properly on my > local Linux VM running rel7: > # To install "haveged" and "rng-tools" on the virtual machine running Rel7. > Then, start the rngd service: {{sudo service rngd start}}. This will fix the > problem for all the components on the image including java, native and any > other component. > # Change the java configuration to load urandom: > {code:bash} > sudo vim $JAVA_HOME/jre/lib/security/java.security > ## Change the line “securerandom.source=file:/dev/random” to read: > securerandom.source=file:/dev/./urandom > {code} > The first solution is better because it will fix the problem for everything > that requires SSL/TLS or other services that depend upon encryption. > Since the precommit build runs on Docker, it would be best to mount > {{/dev/urandom}} from the host as {{/dev/random}} into the container: > {code:java} > docker run -v /dev/urandom:/dev/random > {code} > For Yetus, we need to add the mount to the {{DOCKER_EXTRAARGS}} as follows: > {code:java} > DOCKER_EXTRAARGS+=("-v" "/dev/urandom:/dev/random") > {code} > ...
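For background on why the blocking read matters to Java tests, here is a small sketch (class name is illustrative; the behavior described assumes the default Linux JDK setup, where the `NativePRNG` provider backs `new SecureRandom()`): `nextBytes()` draws from the non-blocking pool, while `generateSeed()` re-reads the configured `securerandom.source` (by default `file:/dev/random`) and is the call that can stall on an entropy-starved VM:

```java
import java.security.SecureRandom;
import java.security.Security;

// Sketch illustrating the securerandom.source behavior discussed in
// HADOOP-16810; not Hadoop code.
final class EntropyCheck {
    private EntropyCheck() {}

    // The seed source the Jira suggests repointing at /dev/./urandom;
    // typically "file:/dev/random" out of the box (may be null on some JDKs).
    static String seedSource() {
        return Security.getProperty("securerandom.source");
    }

    // With the default NativePRNG on Linux, nextBytes() mixes /dev/urandom
    // and does not block, even when the kernel's entropy estimate is low.
    static byte[] randomBytes(int n) {
        byte[] buf = new byte[n];
        new SecureRandom().nextBytes(buf);
        return buf;
    }

    // By contrast, new SecureRandom().generateSeed(n) reads the blocking
    // seed source above — that is the path that hangs the precommit tests,
    // and what the /dev/urandom:/dev/random bind mount works around.
}
```

This is why the Docker bind mount fixes every consumer at once: anything that opens `/dev/random` inside the container actually reads the host's non-blocking `/dev/urandom`.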
[GitHub] [hadoop] amahussein commented on pull request #2697: HADOOP-16810. Increase entropy on precommit Linux VMs
amahussein commented on pull request #2697: URL: https://github.com/apache/hadoop/pull/2697#issuecomment-779304525 > Let's upgrade Yetus 0.13.0 in the Jenkinsfile instead of yetus-wrapper to use Yetus 0.13.0 in the pre-commit job. > Note that Yetus 0.13.0 dropped Python 2 support and I want to merge #1738 first. Thanks @aajisaka ! I removed the version change. Hopefully, the `DOCKER_EXTRAARGS` will be picked correctly by yetus.
[jira] [Work logged] (HADOOP-16748) Support Python 3 in dev-support scripts
[ https://issues.apache.org/jira/browse/HADOOP-16748?focusedWorklogId=552597&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552597 ] ASF GitHub Bot logged work on HADOOP-16748: --- Author: ASF GitHub Bot Created on: 15/Feb/21 15:43 Start Date: 15/Feb/21 15:43 Worklog Time Spent: 10m Work Description: amahussein commented on a change in pull request #1738: URL: https://github.com/apache/hadoop/pull/1738#discussion_r576279272 ## File path: dev-support/determine-flaky-tests-hadoop.py ## @@ -35,22 +35,8 @@ # at the failed test for the specific run is necessary. # import sys -import platform -sysversion = sys.hexversion -onward30 = False -if sysversion < 0x020600F0: - sys.exit("Minimum supported python version is 2.6, the current version is " + - "Python" + platform.python_version()) - -if sysversion == 0x03F0: - sys.exit("There is a known bug with Python" + platform.python_version() + - ", please try a different version"); - -if sysversion < 0x0300: - import urllib2 -else: - onward30 = True - import urllib.request + Review comment: Is `determine-flaky-tests-hadoop.py` being used? I thought it does not work, since the developers are manually filing the flaky tests. Issue Time Tracking --- Worklog Id: (was: 552597) Time Spent: 3h 50m (was: 3h 40m) > Support Python 3 in dev-support scripts > --- > > Key: HADOOP-16748 > URL: https://issues.apache.org/jira/browse/HADOOP-16748 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Time Spent: 3h 50m > Remaining Estimate: 0h >
[GitHub] [hadoop] amahussein commented on a change in pull request #1738: HADOOP-16748. Support Python 3 in dev-support scripts.
amahussein commented on a change in pull request #1738: URL: https://github.com/apache/hadoop/pull/1738#discussion_r576279272 ## File path: dev-support/determine-flaky-tests-hadoop.py ## @@ -35,22 +35,8 @@ # at the failed test for the specific run is necessary. # import sys -import platform -sysversion = sys.hexversion -onward30 = False -if sysversion < 0x020600F0: - sys.exit("Minimum supported python version is 2.6, the current version is " + - "Python" + platform.python_version()) - -if sysversion == 0x03F0: - sys.exit("There is a known bug with Python" + platform.python_version() + - ", please try a different version"); - -if sysversion < 0x0300: - import urllib2 -else: - onward30 = True - import urllib.request + Review comment: Is `determine-flaky-tests-hadoop.py` being used? I thought it does not work, since the developers are manually filing the flaky tests.
[jira] [Work logged] (HADOOP-16810) Increase entropy to improve cryptographic randomness on precommit Linux VMs
[ https://issues.apache.org/jira/browse/HADOOP-16810?focusedWorklogId=552589&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552589 ] ASF GitHub Bot logged work on HADOOP-16810: --- Author: ASF GitHub Bot Created on: 15/Feb/21 15:32 Start Date: 15/Feb/21 15:32 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2697: URL: https://github.com/apache/hadoop/pull/2697#issuecomment-779296759 (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2697/2/console in case of problems. Issue Time Tracking --- Worklog Id: (was: 552589) Time Spent: 50m (was: 40m)
[GitHub] [hadoop] hadoop-yetus commented on pull request #2697: HADOOP-16810. Increase entropy on precommit Linux VMs
hadoop-yetus commented on pull request #2697: URL: https://github.com/apache/hadoop/pull/2697#issuecomment-779296759 (!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2697/2/console in case of problems.
[jira] [Work logged] (HADOOP-16202) Stabilize openFile() and adopt internally
[ https://issues.apache.org/jira/browse/HADOOP-16202?focusedWorklogId=552586&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552586 ] ASF GitHub Bot logged work on HADOOP-16202: --- Author: ASF GitHub Bot Created on: 15/Feb/21 15:30 Start Date: 15/Feb/21 15:30 Worklog Time Spent: 10m Work Description: hadoop-yetus commented on pull request #2584: URL: https://github.com/apache/hadoop/pull/2584#issuecomment-779295371 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 14m 30s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 16 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 6s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 21s | | trunk passed | | +1 :green_heart: | compile | 21m 19s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 19m 3s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 3m 59s | | trunk passed | | +1 :green_heart: | mvnsite | 7m 6s | | trunk passed | | +1 :green_heart: | shadedclient | 24m 48s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 5m 25s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 6m 13s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +0 :ok: | spotbugs | 1m 16s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 10m 52s | | trunk passed | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 26s | | the patch passed | | +1 :green_heart: | compile | 20m 3s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 20m 2s | | the patch passed | | +1 :green_heart: | compile | 18m 1s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 18m 1s | | the patch passed | | -0 :warning: | checkstyle | 4m 0s | [/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2584/2/artifact/out/diff-checkstyle-root.txt) | root: The patch generated 5 new + 821 unchanged - 1 fixed = 826 total (was 822) | | +1 :green_heart: | mvnsite | 7m 18s | | the patch passed | | -1 :x: | whitespace | 0m 0s | [/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2584/2/artifact/out/whitespace-eol.txt) | The patch has 10 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | +1 :green_heart: | shadedclient | 13m 8s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 5m 52s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 6m 46s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | findbugs | 11m 55s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 17m 49s | | hadoop-common in the patch passed. | | +1 :green_heart: | unit | 4m 20s | | hadoop-yarn-common in the patch passed. | | +1 :green_heart: | unit | 7m 17s | | hadoop-mapreduce-client-core in the patch passed. | | +1 :green_heart: | unit | 8m 29s | | hadoop-mapreduce-client-app in the patch passed. 
| | +1 :green_heart: | unit | 12m 16s | | hadoop-distcp in the patch passed. | | +1 :green_heart: | unit | 0m 59s | | hadoop-mapreduce-examples in the patch passed. | | +1 :green_heart: | unit | 6m 38s | | hadoop-streaming in the patch passed. | | +1 :green_heart: | unit | 2m 10s | | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 55s | | The patch does not generate ASF License warnings. | | | | 295m 59s | | | | Subsystem | Report/Notes | |--:|:-| | Docker |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2584: HADOOP-16202. Enhance openFile()
[GitHub] [hadoop] hadoop-yetus commented on pull request #2702: HDFS-15836. RBF: Fix contract tests after HADOOP-13327
hadoop-yetus commented on pull request #2702: URL: https://github.com/apache/hadoop/pull/2702#issuecomment-779198376

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 38s | | Docker mode activated. |
| | _ Prechecks _ | | | |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 2 new or modified test files. |
| | _ trunk Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 34m 17s | | trunk passed |
| +1 :green_heart: | shadedclient | 47m 48s | | branch has no errors when building and testing our client artifacts. |
| | _ Patch Compile Tests _ | | | |
| +1 :green_heart: | mvninstall | 0m 34s | | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | shadedclient | 13m 42s | | patch has no errors when building and testing our client artifacts. |
| | _ Other Tests _ | | | |
| +1 :green_heart: | unit | 0m 37s | | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 0m 35s | | The patch does not generate ASF License warnings. |
| | | 66m 35s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2702/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2702 |
| Optional Tests | dupname asflicense unit xml |
| uname | Linux 87c21ab0f2b7 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / c3134ab3a99 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2702/1/testReport/ |
| Max. process+thread count | 667 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2702/1/console |
| versions | git=2.25.1 maven=3.6.3 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17511) Add an Audit plugin point for S3A auditing/context
[ https://issues.apache.org/jira/browse/HADOOP-17511?focusedWorklogId=552547&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552547 ] ASF GitHub Bot logged work on HADOOP-17511: --- Author: ASF GitHub Bot Created on: 15/Feb/21 12:39 Start Date: 15/Feb/21 12:39 Worklog Time Spent: 10m Work Description: bgaborg commented on a change in pull request #2675: URL: https://github.com/apache/hadoop/pull/2675#discussion_r576144264 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/LoggingAuditService.java ## @@ -0,0 +1,279 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.fs.s3a.audit; + +import javax.annotation.Nullable; +import java.io.IOException; +import java.time.LocalDateTime; +import java.time.format.DateTimeFormatter; +import java.time.format.DateTimeFormatterBuilder; +import java.time.temporal.ChronoField; +import java.util.UUID; +import java.util.concurrent.atomic.AtomicLong; + +import com.amazonaws.AmazonWebServiceRequest; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.s3a.Statistic; +import org.apache.hadoop.fs.statistics.impl.IOStatisticsStore; + +import static org.apache.hadoop.fs.s3a.impl.HeaderProcessing.HEADER_REFERRER; +import static org.apache.hadoop.fs.statistics.StoreStatisticNames.SUFFIX_FAILURES; + +/** + * Logging audit serves logs at INFO. + */ +public final class LoggingAuditService +extends AbstractOperationAuditService { + + /** + * What to look for in logs for ops outside any audit. + * {@value}. + */ + public static final String UNAUDITED_OPERATION = "unaudited operation"; + + /** + * This is where the context gets logged to. + */ + private static final Logger LOG = + LoggerFactory.getLogger(LoggingAuditService.class); + + /** + * Should OOB Spans be rejected? Review comment: OOB span - the naming seems a little confusing. Out of band spans? ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/audit/AuditConstants.java ## @@ -0,0 +1,22 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.s3a.audit; + +public class AuditConstants { Review comment: What is the purpose of this empty class? ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ## @@ -4532,8 +4814,10 @@ private HeaderProcessing getHeaderProcessing() { public RemoteIterator listLocatedStatus(final Path f, final PathFilter filter) throws FileNotFoundException, IOException { -entryPoint(INVOCATION_LIST_LOCATED_STATUS); Path path = qualify(f); +// Unless that iterator is closed, the iterator wouldn't be closed +// there. +entryPoint(INVOCATION_LIST_LOCATED_STATUS, path); Review comment: you are not using try with resource here. maybe it's justified, I'm just pointing it out because maybe it's needed. ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ## @@ -1311,8 +1408,36 @@ private S3ObjectAttributes createObjectAttributes( public FSDataOutputStream create(Path f, FsPermission permission, boolean overwrite, int bufferSi
[GitHub] [hadoop] bgaborg commented on a change in pull request #2675: HADOOP-17511. Add audit/telemetry logging to S3A connector
## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ## @@ -4532,8 +4814,10 @@ private HeaderProcessing getHeaderProcessing() { public RemoteIterator listLocatedStatus(final Path f, final PathFilter filter) throws FileNotFoundException, IOException { -entryPoint(INVOCATION_LIST_LOCATED_STATUS); Path path = qualify(f); +// Unless that iterator is closed, the iterator wouldn't be closed +// there. +entryPoint(INVOCATION_LIST_LOCATED_STATUS, path); Review comment: you are not using try with resource here. maybe it's justified, I'm just pointing it out because maybe it's needed. ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java ## @@ -1311,8 +1408,36 @@ private S3ObjectAttributes createObjectAttributes( public FSDataOutputStream create(Path f, FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, Progressable progress) throws IOException { -entryPoint(INVOCATION_CREATE); final Path path = qualify(f); +final AuditSpan span = entryPoint(INVOCATION_CREATE, path); +return innerCreateFile(path, permission, overwrite, bufferSize, replication, +blockSize, progress); + + } Review comment: nit: formatting, newline ## File path: hadoop-tools/
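The reviewer's try-with-resources point can be illustrated with a minimal sketch. Note that `AuditSpan` and `entryPoint()` below are simplified stand-ins for illustration, not the actual Hadoop classes from this patch; the sketch only shows why an `AutoCloseable` span pairs naturally with try-with-resources.

```java
// Minimal sketch of the try-with-resources pattern the reviewer suggests.
// AuditSpan and entryPoint() are simplified stand-ins, not Hadoop's classes.
public class SpanDemo {

    // A span is active from creation until close(); making it AutoCloseable
    // lets try-with-resources guarantee deactivation even on exceptions.
    static class AuditSpan implements AutoCloseable {
        private final String operation;
        private boolean closed;

        AuditSpan(String operation) {
            this.operation = operation;
        }

        boolean isClosed() {
            return closed;
        }

        @Override
        public void close() {
            closed = true; // deactivate the span when the operation completes
        }
    }

    static AuditSpan entryPoint(String operation) {
        // a real entry point would also record statistics for the operation
        return new AuditSpan(operation);
    }

    public static void main(String[] args) {
        AuditSpan observed;
        try (AuditSpan span = entryPoint("create")) {
            observed = span; // the filesystem operation would run here
        }
        System.out.println("span closed: " + observed.isClosed()); // prints "span closed: true"
    }
}
```

This also suggests why `listLocatedStatus()` may be the justified exception the reviewer allows for: its span must remain open after the method returns, until the returned iterator is closed, so a try-with-resources block scoped to the method body would close it too early.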
[GitHub] [hadoop] hadoop-yetus commented on pull request #1: MAPREDUCE-6096.SummarizedJob Class Improvment
hadoop-yetus commented on pull request #1: URL: https://github.com/apache/hadoop/pull/1#issuecomment-779162454 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 33s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 52s | | trunk passed | | +1 :green_heart: | compile | 0m 43s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | compile | 0m 35s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | checkstyle | 0m 40s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 41s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 8s | | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 24s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 21s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +0 :ok: | spotbugs | 1m 22s | | Used deprecated FindBugs config; considering switching to SpotBugs. 
| | +1 :green_heart: | findbugs | 1m 20s | | trunk passed | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 33s | | the patch passed | | +1 :green_heart: | compile | 0m 33s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javac | 0m 33s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | -0 :warning: | checkstyle | 0m 28s | [/diff-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1/1/artifact/out/diff-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt) | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core: The patch generated 9 new + 138 unchanged - 9 fixed = 147 total (was 147) | | +1 :green_heart: | mvnsite | 0m 33s | | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 14m 46s | | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 | | +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | +1 :green_heart: | findbugs | 1m 32s | | the patch passed | _ Other Tests _ | | +1 :green_heart: | unit | 7m 33s | | hadoop-mapreduce-client-core in the patch passed. | | +1 :green_heart: | asflicense | 0m 32s | | The patch does not generate ASF License warnings. 
| | | | 85m 42s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 9fa6df5c667f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / c3134ab3a99 | | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1/1/testReport/ | | Max. process+thread count | 1340 (vs. ulimit of 5500) | | modules | C: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core U: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core | | Console output | https://c
[GitHub] [hadoop] aajisaka opened a new pull request #2702: HDFS-15836. RBF: Fix contract tests after HADOOP-13327
aajisaka opened a new pull request #2702: URL: https://github.com/apache/hadoop/pull/2702 JIRA: https://issues.apache.org/jira/browse/HDFS-15836 Fix the following tests:
- TestRouterHDFSContractCreate
- TestRouterHDFSContractCreateSecure
- TestRouterWebHDFSContractCreate
[jira] [Work logged] (HADOOP-16748) Support Python 3 in dev-support scripts
[ https://issues.apache.org/jira/browse/HADOOP-16748?focusedWorklogId=552500&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552500 ] ASF GitHub Bot logged work on HADOOP-16748: --- Author: ASF GitHub Bot Created on: 15/Feb/21 09:49 Start Date: 15/Feb/21 09:49 Worklog Time Spent: 10m Work Description: aajisaka commented on pull request #1738: URL: https://github.com/apache/hadoop/pull/1738#issuecomment-779100724 Now the patch is ready to go. Issue Time Tracking --- Worklog Id: (was: 552500) Time Spent: 3h 40m (was: 3.5h) > Support Python 3 in dev-support scripts > --- > > Key: HADOOP-16748 > URL: https://issues.apache.org/jira/browse/HADOOP-16748 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Major > Labels: pull-request-available > Time Spent: 3h 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [hadoop] aajisaka commented on pull request #1738: HADOOP-16748. Support Python 3 in dev-support scripts.
aajisaka commented on pull request #1738: URL: https://github.com/apache/hadoop/pull/1738#issuecomment-779100724 Now the patch is ready to go.
[GitHub] [hadoop] aajisaka commented on pull request #2696: HDFS-15834. Remove the usage of org.apache.log4j.Level
aajisaka commented on pull request #2696: URL: https://github.com/apache/hadoop/pull/2696#issuecomment-779098100 Filed https://issues.apache.org/jira/browse/HDFS-15836 to fix the test failures.
[jira] [Updated] (HADOOP-13327) Add OutputStream + Syncable to the Filesystem Specification
[ https://issues.apache.org/jira/browse/HADOOP-13327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13327: --- Fix Version/s: 3.4.0 > Add OutputStream + Syncable to the Filesystem Specification > --- > > Key: HADOOP-13327 > URL: https://issues.apache.org/jira/browse/HADOOP-13327 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > Fix For: 3.3.1, 3.4.0 > > Attachments: HADOOP-13327-002.patch, HADOOP-13327-003.patch, > HADOOP-13327-branch-2-001.patch > > Time Spent: 11h 10m > Remaining Estimate: 0h > > Write down what a Filesystem output stream should do. While the core API is > defined in Java, that doesn't say what's expected about visibility, > durability, etc. — and the Hadoop Syncable interface is entirely ours to define.
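The visibility/durability distinction the HADOOP-13327 specification writes down can be sketched with a toy stream. The `Syncable` interface below deliberately mirrors `org.apache.hadoop.fs.Syncable` rather than importing it, so the sketch is self-contained; the in-memory "visible" and "durable" buffers are illustrative assumptions, not how any real filesystem implements the contract.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Toy model of the Syncable contract: hflush() makes written data visible
// to other readers; hsync() additionally persists it to durable storage.
public class SyncableDemo {

    interface Syncable {
        void hflush() throws IOException; // flush client buffers; data visible to readers
        void hsync() throws IOException;  // as hflush(), plus durable on the store
    }

    static class DemoOutputStream extends OutputStream implements Syncable {
        private final ByteArrayOutputStream buffered = new ByteArrayOutputStream();
        private byte[] visible = new byte[0]; // what other readers can see
        private byte[] durable = new byte[0]; // what survives a crash

        @Override
        public void write(int b) {
            buffered.write(b); // a plain write() promises nothing about visibility
        }

        @Override
        public void hflush() {
            visible = buffered.toByteArray();
        }

        @Override
        public void hsync() {
            hflush();
            durable = visible.clone();
        }

        int visibleLength() { return visible.length; }
        int durableLength() { return durable.length; }
    }

    public static void main(String[] args) throws IOException {
        DemoOutputStream out = new DemoOutputStream();
        out.write(new byte[]{1, 2, 3});
        System.out.println(out.visibleLength() + " " + out.durableLength()); // 0 0
        out.hflush();
        System.out.println(out.visibleLength() + " " + out.durableLength()); // 3 0
        out.hsync();
        System.out.println(out.visibleLength() + " " + out.durableLength()); // 3 3
    }
}
```

The point of writing the contract down is exactly this state machine: bytes passed to `write()` are in no defined state until `hflush()`/`hsync()` is called, which is also why the contract tests touched by HADOOP-13327 needed updating.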
[jira] [Work logged] (HADOOP-17528) Not closing an SFTP File System instance prevents JVM from exiting.
[ https://issues.apache.org/jira/browse/HADOOP-17528?focusedWorklogId=552492&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-552492 ]

ASF GitHub Bot logged work on HADOOP-17528:
-------------------------------------------
    Author: ASF GitHub Bot
    Created on: 15/Feb/21 09:29
    Start Date: 15/Feb/21 09:29
    Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on pull request #2701:
URL: https://github.com/apache/hadoop/pull/2701#issuecomment-779088595

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 30s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 32m 9s | | trunk passed |
| +1 :green_heart: | compile | 20m 28s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 17m 45s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 6s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 31s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 1s | | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 5s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 36s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +0 :ok: | spotbugs | 2m 21s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 2m 19s | | trunk passed |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 54s | | the patch passed |
| +1 :green_heart: | compile | 20m 33s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 20m 33s | | the patch passed |
| +1 :green_heart: | compile | 17m 53s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 17m 53s | | the patch passed |
| +1 :green_heart: | checkstyle | 1m 3s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 28s | | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 13m 27s | | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 1m 4s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 1m 30s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | findbugs | 2m 22s | | the patch passed |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 17m 18s | | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 50s | | The patch does not generate ASF License warnings. |
| | | | 175m 5s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2701/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2701 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 937d084d102c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / c3134ab3a99 |
| Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2701/1/testReport/ |
| Max. process+thread count | 2197 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2701/1/console |
| versions | git=2.25.1 maven=3.6.3 findbugs=4.0.6 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
[GitHub] [hadoop] hadoop-yetus commented on pull request #2701: HADOOP-17528. Fix closing an underlying connection pool when closing SFTP File System
[GitHub] [hadoop] ferhui commented on pull request #2694: HDFS-15830. Support to make dfs.image.parallel.load reconfigurable
ferhui commented on pull request #2694: URL: https://github.com/apache/hadoop/pull/2694#issuecomment-779069487 @dineshchitlangia Thanks for the review! Let's wait for @sodonnel to take another look. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
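HDFS-15830 (PR #2694 above) makes `dfs.image.parallel.load` reconfigurable at runtime. A minimal standalone sketch of the reconfigurable-property pattern follows; this is not Hadoop's actual `ReconfigurableBase`/NameNode code, and the class and method names are illustrative. The idea is that the live value sits in an atomic holder and only whitelisted keys may be changed without a restart.

```java
import java.util.Set;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical analog of a daemon property that can be changed at runtime.
class ReconfigurableFlag {
    static final String PARALLEL_LOAD_KEY = "dfs.image.parallel.load";

    // Only keys in this set may be reconfigured while the daemon runs.
    private final Set<String> reconfigurable = Set.of(PARALLEL_LOAD_KEY);

    // Atomic holder so readers always see a consistent current value.
    private final AtomicBoolean parallelLoad = new AtomicBoolean(false);

    boolean isParallelLoadEnabled() {
        return parallelLoad.get();
    }

    // Apply a new value without restarting; reject non-reconfigurable keys.
    void reconfigureProperty(String key, String newValue) {
        if (!reconfigurable.contains(key)) {
            throw new IllegalArgumentException(
                "Property " + key + " is not reconfigurable");
        }
        parallelLoad.set(Boolean.parseBoolean(newValue));
    }
}
```

In Hadoop the equivalent whitelist check and value swap happen in the daemon's reconfiguration hook, so an admin can flip the flag on a running NameNode instead of scheduling a restart.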
[GitHub] [hadoop] hadoop-yetus commented on pull request #2696: HDFS-15834. Remove the usage of org.apache.log4j.Level
hadoop-yetus commented on pull request #2696:
URL: https://github.com/apache/hadoop/pull/2696#issuecomment-779036029

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 31s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 2s | | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | | 0m 0s | [test4tests](test4tests) | The patch appears to include 76 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 14m 34s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 20m 6s | | trunk passed |
| +1 :green_heart: | compile | 4m 49s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 4m 27s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 1m 55s | | trunk passed |
| +1 :green_heart: | mvnsite | 3m 22s | | trunk passed |
| +1 :green_heart: | shadedclient | 18m 39s | | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 2m 33s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 3m 21s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +0 :ok: | spotbugs | 1m 14s | | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +0 :ok: | findbugs | 0m 22s | | branch/hadoop-hdfs-project/hadoop-hdfs-native-client no findbugs output file (findbugsXml.xml) |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 50s | | the patch passed |
| +1 :green_heart: | compile | 4m 41s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 4m 41s | | hadoop-hdfs-project-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 generated 0 new + 692 unchanged - 79 fixed = 692 total (was 771) |
| +1 :green_heart: | compile | 4m 21s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 4m 21s | | hadoop-hdfs-project-jdkPrivateBuild-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 generated 0 new + 669 unchanged - 79 fixed = 669 total (was 748) |
| +1 :green_heart: | checkstyle | 1m 49s | | hadoop-hdfs-project: The patch generated 0 new + 1630 unchanged - 4 fixed = 1630 total (was 1634) |
| +1 :green_heart: | mvnsite | 2m 48s | | the patch passed |
| +1 :green_heart: | whitespace | 0m 1s | | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 12m 44s | | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 2m 17s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 3m 7s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +0 :ok: | findbugs | 0m 19s | | hadoop-hdfs-project/hadoop-hdfs-native-client has no data from findbugs |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 2m 20s | | hadoop-hdfs-client in the patch passed. |
| +1 :green_heart: | unit | 193m 15s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | unit | 8m 15s | | hadoop-hdfs-native-client in the patch passed. |
| -1 :x: | unit | 16m 50s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2696/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch failed. |
| +1 :green_heart: | asflicense | 0m 44s | | The patch does not generate ASF License warnings. |
| | | | 344m 19s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate |
| | hadoop.fs.contract.router.TestRouterHDFSContractCreate |
| | hadoop.fs.contract.router.TestRouterHDFSContractCreateSecure |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2696/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2696 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit
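HDFS-15834 above removes direct usage of `org.apache.log4j.Level` so that test code no longer hard-codes one logging backend. A rough standalone sketch of the idea: route every level change through a single helper so only that helper touches a concrete framework's `Level` type. Here `java.util.logging` stands in for the backend, and `LogLevelUtil` is a hypothetical name; in Hadoop the analogous facade is `GenericTestUtils.setLogLevel`, which accepts an slf4j logger and level.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Single choke point for level changes: callers never import a backend's
// Level class directly, so swapping logging frameworks touches one file.
final class LogLevelUtil {
    private LogLevelUtil() {}

    // Set the level on the given logger via the backend's API.
    static void setLogLevel(Logger logger, Level level) {
        logger.setLevel(level);
    }
}
```

With that facade in place, removing a backend import from dozens of test files (76 in this patch) becomes a mechanical change rather than a behavioral one, which is why the report shows large javac warning counts fixed with zero new ones.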