[jira] [Work logged] (HADOOP-17715) ABFS: Append blob tests with non HNS accounts fail
[ https://issues.apache.org/jira/browse/HADOOP-17715?focusedWorklogId=608958&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608958 ]

ASF GitHub Bot logged work on HADOOP-17715:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 09/Jun/21 06:44
            Start Date: 09/Jun/21 06:44
    Worklog Time Spent: 10m

Work Description: surendralilhore commented on pull request #3028:
URL: https://github.com/apache/hadoop/pull/3028#issuecomment-857429622

   Committed to trunk, please raise PRs for 3.2 and 3.3.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Issue Time Tracking
-------------------
    Worklog Id: (was: 608958)
    Time Spent: 1h 40m  (was: 1.5h)

> ABFS: Append blob tests with non HNS accounts fail
> --------------------------------------------------
>
>                 Key: HADOOP-17715
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17715
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Sneha Varma
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Append blob tests with non-HNS accounts fail.
> # The script that runs the tests should ensure that append blob tests
> with non-HNS accounts don't execute
> # There should be proper documentation stating that append blob is
> allowed only for HNS accounts

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
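The skip condition the issue asks for can be sketched as a small guard. This is a minimal illustration under stated assumptions, not the actual ABFS test code; the class and method names are hypothetical.

```java
// Hypothetical sketch of the guard described in HADOOP-17715: append-blob
// tests should only run against HNS (hierarchical namespace) accounts.
class AppendBlobTestGuard {
    static boolean shouldRunAppendBlobTests(boolean isHnsAccount,
                                            boolean appendBlobEnabled) {
        // Append blob is supported only on HNS accounts, so the
        // (append blob enabled, non-HNS account) combination is skipped.
        return !appendBlobEnabled || isHnsAccount;
    }
}
```

A test-runner script would evaluate this predicate before launching the append-blob test suite.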
[GitHub] [hadoop] surendralilhore commented on pull request #3028: HADOOP-17715 ABFS: Append blob tests with non HNS accounts fail
surendralilhore commented on pull request #3028:
URL: https://github.com/apache/hadoop/pull/3028#issuecomment-857429622

   Committed to trunk, please raise PRs for 3.2 and 3.3.
[jira] [Updated] (HADOOP-17750) Fix asf license errors in newly added files by HADOOP-17727
[ https://issues.apache.org/jira/browse/HADOOP-17750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takanobu Asanuma updated HADOOP-17750:
--------------------------------------
    Fix Version/s: 3.4.0
       Resolution: Fixed
           Status: Resolved  (was: Patch Available)

> Fix asf license errors in newly added files by HADOOP-17727
> -----------------------------------------------------------
>
>                 Key: HADOOP-17750
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17750
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Takanobu Asanuma
>            Assignee: Takanobu Asanuma
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
[jira] [Work logged] (HADOOP-17750) Fix asf license errors in newly added files by HADOOP-17727
[ https://issues.apache.org/jira/browse/HADOOP-17750?focusedWorklogId=608954&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608954 ]

ASF GitHub Bot logged work on HADOOP-17750:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 09/Jun/21 06:32
            Start Date: 09/Jun/21 06:32
    Worklog Time Spent: 10m

Work Description: tasanuma merged pull request #3083:
URL: https://github.com/apache/hadoop/pull/3083

Issue Time Tracking
-------------------
    Worklog Id: (was: 608954)
    Time Spent: 0.5h  (was: 20m)

> Fix asf license errors in newly added files by HADOOP-17727
> -----------------------------------------------------------
>
>                 Key: HADOOP-17750
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17750
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Takanobu Asanuma
>            Assignee: Takanobu Asanuma
>            Priority: Major
>              Labels: pull-request-available
>
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
[GitHub] [hadoop] tasanuma merged pull request #3083: HADOOP-17750. Fix asf license errors in newly added files by HADOOP-17727
tasanuma merged pull request #3083:
URL: https://github.com/apache/hadoop/pull/3083
[jira] [Work logged] (HADOOP-17750) Fix asf license errors in newly added files by HADOOP-17727
[ https://issues.apache.org/jira/browse/HADOOP-17750?focusedWorklogId=608952&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608952 ]

ASF GitHub Bot logged work on HADOOP-17750:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 09/Jun/21 06:29
            Start Date: 09/Jun/21 06:29
    Worklog Time Spent: 10m

Work Description: tasanuma commented on pull request #3083:
URL: https://github.com/apache/hadoop/pull/3083#issuecomment-857420711

   Thanks for reviewing it, all. Unfortunately, CI ran for over 20h and
   failed due to a timeout. I confirmed that `mvn clean apache-rat:check`
   succeeds with this PR. I will merge it.

Issue Time Tracking
-------------------
    Worklog Id: (was: 608952)
    Time Spent: 20m  (was: 10m)

> Fix asf license errors in newly added files by HADOOP-17727
> -----------------------------------------------------------
>
>                 Key: HADOOP-17750
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17750
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Takanobu Asanuma
>            Assignee: Takanobu Asanuma
>            Priority: Major
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
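For context, the `apache-rat:check` goal mentioned above fails the build when a source file lacks the ASF license header. A Java file that passes the check begins with the standard header block, roughly as follows (the class below is a placeholder, not code from the patch):

```java
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

// Placeholder class: the rat check inspects only the header comment above.
class Licensed {
    static boolean hasHeader() {
        return true;
    }
}
```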
[jira] [Updated] (HADOOP-17750) Fix asf license errors in newly added files by HADOOP-17727
[ https://issues.apache.org/jira/browse/HADOOP-17750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HADOOP-17750:
------------------------------------
    Labels: pull-request-available  (was: )

> Fix asf license errors in newly added files by HADOOP-17727
> -----------------------------------------------------------
>
>                 Key: HADOOP-17750
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17750
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Takanobu Asanuma
>            Assignee: Takanobu Asanuma
>            Priority: Major
>              Labels: pull-request-available
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
[GitHub] [hadoop] tasanuma commented on pull request #3083: HADOOP-17750. Fix asf license errors in newly added files by HADOOP-17727
tasanuma commented on pull request #3083:
URL: https://github.com/apache/hadoop/pull/3083#issuecomment-857420711

   Thanks for reviewing it, all. Unfortunately, CI ran for over 20h and
   failed due to a timeout. I confirmed that `mvn clean apache-rat:check`
   succeeds with this PR. I will merge it.
[GitHub] [hadoop] hadoop-yetus commented on pull request #3084: HDFS-16057. Make sure the order for location in ENTERING_MAINTENANCE …
hadoop-yetus commented on pull request #3084:
URL: https://github.com/apache/hadoop/pull/3084#issuecomment-857419235

   :broken_heart: **-1 overall**

   | Vote | Subsystem | Runtime | Logfile | Comment |
   |:----:|----------:|--------:|:--------|:-------:|
   | +0 :ok: | reexec | 0m 35s | | Docker mode activated. |
   |||| _ Prechecks _ |
   | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
   | +0 :ok: | codespell | 0m 1s | | codespell was not available. |
   | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
   | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: | mvninstall | 30m 26s | | trunk passed |
   | +1 :green_heart: | compile | 1m 21s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
   | +1 :green_heart: | compile | 1m 17s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | +1 :green_heart: | checkstyle | 1m 3s | | trunk passed |
   | +1 :green_heart: | mvnsite | 1m 24s | | trunk passed |
   | +1 :green_heart: | javadoc | 0m 57s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
   | +1 :green_heart: | javadoc | 1m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | +1 :green_heart: | spotbugs | 3m 8s | | trunk passed |
   | +1 :green_heart: | shadedclient | 16m 3s | | branch has no errors when building and testing our client artifacts. |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: | mvninstall | 1m 12s | | the patch passed |
   | +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
   | +1 :green_heart: | javac | 1m 15s | | the patch passed |
   | +1 :green_heart: | compile | 1m 7s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | +1 :green_heart: | javac | 1m 7s | | the patch passed |
   | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
   | +1 :green_heart: | checkstyle | 0m 52s | | the patch passed |
   | +1 :green_heart: | mvnsite | 1m 11s | | the patch passed |
   | +1 :green_heart: | javadoc | 0m 47s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
   | +1 :green_heart: | javadoc | 1m 23s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | +1 :green_heart: | spotbugs | 3m 8s | | the patch passed |
   | +1 :green_heart: | shadedclient | 15m 48s | | patch has no errors when building and testing our client artifacts. |
   |||| _ Other Tests _ |
   | +1 :green_heart: | unit | 228m 44s | | hadoop-hdfs in the patch passed. |
   | -1 :x: | asflicense | 0m 47s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3084/3/artifact/out/results-asflicense.txt) | The patch generated 2 ASF License warnings. |
   | | | 311m 59s | | |

   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3084/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3084 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 17996b1e50ba 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 702b82e873b4f2ab68b1cb76bada0c9dbd25df1c |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3084/3/testReport/ |
   | Max. process+thread count | 3166 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3084/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

   This message was automatically generated.
[GitHub] [hadoop] tasanuma merged pull request #3075: YARN-10805. Replace Guava Lists usage by Hadoop's own Lists in hadoop-yarn-project
tasanuma merged pull request #3075:
URL: https://github.com/apache/hadoop/pull/3075
[GitHub] [hadoop] tasanuma commented on pull request #3075: YARN-10805. Replace Guava Lists usage by Hadoop's own Lists in hadoop-yarn-project
tasanuma commented on pull request #3075:
URL: https://github.com/apache/hadoop/pull/3075#issuecomment-857412757

   Thanks for your contribution, @virajjasani.
[GitHub] [hadoop] tasanuma commented on pull request #3075: YARN-10805. Replace Guava Lists usage by Hadoop's own Lists in hadoop-yarn-project
tasanuma commented on pull request #3075:
URL: https://github.com/apache/hadoop/pull/3075#issuecomment-857412387

   The failed tests passed in my local environment.
[GitHub] [hadoop] tasanuma commented on pull request #3073: HDFS-16054. Replace Guava Lists usage by Hadoop's own Lists in hadoop-hdfs-project
tasanuma commented on pull request #3073:
URL: https://github.com/apache/hadoop/pull/3073#issuecomment-857410973

   Merged. Thanks again, @virajjasani.
[GitHub] [hadoop] tasanuma merged pull request #3073: HDFS-16054. Replace Guava Lists usage by Hadoop's own Lists in hadoop-hdfs-project
tasanuma merged pull request #3073:
URL: https://github.com/apache/hadoop/pull/3073
[GitHub] [hadoop] tasanuma commented on pull request #3073: HDFS-16054. Replace Guava Lists usage by Hadoop's own Lists in hadoop-hdfs-project
tasanuma commented on pull request #3073:
URL: https://github.com/apache/hadoop/pull/3073#issuecomment-857410626

   The failed tests passed in my local environment.
[jira] [Updated] (HADOOP-17752) Remove lock contention in REGISTRY of Configuration
[ https://issues.apache.org/jira/browse/HADOOP-17752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xuesen Liang updated HADOOP-17752:
----------------------------------
        Parent: HADOOP-17751
    Issue Type: Sub-task  (was: Improvement)

> Remove lock contention in REGISTRY of Configuration
> ---------------------------------------------------
>
>                 Key: HADOOP-17752
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17752
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: common
>            Reporter: Xuesen Liang
>            Priority: Major
>              Labels: pull-request-available
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> Every Configuration instance is put into *Configuration#REGISTRY* by its
> constructor. This operation is guarded by Configuration.class.
> REGISTRY is a *WeakHashMap*, which should be replaced by a
> *ConcurrentHashMap*.
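The contention described in the issue can be sketched as follows. This is a minimal illustration of the pattern, not the actual Hadoop patch; the class and method names are hypothetical. Note one caveat a real change must handle: `ConcurrentHashMap` does not hold keys weakly, so the garbage-collection behavior that `WeakHashMap` provided has to be preserved some other way (e.g. weak-reference keys).

```java
import java.util.Map;
import java.util.WeakHashMap;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the registration pattern in HADOOP-17752:
// every new instance is recorded in a shared static registry.
class Registry {
    // Before: a WeakHashMap guarded by a class-level lock, so every
    // constructor serializes on the same monitor.
    private static final Map<Object, Object> LOCKED = new WeakHashMap<>();

    static void registerLocked(Object conf) {
        synchronized (Registry.class) {
            LOCKED.put(conf, null);
        }
    }

    // After: a ConcurrentHashMap needs no external lock, so concurrent
    // constructors no longer contend on a single monitor.
    private static final Map<Object, Object> CONCURRENT = new ConcurrentHashMap<>();

    static void registerConcurrent(Object conf) {
        // ConcurrentHashMap rejects null values, so store a placeholder.
        CONCURRENT.put(conf, Boolean.TRUE);
    }

    static int registeredConcurrently() {
        return CONCURRENT.size();
    }
}
```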
[jira] [Work logged] (HADOOP-17596) ABFS: Change default Readahead Queue Depth from num(processors) to const
[ https://issues.apache.org/jira/browse/HADOOP-17596?focusedWorklogId=608930&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608930 ]

ASF GitHub Bot logged work on HADOOP-17596:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 09/Jun/21 05:36
            Start Date: 09/Jun/21 05:36
    Worklog Time Spent: 10m

Work Description: sumangala-patki commented on pull request #2795:
URL: https://github.com/apache/hadoop/pull/2795#issuecomment-857390775

   @surendralilhore thanks for updating the JIRA.
   @steveloughran yes, but planning to ensure a gap (maybe a week or two)
   between check-in to trunk and backport of PRs. Will keep track of
   pending backports.

Issue Time Tracking
-------------------
    Worklog Id: (was: 608930)
    Time Spent: 3h 50m  (was: 3h 40m)

> ABFS: Change default Readahead Queue Depth from num(processors) to const
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-17596
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17596
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.1
>            Reporter: Sumangala Patki
>            Assignee: Sumangala Patki
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> The default value of readahead queue depth is currently set to the number
> of available processors. However, this can result in one inputstream
> instance consuming more processor time. To ensure equal thread allocation
> during read for all inputstreams created in a session, we change the
> default readahead queue depth to a constant (2).
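The default-selection change described in the issue can be sketched as below. This is an illustrative sketch only; the names are hypothetical and the real constant and configuration handling live in the ABFS driver code.

```java
// Hypothetical sketch of the default change in HADOOP-17596.
class ReadAheadDefaults {
    // Before: the default depth tracked the machine, so one input stream
    // could claim readahead threads proportional to the processor count.
    static int oldDefault() {
        return Runtime.getRuntime().availableProcessors();
    }

    // After: a small constant, so all input streams opened in a session
    // get a comparable share of the readahead threads.
    static final int DEFAULT_READAHEAD_QUEUE_DEPTH = 2;

    // A non-negative configured value wins; otherwise fall back to the
    // constant default.
    static int effectiveDepth(int configured) {
        return configured >= 0 ? configured : DEFAULT_READAHEAD_QUEUE_DEPTH;
    }
}
```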
[GitHub] [hadoop] sumangala-patki commented on pull request #2795: HADOOP-17596. ABFS: Change default Readahead Queue Depth from num(processors) to const
sumangala-patki commented on pull request #2795:
URL: https://github.com/apache/hadoop/pull/2795#issuecomment-857390775

   @surendralilhore thanks for updating the JIRA.
   @steveloughran yes, but planning to ensure a gap (maybe a week or two)
   between check-in to trunk and backport of PRs. Will keep track of
   pending backports.
[jira] [Updated] (HADOOP-17715) ABFS: Append blob tests with non HNS accounts fail
[ https://issues.apache.org/jira/browse/HADOOP-17715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Surendra Singh Lilhore updated HADOOP-17715:
--------------------------------------------
    Fix Version/s: 3.4.0

> ABFS: Append blob tests with non HNS accounts fail
> --------------------------------------------------
>
>                 Key: HADOOP-17715
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17715
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Sneha Varma
>            Priority: Minor
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Append blob tests with non-HNS accounts fail.
> # The script that runs the tests should ensure that append blob tests
> with non-HNS accounts don't execute
> # There should be proper documentation stating that append blob is
> allowed only for HNS accounts
[jira] [Work logged] (HADOOP-17715) ABFS: Append blob tests with non HNS accounts fail
[ https://issues.apache.org/jira/browse/HADOOP-17715?focusedWorklogId=608927&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608927 ]

ASF GitHub Bot logged work on HADOOP-17715:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 09/Jun/21 05:24
            Start Date: 09/Jun/21 05:24
    Worklog Time Spent: 10m

Work Description: surendralilhore merged pull request #3028:
URL: https://github.com/apache/hadoop/pull/3028

Issue Time Tracking
-------------------
    Worklog Id: (was: 608927)
    Time Spent: 1.5h  (was: 1h 20m)

> ABFS: Append blob tests with non HNS accounts fail
> --------------------------------------------------
>
>                 Key: HADOOP-17715
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17715
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Sneha Varma
>            Priority: Minor
>              Labels: pull-request-available
>
>          Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Append blob tests with non-HNS accounts fail.
> # The script that runs the tests should ensure that append blob tests
> with non-HNS accounts don't execute
> # There should be proper documentation stating that append blob is
> allowed only for HNS accounts
[GitHub] [hadoop] surendralilhore merged pull request #3028: HADOOP-17715 ABFS: Append blob tests with non HNS accounts fail
surendralilhore merged pull request #3028:
URL: https://github.com/apache/hadoop/pull/3028
[jira] [Work logged] (HADOOP-17715) ABFS: Append blob tests with non HNS accounts fail
[ https://issues.apache.org/jira/browse/HADOOP-17715?focusedWorklogId=608926&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608926 ]

ASF GitHub Bot logged work on HADOOP-17715:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 09/Jun/21 05:23
            Start Date: 09/Jun/21 05:23
    Worklog Time Spent: 10m

Work Description: surendralilhore commented on pull request #3028:
URL: https://github.com/apache/hadoop/pull/3028#issuecomment-857384805

   +1

Issue Time Tracking
-------------------
    Worklog Id: (was: 608926)
    Time Spent: 1h 20m  (was: 1h 10m)

> ABFS: Append blob tests with non HNS accounts fail
> --------------------------------------------------
>
>                 Key: HADOOP-17715
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17715
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Sneha Varma
>            Priority: Minor
>              Labels: pull-request-available
>
>          Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Append blob tests with non-HNS accounts fail.
> # The script that runs the tests should ensure that append blob tests
> with non-HNS accounts don't execute
> # There should be proper documentation stating that append blob is
> allowed only for HNS accounts
[GitHub] [hadoop] surendralilhore commented on pull request #3028: HADOOP-17715 ABFS: Append blob tests with non HNS accounts fail
surendralilhore commented on pull request #3028:
URL: https://github.com/apache/hadoop/pull/3028#issuecomment-857384805

   +1
[jira] [Work logged] (HADOOP-17596) ABFS: Change default Readahead Queue Depth from num(processors) to const
[ https://issues.apache.org/jira/browse/HADOOP-17596?focusedWorklogId=608923&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608923 ]

ASF GitHub Bot logged work on HADOOP-17596:
-------------------------------------------
                Author: ASF GitHub Bot
            Created on: 09/Jun/21 05:11
            Start Date: 09/Jun/21 05:11
    Worklog Time Spent: 10m

Work Description: surendralilhore commented on pull request #2795:
URL: https://github.com/apache/hadoop/pull/2795#issuecomment-857378838

   @steveloughran, I updated the Jira.

   > Are there any plans to backport to branch-3.3? A retest of the
   > cherrypick is all which should be needed

   Yes, @sumangala-patki is working on the backport and testing it for the
   3.2 and 3.3 branches.

Issue Time Tracking
-------------------
    Worklog Id: (was: 608923)
    Time Spent: 3h 40m  (was: 3.5h)

> ABFS: Change default Readahead Queue Depth from num(processors) to const
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-17596
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17596
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.1
>            Reporter: Sumangala Patki
>            Assignee: Sumangala Patki
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> The default value of readahead queue depth is currently set to the number
> of available processors. However, this can result in one inputstream
> instance consuming more processor time. To ensure equal thread allocation
> during read for all inputstreams created in a session, we change the
> default readahead queue depth to a constant (2).
[GitHub] [hadoop] surendralilhore commented on pull request #2795: HADOOP-17596. ABFS: Change default Readahead Queue Depth from num(processors) to const
surendralilhore commented on pull request #2795:
URL: https://github.com/apache/hadoop/pull/2795#issuecomment-857378838

   @steveloughran, I updated the Jira.

   > Are there any plans to backport to branch-3.3? A retest of the
   > cherrypick is all which should be needed

   Yes, @sumangala-patki is working on the backport and testing it for the
   3.2 and 3.3 branches.
[jira] [Updated] (HADOOP-17596) ABFS: Change default Readahead Queue Depth from num(processors) to const
[ https://issues.apache.org/jira/browse/HADOOP-17596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Surendra Singh Lilhore updated HADOOP-17596:
--------------------------------------------
    Fix Version/s: 3.4.0

> ABFS: Change default Readahead Queue Depth from num(processors) to const
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-17596
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17596
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.1
>            Reporter: Sumangala Patki
>            Assignee: Sumangala Patki
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0
>
>          Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> The default value of readahead queue depth is currently set to the number
> of available processors. However, this can result in one inputstream
> instance consuming more processor time. To ensure equal thread allocation
> during read for all inputstreams created in a session, we change the
> default readahead queue depth to a constant (2).
[jira] [Updated] (HADOOP-17752) Remove lock contention in REGISTRY of Configuration
[ https://issues.apache.org/jira/browse/HADOOP-17752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HADOOP-17752:
------------------------------------
    Labels: pull-request-available  (was: )

> Remove lock contention in REGISTRY of Configuration
> ---------------------------------------------------
>
>                 Key: HADOOP-17752
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17752
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: common
>            Reporter: Xuesen Liang
>            Priority: Major
>              Labels: pull-request-available
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> Every Configuration instance is put into *Configuration#REGISTRY* by its
> constructor. This operation is guarded by Configuration.class.
> REGISTRY is a *WeakHashMap*, which should be replaced by a
> *ConcurrentHashMap*.
[jira] [Work logged] (HADOOP-17752) Remove lock contention in REGISTRY of Configuration
[ https://issues.apache.org/jira/browse/HADOOP-17752?focusedWorklogId=608913&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608913 ] ASF GitHub Bot logged work on HADOOP-17752: --- Author: ASF GitHub Bot Created on: 09/Jun/21 04:11 Start Date: 09/Jun/21 04:11 Worklog Time Spent: 10m Work Description: liangxs opened a new pull request #3085: URL: https://github.com/apache/hadoop/pull/3085 JIRA: [HADOOP-17752](https://issues.apache.org/jira/browse/HADOOP-17752) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 608913) Remaining Estimate: 0h Time Spent: 10m > Remove lock contention in REGISTRY of Configuration > --- > > Key: HADOOP-17752 > URL: https://issues.apache.org/jira/browse/HADOOP-17752 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Xuesen Liang >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > Every Configuration instance is put into *Configuration#REGISTRY* by its > constructor. This operation is guarded by Configuration.class. > REGISTRY is a *WeakHashMap*, which should be replaced by *ConcurrentHashMap*. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] liangxs opened a new pull request #3085: HADOOP-17752. Remove lock contention in REGISTRY of Configuration
liangxs opened a new pull request #3085: URL: https://github.com/apache/hadoop/pull/3085 JIRA: [HADOOP-17752](https://issues.apache.org/jira/browse/HADOOP-17752) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] aajisaka commented on a change in pull request #3065: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
aajisaka commented on a change in pull request #3065: URL: https://github.com/apache/hadoop/pull/3065#discussion_r647951023 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java ## @@ -1996,7 +1996,12 @@ private void metaSave(PrintWriter out) { LightWeightHashSet openFileIds = new LightWeightHashSet<>(); for (DatanodeDescriptor dataNode : blockManager.getDatanodeManager().getDatanodes()) { - for (long ucFileId : dataNode.getLeavingServiceStatus().getOpenFiles()) { + // Sort open files + LightWeightHashSet dnOpenFiles = + dataNode.getLeavingServiceStatus().getOpenFiles(); + Long[] dnOpenFileIds = new Long[dnOpenFiles.size()]; + Arrays.sort(dnOpenFiles.toArray(dnOpenFileIds)); + for (Long ucFileId : dnOpenFileIds) { INode ucFile = getFSDirectory().getInode(ucFileId); if (ucFile == null || ucFileId <= prevId || openFileIds.contains(ucFileId)) { Review comment: Yes, I'm +1 to keep the same as before. I'll file a new jira. Thanks! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #3075: YARN-10805. Replace Guava Lists usage by Hadoop's own Lists in hadoop-yarn-project
hadoop-yetus commented on pull request #3075: URL: https://github.com/apache/hadoop/pull/3075#issuecomment-857346126 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 38s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 37 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 52s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 21m 17s | | trunk passed | | +1 :green_heart: | compile | 9m 33s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 8m 8s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 51s | | trunk passed | | +1 :green_heart: | mvnsite | 9m 58s | | trunk passed | | +1 :green_heart: | javadoc | 6m 51s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 6m 22s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 20m 21s | | trunk passed | | +1 :green_heart: | shadedclient | 13m 58s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 8m 54s | | the patch passed | | +1 :green_heart: | compile | 9m 11s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 9m 11s | | the patch passed | | +1 :green_heart: | compile | 8m 4s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 8m 4s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 46s | | the patch passed | | +1 :green_heart: | mvnsite | 8m 59s | | the patch passed | | +1 :green_heart: | xml | 0m 1s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 6m 25s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 5m 30s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 21m 42s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 13s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 228m 41s | [/patch-unit-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3075/6/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn.txt) | hadoop-yarn in the patch passed. | | +1 :green_heart: | unit | 5m 11s | | hadoop-yarn-common in the patch passed. | | +1 :green_heart: | unit | 1m 38s | | hadoop-yarn-server-web-proxy in the patch passed. | | +1 :green_heart: | unit | 96m 45s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | unit | 23m 18s | | hadoop-yarn-server-nodemanager in the patch passed. | | +1 :green_heart: | unit | 28m 27s | | hadoop-yarn-client in the patch passed. | | +1 :green_heart: | unit | 20m 47s | | hadoop-yarn-services-core in the patch passed. 
| | +1 :green_heart: | unit | 2m 27s | | hadoop-yarn-services-api in the patch passed. | | -1 :x: | asflicense | 1m 2s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3075/6/artifact/out/results-asflicense.txt) | The patch generated 2 ASF License warnings. | | | | 610m 24s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.yarn.server.router.clientrm.TestFederationClientInterceptor | | | hadoop.yarn.server.timelineservice.storage.common.TestHBaseTimelineStorageUtils | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3075/6/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3075 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell
[GitHub] [hadoop] hadoop-yetus commented on pull request #3078: HDFS-16055. Quota is not preserved in snapshot INode
hadoop-yetus commented on pull request #3078: URL: https://github.com/apache/hadoop/pull/3078#issuecomment-857345327 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 20m 39s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 8s | | trunk passed | | +1 :green_heart: | compile | 1m 24s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 13s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 1s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 23s | | trunk passed | | +1 :green_heart: | javadoc | 0m 55s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 25s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 15s | | trunk passed | | +1 :green_heart: | shadedclient | 18m 38s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 14s | | the patch passed | | +1 :green_heart: | compile | 1m 18s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 18s | | the patch passed | | +1 :green_heart: | compile | 1m 9s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 9s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 55s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3078/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 33 unchanged - 0 fixed = 34 total (was 33) | | +1 :green_heart: | mvnsite | 1m 16s | | the patch passed | | +1 :green_heart: | javadoc | 0m 49s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 18s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 21s | | the patch passed | | +1 :green_heart: | shadedclient | 20m 0s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 312m 26s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3078/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | -1 :x: | asflicense | 0m 38s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3078/2/artifact/out/results-asflicense.txt) | The patch generated 2 ASF License warnings. 
| | | | 424m 48s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport | | | hadoop.hdfs.server.namenode.snapshot.TestSnapRootDescendantDiff | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3078/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3078 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 07c24f28696e 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / bc5f6c8a65856ea0d658a05fad4173e7f4173f21 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2
[GitHub] [hadoop] hadoop-yetus commented on pull request #3073: HDFS-16054. Replace Guava Lists usage by Hadoop's own Lists in hadoop-hdfs-project
hadoop-yetus commented on pull request #3073: URL: https://github.com/apache/hadoop/pull/3073#issuecomment-857329518 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 54s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 2s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 69 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 40s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 24m 47s | | trunk passed | | +1 :green_heart: | compile | 6m 35s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 6m 22s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 2m 1s | | trunk passed | | +1 :green_heart: | mvnsite | 4m 17s | | trunk passed | | +1 :green_heart: | javadoc | 3m 12s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 3m 50s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 9m 22s | | trunk passed | | +1 :green_heart: | shadedclient | 17m 11s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 39s | | the patch passed | | +1 :green_heart: | compile | 6m 39s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 6m 39s | | the patch passed | | +1 :green_heart: | compile | 6m 18s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 6m 18s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 36s | | hadoop-hdfs-project: The patch generated 0 new + 5020 unchanged - 5 fixed = 5020 total (was 5025) | | +1 :green_heart: | mvnsite | 3m 50s | | the patch passed | | +1 :green_heart: | xml | 0m 7s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 2m 45s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 3m 27s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 10m 2s | | the patch passed | | +1 :green_heart: | shadedclient | 17m 3s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 22s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 375m 43s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3073/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 9m 20s | | hadoop-hdfs-httpfs in the patch passed. | | +1 :green_heart: | unit | 3m 23s | | hadoop-hdfs-nfs in the patch passed. | | +1 :green_heart: | unit | 26m 12s | | hadoop-hdfs-rbf in the patch passed. 
| | -1 :x: | asflicense | 0m 40s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3073/4/artifact/out/results-asflicense.txt) | The patch generated 2 ASF License warnings. | | | | 567m 12s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.TestHDFSFileSystemContract | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3073/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3073 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml spotbugs checkstyle | | uname | Linux 40e951d3ca4a 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64
[GitHub] [hadoop] hadoop-yetus commented on pull request #3073: HDFS-16054. Replace Guava Lists usage by Hadoop's own Lists in hadoop-hdfs-project
hadoop-yetus commented on pull request #3073: URL: https://github.com/apache/hadoop/pull/3073#issuecomment-857328284 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 49s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 3s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 69 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 33s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 24m 33s | | trunk passed | | +1 :green_heart: | compile | 6m 46s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 6m 12s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 41s | | trunk passed | | +1 :green_heart: | mvnsite | 4m 18s | | trunk passed | | +1 :green_heart: | javadoc | 3m 13s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 3m 50s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 9m 17s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 58s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 41s | | the patch passed | | +1 :green_heart: | compile | 6m 34s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 6m 34s | | the patch passed | | +1 :green_heart: | compile | 6m 6s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 6m 6s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 34s | | hadoop-hdfs-project: The patch generated 0 new + 5020 unchanged - 5 fixed = 5020 total (was 5025) | | +1 :green_heart: | mvnsite | 3m 54s | | the patch passed | | +1 :green_heart: | xml | 0m 6s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 2m 47s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 3m 26s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 10m 10s | | the patch passed | | +1 :green_heart: | shadedclient | 17m 15s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 2m 19s | | hadoop-hdfs-client in the patch passed. | | -1 :x: | unit | 371m 32s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3073/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | +1 :green_heart: | unit | 9m 28s | | hadoop-hdfs-httpfs in the patch passed. | | +1 :green_heart: | unit | 3m 19s | | hadoop-hdfs-nfs in the patch passed. | | +1 :green_heart: | unit | 26m 1s | | hadoop-hdfs-rbf in the patch passed. 
| | -1 :x: | asflicense | 0m 42s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3073/5/artifact/out/results-asflicense.txt) | The patch generated 2 ASF License warnings. | | | | 562m 6s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.TestSnapshotCommands | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.hdfs.TestHDFSFileSystemContract | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3073/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3073 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml spotbugs checkstyle | | uname | Linux 257cf2869aa9 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 0
[jira] [Created] (HADOOP-17752) Remove lock contention in REGISTRY of Configuration
Xuesen Liang created HADOOP-17752: - Summary: Remove lock contention in REGISTRY of Configuration Key: HADOOP-17752 URL: https://issues.apache.org/jira/browse/HADOOP-17752 Project: Hadoop Common Issue Type: Improvement Components: common Reporter: Xuesen Liang Every Configuration instance is put into *Configuration#REGISTRY* by its constructor. This operation is guarded by Configuration.class. REGISTRY is a *WeakHashMap*, which should be replaced by *ConcurrentHashMap*. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
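The contention described above comes from every constructor serializing on a class-wide lock to register the new instance. A minimal sketch of the before/after registration pattern (the `Registry` class and its fields are illustrative, not the actual `Configuration` source; note also that a plain `ConcurrentHashMap`-backed set does not keep `WeakHashMap`'s weak-key semantics, so a real change would need to address reclamation separately):

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.WeakHashMap;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative registry: every instance registers itself in its constructor.
public class Registry {
    // Before: WeakHashMap is not thread-safe, so all constructors must
    // serialize on a single lock to register -- the contention point.
    private static final Map<Registry, Object> OLD_REGISTRY =
        Collections.synchronizedMap(new WeakHashMap<>());

    // After: lock-free registration under concurrent constructors. Caveat:
    // unlike WeakHashMap, entries are strongly held and must be removed
    // explicitly or they leak.
    private static final Set<Registry> NEW_REGISTRY = ConcurrentHashMap.newKeySet();

    public Registry() {
        NEW_REGISTRY.add(this); // no class-wide lock taken
    }

    static int size() {
        return NEW_REGISTRY.size();
    }
}
```

Concurrent constructors then proceed without blocking each other on the class monitor.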
[GitHub] [hadoop] jojochuang merged pull request #3081: YARN-10809. Missing dependency causing NoClassDefFoundError in TestHBaseTimelineStorageUtils
jojochuang merged pull request #3081: URL: https://github.com/apache/hadoop/pull/3081 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] jojochuang merged pull request #3056: HDFS-15916. Addendum. DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for snapshotdiff.
jojochuang merged pull request #3056: URL: https://github.com/apache/hadoop/pull/3056 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] jojochuang commented on pull request #3054: HDFS-15916. DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for snapshotdiff. (#2863). Contributed by Ayush Saxena.
jojochuang commented on pull request #3054: URL: https://github.com/apache/hadoop/pull/3054#issuecomment-857323813 Merging it. No test failures. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] tasanuma commented on pull request #3075: YARN-10805. Replace Guava Lists usage by Hadoop's own Lists in hadoop-yarn-project
tasanuma commented on pull request #3075: URL: https://github.com/apache/hadoop/pull/3075#issuecomment-857323736 +1, pending Jenkins. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] jojochuang merged pull request #3054: HDFS-15916. DistCp: Backward compatibility: Distcp fails from Hadoop 3 to Hadoop 2 for snapshotdiff. (#2863). Contributed by Ayush Saxena.
jojochuang merged pull request #3054: URL: https://github.com/apache/hadoop/pull/3054 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] tomscut commented on pull request #3084: HDFS-16057. Make sure the order for location in ENTERING_MAINTENANCE …
tomscut commented on pull request #3084: URL: https://github.com/apache/hadoop/pull/3084#issuecomment-857320247 > The fix makes sense to me. > > It would be great if the test is written as a unit test for DatanodeManager rather than a cluster test. Thanks @jojochuang for your review. I will use TestSortLocatedStripedBlock as a reference and make some changes. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17751) Reduce lock contention in org.apache.hadoop.conf.Configuration
[ https://issues.apache.org/jira/browse/HADOOP-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuesen Liang updated HADOOP-17751: -- Labels: Umbrella (was: ) > Reduce lock contention in org.apache.hadoop.conf.Configuration > -- > > Key: HADOOP-17751 > URL: https://issues.apache.org/jira/browse/HADOOP-17751 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Xuesen Liang >Priority: Major > Labels: Umbrella > > There are many locks in class *Configuration.* > These locks are bad for performance. > Some locks can be removed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-17751) Reduce lock contention in org.apache.hadoop.conf.Configuration
Xuesen Liang created HADOOP-17751: - Summary: Reduce lock contention in org.apache.hadoop.conf.Configuration Key: HADOOP-17751 URL: https://issues.apache.org/jira/browse/HADOOP-17751 Project: Hadoop Common Issue Type: Improvement Components: common Reporter: Xuesen Liang There are many locks in class *Configuration.* These locks are bad for performance. Some locks can be removed. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] tomscut commented on pull request #3084: HDFS-16057. Make sure the order for location in ENTERING_MAINTENANCE …
tomscut commented on pull request #3084: URL: https://github.com/apache/hadoop/pull/3084#issuecomment-857305808 Hi @zhe-thoughts @rakeshadr @umbrant @yzhangal , could you please review the code? Thank you. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ferhui commented on a change in pull request #3065: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
ferhui commented on a change in pull request #3065: URL: https://github.com/apache/hadoop/pull/3065#discussion_r647902665 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java ## @@ -1996,7 +1996,12 @@ private void metaSave(PrintWriter out) { LightWeightHashSet openFileIds = new LightWeightHashSet<>(); for (DatanodeDescriptor dataNode : blockManager.getDatanodeManager().getDatanodes()) { - for (long ucFileId : dataNode.getLeavingServiceStatus().getOpenFiles()) { + // Sort open files + LightWeightHashSet dnOpenFiles = + dataNode.getLeavingServiceStatus().getOpenFiles(); + Long[] dnOpenFileIds = new Long[dnOpenFiles.size()]; + Arrays.sort(dnOpenFiles.toArray(dnOpenFileIds)); + for (Long ucFileId : dnOpenFileIds) { INode ucFile = getFSDirectory().getInode(ucFileId); if (ucFile == null || ucFileId <= prevId || openFileIds.contains(ucFileId)) { Review comment: @aajisaka Thanks, you are right, it has a bug. How about keeping the behavior the same as before, i.e. sorting the open files? Should we file a new jira for this bug? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
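The pattern discussed in the review above is subtle: `Collection.toArray(T[])` only fills the array you pass in when that array is large enough; otherwise it allocates and returns a new array, so sorting the return value and then iterating the originally allocated array can traverse unsorted or null-padded data if the collection's size changed in between. A minimal sketch of the safer idiom (class and method names are illustrative; a plain `HashSet` stands in for `LightWeightHashSet`):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch: sort the array you actually iterate. toArray(new Long[0]) always
// returns the (possibly freshly allocated) array holding the elements, so
// sorting that return value and iterating it is safe regardless of any
// mismatch between a pre-computed size() and the set's current size.
public class SortOpenFiles {
    static Long[] sortedIds(Set<Long> openFiles) {
        Long[] ids = openFiles.toArray(new Long[0]);
        Arrays.sort(ids); // sorts the array we will iterate, in place
        return ids;
    }

    public static void main(String[] args) {
        Set<Long> open = new HashSet<>(Arrays.asList(30L, 10L, 20L));
        System.out.println(Arrays.toString(sortedIds(open))); // [10, 20, 30]
    }
}
```

Iterating the returned array then visits the open-file IDs in ascending order, which is what the `ucFileId <= prevId` deduplication check in the quoted diff relies on.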
[GitHub] [hadoop] hadoop-yetus commented on pull request #3082: [Do not commit] Exclude JSON files from RAT check
hadoop-yetus commented on pull request #3082: URL: https://github.com/apache/hadoop/pull/3082#issuecomment-857292313 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 14m 22s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 33m 34s | | trunk passed | | +1 :green_heart: | compile | 23m 41s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 20m 15s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | mvnsite | 27m 1s | | trunk passed | | +1 :green_heart: | javadoc | 8m 50s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 9m 21s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | shadedclient | 137m 18s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 23m 8s | | the patch passed | | +1 :green_heart: | compile | 29m 0s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 29m 0s | | the patch passed | | +1 :green_heart: | compile | 25m 15s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 25m 15s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | mvnsite | 22m 22s | | the patch passed | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 8m 19s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 8m 14s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | shadedclient | 31m 31s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 808m 45s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3082/1/artifact/out/patch-unit-root.txt) | root in the patch passed. | | +1 :green_heart: | asflicense | 1m 41s | | The patch does not generate ASF License warnings. | | | | 1095m 42s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.yarn.server.timelineservice.storage.common.TestHBaseTimelineStorageUtils | | | hadoop.yarn.server.router.clientrm.TestFederationClientInterceptor | | | hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3082/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3082 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml | | uname | Linux 6d57cd7dabf0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / da43ad3182d5b3da9d3d05ede6762d84656bbfe8 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3082/1/testReport/ | | Max. process+thread count | 2884 (vs. ulimit of 5500) | | modules | C: . U: . | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3082/1/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #2998: HDFS-16016. BPServiceActor to provide new thread to handle IBR
hadoop-yetus commented on pull request #2998: URL: https://github.com/apache/hadoop/pull/2998#issuecomment-857230783 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 20m 13s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 44s | | trunk passed | | +1 :green_heart: | compile | 1m 25s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 22s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 8s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 30s | | trunk passed | | +1 :green_heart: | javadoc | 1m 1s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 33s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 18s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 24s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 17s | | the patch passed | | +1 :green_heart: | compile | 1m 18s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 18s | | the patch passed | | +1 :green_heart: | compile | 1m 11s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 11s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 0m 56s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 16s | | the patch passed | | +1 :green_heart: | javadoc | 0m 47s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 24s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 18s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 5s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 474m 15s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2998/22/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | -1 :x: | asflicense | 0m 48s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2998/22/artifact/out/results-asflicense.txt) | The patch generated 2 ASF License warnings. 
| | | | 580m 1s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS | | | hadoop.hdfs.web.TestWebHdfsFileSystemContract | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand | | | hadoop.hdfs.TestViewDistributedFileSystemContract | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.TestSnapshotCommands | | | hadoop.hdfs.TestHDFSFileSystemContract | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeHdfsFileSystemContract | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2998/22/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2998 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux f2d1edcb243a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | g
[GitHub] [hadoop] hadoop-yetus commented on pull request #2998: HDFS-16016. BPServiceActor to provide new thread to handle IBR
hadoop-yetus commented on pull request #2998: URL: https://github.com/apache/hadoop/pull/2998#issuecomment-857228763 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 18m 7s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 56s | | trunk passed | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 18s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 5s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 30s | | trunk passed | | +1 :green_heart: | javadoc | 1m 2s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 35s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 19s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 30s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 14s | | the patch passed | | +1 :green_heart: | compile | 1m 18s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 18s | | the patch passed | | +1 :green_heart: | compile | 1m 10s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 10s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | +1 :green_heart: | checkstyle | 0m 56s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 15s | | the patch passed | | +1 :green_heart: | javadoc | 0m 47s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 23s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 18s | | the patch passed | | +1 :green_heart: | shadedclient | 16m 24s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 470m 11s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2998/23/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. | | -1 :x: | asflicense | 0m 48s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2998/23/artifact/out/results-asflicense.txt) | The patch generated 2 ASF License warnings. 
| | | | 574m 8s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS | | | hadoop.hdfs.web.TestWebHdfsFileSystemContract | | | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor | | | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand | | | hadoop.hdfs.TestDFSShell | | | hadoop.hdfs.TestSnapshotCommands | | | hadoop.hdfs.TestHDFSFileSystemContract | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | | | hadoop.hdfs.server.namenode.ha.TestEditLogTailer | | | hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme | | | hadoop.hdfs.server.balancer.TestBalancer | | | hadoop.hdfs.server.namenode.TestDecommissioningStatus | | | hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeHdfsFileSystemContract | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2998/23/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/2998 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux cf16119a4eea 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revisi
[GitHub] [hadoop] hadoop-yetus commented on pull request #3075: YARN-10805. Replace Guava Lists usage by Hadoop's own Lists in hadoop-yarn-project
hadoop-yetus commented on pull request #3075: URL: https://github.com/apache/hadoop/pull/3075#issuecomment-857221708 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 8m 42s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 37 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 35s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 56s | | trunk passed | | +1 :green_heart: | compile | 9m 15s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 7m 54s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 52s | | trunk passed | | +1 :green_heart: | mvnsite | 9m 34s | | trunk passed | | +1 :green_heart: | javadoc | 6m 48s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 6m 6s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 19m 28s | | trunk passed | | +1 :green_heart: | shadedclient | 14m 13s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 28s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 8m 41s | | the patch passed | | +1 :green_heart: | compile | 9m 0s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 9m 0s | | the patch passed | | +1 :green_heart: | compile | 9m 31s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 9m 31s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 2m 3s | | the patch passed | | +1 :green_heart: | mvnsite | 10m 38s | | the patch passed | | +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. | | +1 :green_heart: | javadoc | 7m 5s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 6m 4s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 25m 20s | | the patch passed | | +1 :green_heart: | shadedclient | 18m 36s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 249m 33s | [/patch-unit-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3075/5/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn.txt) | hadoop-yarn in the patch passed. | | +1 :green_heart: | unit | 5m 8s | | hadoop-yarn-common in the patch passed. | | +1 :green_heart: | unit | 1m 20s | | hadoop-yarn-server-web-proxy in the patch passed. | | +1 :green_heart: | unit | 96m 47s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | unit | 23m 18s | | hadoop-yarn-server-nodemanager in the patch passed. | | +1 :green_heart: | unit | 28m 2s | | hadoop-yarn-client in the patch passed. | | +1 :green_heart: | unit | 21m 17s | | hadoop-yarn-services-core in the patch passed. 
| | +1 :green_heart: | unit | 2m 14s | | hadoop-yarn-services-api in the patch passed. | | -1 :x: | asflicense | 1m 2s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3075/5/artifact/out/results-asflicense.txt) | The patch generated 2 ASF License warnings. | | | | 646m 49s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.yarn.server.timelineservice.storage.common.TestHBaseTimelineStorageUtils | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3075/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3075 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml | | uname | Linux 924d53bfe200 4.15.0-60-generic #67-Ubuntu SMP Thu Aug
[jira] [Work logged] (HADOOP-17745) ADLS client can throw an IOException when it should throw an InterruptedIOException
[ https://issues.apache.org/jira/browse/HADOOP-17745?focusedWorklogId=608761&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608761 ] ASF GitHub Bot logged work on HADOOP-17745: --- Author: ASF GitHub Bot Created on: 08/Jun/21 21:20 Start Date: 08/Jun/21 21:20 Worklog Time Spent: 10m Work Description: steveloughran commented on a change in pull request #3076: URL: https://github.com/apache/hadoop/pull/3076#discussion_r647799533 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java ## @@ -482,6 +482,12 @@ public static IOException wrapException(final String path, if (exception instanceof InterruptedIOException || exception instanceof PathIOException) { return exception; +} else if (exception.getCause() != null +&& exception.getCause() instanceof InterruptedException) { Review comment: you can get rid of the !=null check, as L486 does that implicitly ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtilsWrapExceptionSuite.java ## @@ -0,0 +1,56 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.io; + +import java.io.IOException; +import java.io.InterruptedIOException; + +import org.junit.Assert; +import org.junit.Test; + +public class TestIOUtilsWrapExceptionSuite extends Assert { Review comment: extends AbstractHadoopTestBase ; this sets up a timeout and names the test thread ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtilsWrapExceptionSuite.java ## @@ -0,0 +1,56 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.io; + +import java.io.IOException; +import java.io.InterruptedIOException; + +import org.junit.Assert; +import org.junit.Test; + +public class TestIOUtilsWrapExceptionSuite extends Assert { +@Test +public void testWrapExceptionWithInterruptedException() throws Exception { +InterruptedIOException inputException = new InterruptedIOException("message"); +NullPointerException causeException = new NullPointerException("cause"); +inputException.initCause(causeException); +Exception outputException = IOUtils.wrapException("path", "methodName", inputException); + +// The new exception should retain the input message, cause, and type +assertTrue(outputException instanceof InterruptedIOException); +assertTrue(outputException.getCause() instanceof NullPointerException); +assertEquals(outputException.getMessage(), inputException.getMessage()); +assertEquals(outputException.getCause(), inputException.getCause()); +} + +@Test +public void testWrapExceptionWithInterruptedCauseException() throws Exception { +IOException inputException = new IOException("message"); +InterruptedException causeException = new InterruptedException("cause"); +inputException.initCause(causeException); +Exception outputException = IOUtils.wrapException("path", "methodName", inputException); + +// The new exception should retain the input message and cause +// but be an InterruptedIOException because the cause was an InterruptedException +
[GitHub] [hadoop] steveloughran commented on a change in pull request #3076: HADOOP-17745. Wrap IOException with InterruptedException cause properly
steveloughran commented on a change in pull request #3076: URL: https://github.com/apache/hadoop/pull/3076#discussion_r647799533 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java ## @@ -482,6 +482,12 @@ public static IOException wrapException(final String path, if (exception instanceof InterruptedIOException || exception instanceof PathIOException) { return exception; +} else if (exception.getCause() != null +&& exception.getCause() instanceof InterruptedException) { Review comment: you can get rid of the !=null check, as L486 does that implicitly ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtilsWrapExceptionSuite.java ## @@ -0,0 +1,56 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.io; + +import java.io.IOException; +import java.io.InterruptedIOException; + +import org.junit.Assert; +import org.junit.Test; + +public class TestIOUtilsWrapExceptionSuite extends Assert { Review comment: extends AbstractHadoopTestBase ; this sets up a timeout and names the test thread ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtilsWrapExceptionSuite.java ## @@ -0,0 +1,56 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.io; + +import java.io.IOException; +import java.io.InterruptedIOException; + +import org.junit.Assert; +import org.junit.Test; + +public class TestIOUtilsWrapExceptionSuite extends Assert { +@Test +public void testWrapExceptionWithInterruptedException() throws Exception { +InterruptedIOException inputException = new InterruptedIOException("message"); +NullPointerException causeException = new NullPointerException("cause"); +inputException.initCause(causeException); +Exception outputException = IOUtils.wrapException("path", "methodName", inputException); + +// The new exception should retain the input message, cause, and type +assertTrue(outputException instanceof InterruptedIOException); +assertTrue(outputException.getCause() instanceof NullPointerException); +assertEquals(outputException.getMessage(), inputException.getMessage()); +assertEquals(outputException.getCause(), inputException.getCause()); +} + +@Test +public void testWrapExceptionWithInterruptedCauseException() throws Exception { +IOException inputException = new IOException("message"); +InterruptedException causeException = new InterruptedException("cause"); +inputException.initCause(causeException); +Exception outputException = IOUtils.wrapException("path", "methodName", inputException); + +// The new exception should retain the input message and cause +// but be an InterruptedIOException because the cause was an InterruptedException +assertTrue(outputException instanceof InterruptedIOException); Review comment: same here, embrace AssertJ. It's better, mostly. ## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtilsWrapExceptionSuite.java ## @@ -0,0 +1,56 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * di
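The review above concerns converting an IOException whose cause is an InterruptedException into an InterruptedIOException. The following is a minimal, self-contained sketch of that conversion, not the actual `IOUtils.wrapException` implementation; the class and method names here are hypothetical. It also illustrates the reviewer's point that the explicit `!= null` check is redundant, because `instanceof` already evaluates to false for `null`.

```java
import java.io.IOException;
import java.io.InterruptedIOException;

public class WrapExceptionSketch {

    // Hypothetical stand-in for IOUtils.wrapException. If the original
    // exception was caused by an InterruptedException, return an
    // InterruptedIOException so interruption-aware callers can still
    // shut down gracefully.
    public static IOException wrap(String path, String methodName,
            IOException exception) {
        if (exception instanceof InterruptedIOException) {
            return exception;  // already the right type
        }
        // Note: no "getCause() != null" test is needed here; instanceof
        // is false when the cause is null.
        if (exception.getCause() instanceof InterruptedException) {
            InterruptedIOException wrapped =
                new InterruptedIOException(exception.getMessage());
            wrapped.initCause(exception.getCause());
            return wrapped;
        }
        return new IOException(
            methodName + " on " + path + ": " + exception, exception);
    }

    public static void main(String[] args) {
        IOException in = new IOException("message");
        in.initCause(new InterruptedException("cause"));
        // prints "true": the wrapper detects the interrupted cause
        System.out.println(
            wrap("path", "methodName", in) instanceof InterruptedIOException);
    }
}
```

The second test case quoted above exercises exactly this branch: an IOException whose cause is an InterruptedException comes back as an InterruptedIOException with the message and cause preserved.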
[jira] [Work logged] (HADOOP-17745) ADLS client can throw an IOException when it should throw an InterruptedIOException
[ https://issues.apache.org/jira/browse/HADOOP-17745?focusedWorklogId=608757&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608757 ] ASF GitHub Bot logged work on HADOOP-17745: --- Author: ASF GitHub Bot Created on: 08/Jun/21 21:15 Start Date: 08/Jun/21 21:15 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #3076: URL: https://github.com/apache/hadoop/pull/3076#issuecomment-857157506 aah, you need to look at the [Testing Azure](https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md) doc. I can see the code here doesn't directly go near the store, but as it's called from ABFS you are going to have to do a test run. Sorry. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 608757) Time Spent: 1h (was: 50m) > ADLS client can throw an IOException when it should throw an > InterruptedIOException > --- > > Key: HADOOP-17745 > URL: https://issues.apache.org/jira/browse/HADOOP-17745 > Project: Hadoop Common > Issue Type: Bug >Reporter: Eric Maynard >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > The Azure client sometimes throws an IOException with an InterruptedException > cause which can be converted to an InterruptedIOException. This is important > for downstream consumers that rely on an InterruptedIOException to gracefully > close. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on pull request #3076: HADOOP-17745. Wrap IOException with InterruptedException cause properly
steveloughran commented on pull request #3076: URL: https://github.com/apache/hadoop/pull/3076#issuecomment-857157506 aah, you need to look at the [Testing Azure](https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md) doc. I can see the code here doesn't directly go near the store, but as it's called from ABFS you are going to have to do a test run. Sorry. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17725) Improve error message for token providers in ABFS
[ https://issues.apache.org/jira/browse/HADOOP-17725?focusedWorklogId=608756&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608756 ] ASF GitHub Bot logged work on HADOOP-17725: --- Author: ASF GitHub Bot Created on: 08/Jun/21 21:14 Start Date: 08/Jun/21 21:14 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #3041: URL: https://github.com/apache/hadoop/pull/3041#issuecomment-857155825 ...and it's done. Thank you everyone for your work. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 608756) Time Spent: 6h 20m (was: 6h 10m) > Improve error message for token providers in ABFS > - > > Key: HADOOP-17725 > URL: https://issues.apache.org/jira/browse/HADOOP-17725 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure, hadoop-thirdparty >Affects Versions: 3.3.0 >Reporter: Ivan >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.3.2 > > Time Spent: 6h 20m > Remaining Estimate: 0h > > It would be good to improve error messages for token providers in ABFS. > Currently, when a configuration key is not found or mistyped, the error is > not very clear on what went wrong. It would be good to indicate that the key > was required but not found in Hadoop configuration when creating a token > provider. 
> For example, when running the following code: > {code:java} > import org.apache.hadoop.conf._ > import org.apache.hadoop.fs._ > val conf = new Configuration() > conf.set("fs.azure.account.auth.type", "OAuth") > conf.set("fs.azure.account.oauth.provider.type", > "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider") > conf.set("fs.azure.account.oauth2.client.id", "my-client-id") > // > conf.set("fs.azure.account.oauth2.client.secret.my-account.dfs.core.windows.net", > "my-secret") > conf.set("fs.azure.account.oauth2.client.endpoint", "my-endpoint") > val path = new Path("abfss://contai...@my-account.dfs.core.windows.net/") > val fs = path.getFileSystem(conf) > fs.getFileStatus(path){code} > The following exception is thrown: > {code:java} > TokenAccessProviderException: Unable to load OAuth token provider class. > ... > Caused by: UncheckedExecutionException: java.lang.NullPointerException: > clientSecret > ... > Caused by: NullPointerException: clientSecret {code} > which does not tell what configuration key was not loaded. > > IMHO, it would be good if the exception was something like this: > {code:java} > TokenAccessProviderException: Unable to load OAuth token provider class. > ... > Caused by: ConfigurationPropertyNotFoundException: Configuration property > fs.azure.account.oauth2.client.secret not found. {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
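The improvement requested above amounts to a fail-fast lookup that names the missing key. Here is a minimal sketch of the idea, using a plain Map in place of Hadoop's Configuration; the helper name and exception type are illustrative assumptions, not the API that was actually committed.

```java
import java.util.HashMap;
import java.util.Map;

public class RequiredConfSketch {

    // Hypothetical helper: look up a required key and, if it is absent,
    // fail immediately with a message naming the key -- rather than
    // failing later with an opaque NullPointerException ("clientSecret").
    public static String getRequired(Map<String, String> conf, String key) {
        String value = conf.get(key);
        if (value == null) {
            throw new IllegalArgumentException(
                "Configuration property " + key + " not found.");
        }
        return value;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("fs.azure.account.oauth2.client.id", "my-client-id");
        try {
            getRequired(conf, "fs.azure.account.oauth2.client.secret");
        } catch (IllegalArgumentException e) {
            // The error now names the key that was missing, e.g.
            // "Configuration property fs.azure.account.oauth2.client.secret not found."
            System.out.println(e.getMessage());
        }
    }
}
```

The point of the design is simply to surface the key name at the site where the token provider is constructed, which is what the proposed ConfigurationPropertyNotFoundException in the issue description does.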
[jira] [Resolved] (HADOOP-17631) Configuration ${env.VAR:-FALLBACK} should eval FALLBACK when restrictSystemProps=true
[ https://issues.apache.org/jira/browse/HADOOP-17631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-17631. - Fix Version/s: 3.3.2 Resolution: Fixed > Configuration ${env.VAR:-FALLBACK} should eval FALLBACK when > restrictSystemProps=true > -- > > Key: HADOOP-17631 > URL: https://issues.apache.org/jira/browse/HADOOP-17631 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Labels: pull-request-available > Fix For: 3.3.2 > > Time Spent: 1h > Remaining Estimate: 0h > > When configuration reads in resources with a restricted parser, it skips > evaluating system ${env. } vars. But it also skips evaluating fallbacks. > As a result, a property like > ${env.LOCAL_DIRS:-${hadoop.tmp.dir}} ends up evaluating as > ${env.LOCAL_DIRS:-${hadoop.tmp.dir}} > It should instead fall back to the "env var unset" option of > ${hadoop.tmp.dir}. This allows for configs (like for s3a buffer dirs) which > are usable in restricted mode as well as unrestricted deployments. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
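The fallback behaviour HADOOP-17631 describes can be sketched as follows. This is a minimal model of the intended semantics only, not Hadoop's actual `Configuration` substitution code; the class and method names are hypothetical, and `restricted` stands in for `restrictSystemProps=true`:

```java
import java.util.Map;

/**
 * Sketch of the desired ${env.VAR:-FALLBACK} semantics: even when
 * env-var lookup is disabled (restricted parser), the fallback should
 * be returned rather than the unexpanded expression.
 */
public class EnvFallbackSketch {

    public static String eval(String expr, Map<String, String> env, boolean restricted) {
        if (!expr.startsWith("${env.") || !expr.endsWith("}")) {
            return expr; // not an env-var expression
        }
        String body = expr.substring("${env.".length(), expr.length() - 1);
        int sep = body.indexOf(":-");
        String name = sep >= 0 ? body.substring(0, sep) : body;
        String fallback = sep >= 0 ? body.substring(sep + 2) : null;

        // Restricted mode never reads the environment...
        String value = restricted ? null : env.get(name);
        if (value != null) {
            return value;
        }
        // ...but (per the fix) it still falls through to the fallback,
        // which may itself be a ${property} reference that normal
        // property substitution resolves, e.g. ${hadoop.tmp.dir}.
        return fallback != null ? fallback : expr;
    }
}
```

With this behaviour, `${env.LOCAL_DIRS:-${hadoop.tmp.dir}}` in restricted mode yields `${hadoop.tmp.dir}` for further expansion, instead of the original unexpanded string.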
[jira] [Resolved] (HADOOP-17725) Improve error message for token providers in ABFS
[ https://issues.apache.org/jira/browse/HADOOP-17725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-17725. - Fix Version/s: 3.3.2 Resolution: Fixed > Improve error message for token providers in ABFS > - > > Key: HADOOP-17725 > URL: https://issues.apache.org/jira/browse/HADOOP-17725 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure, hadoop-thirdparty >Affects Versions: 3.3.0 >Reporter: Ivan >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Fix For: 3.3.2 > > Time Spent: 6h 10m > Remaining Estimate: 0h > > It would be good to improve error messages for token providers in ABFS. > Currently, when a configuration key is not found or mistyped, the error is > not very clear on what went wrong. It would be good to indicate that the key > was required but not found in Hadoop configuration when creating a token > provider. > For example, when running the following code: > {code:java} > import org.apache.hadoop.conf._ > import org.apache.hadoop.fs._ > val conf = new Configuration() > conf.set("fs.azure.account.auth.type", "OAuth") > conf.set("fs.azure.account.oauth.provider.type", > "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider") > conf.set("fs.azure.account.oauth2.client.id", "my-client-id") > // > conf.set("fs.azure.account.oauth2.client.secret.my-account.dfs.core.windows.net", > "my-secret") > conf.set("fs.azure.account.oauth2.client.endpoint", "my-endpoint") > val path = new Path("abfss://contai...@my-account.dfs.core.windows.net/") > val fs = path.getFileSystem(conf) > fs.getFileStatus(path){code} > The following exception is thrown: > {code:java} > TokenAccessProviderException: Unable to load OAuth token provider class. > ... > Caused by: UncheckedExecutionException: java.lang.NullPointerException: > clientSecret > ... > Caused by: NullPointerException: clientSecret {code} > which does not tell what configuration key was not loaded. 
> > IMHO, it would be good if the exception was something like this: > {code:java} > TokenAccessProviderException: Unable to load OAuth token provider class. > ... > Caused by: ConfigurationPropertyNotFoundException: Configuration property > fs.azure.account.oauth2.client.secret not found. {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17725) Improve error message for token providers in ABFS
[ https://issues.apache.org/jira/browse/HADOOP-17725?focusedWorklogId=608750&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608750 ] ASF GitHub Bot logged work on HADOOP-17725: --- Author: ASF GitHub Bot Created on: 08/Jun/21 21:04 Start Date: 08/Jun/21 21:04 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #3041: URL: https://github.com/apache/hadoop/pull/3041#issuecomment-857145495 merged into trunk; will cherrypick to branch-3.3 with a recompile to verify all is good. leaving JIRA open until then -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 608750) Time Spent: 6h 10m (was: 6h) > Improve error message for token providers in ABFS > - > > Key: HADOOP-17725 > URL: https://issues.apache.org/jira/browse/HADOOP-17725 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure, hadoop-thirdparty >Affects Versions: 3.3.0 >Reporter: Ivan >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 6h 10m > Remaining Estimate: 0h > > It would be good to improve error messages for token providers in ABFS. > Currently, when a configuration key is not found or mistyped, the error is > not very clear on what went wrong. It would be good to indicate that the key > was required but not found in Hadoop configuration when creating a token > provider. 
> For example, when running the following code: > {code:java} > import org.apache.hadoop.conf._ > import org.apache.hadoop.fs._ > val conf = new Configuration() > conf.set("fs.azure.account.auth.type", "OAuth") > conf.set("fs.azure.account.oauth.provider.type", > "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider") > conf.set("fs.azure.account.oauth2.client.id", "my-client-id") > // > conf.set("fs.azure.account.oauth2.client.secret.my-account.dfs.core.windows.net", > "my-secret") > conf.set("fs.azure.account.oauth2.client.endpoint", "my-endpoint") > val path = new Path("abfss://contai...@my-account.dfs.core.windows.net/") > val fs = path.getFileSystem(conf) > fs.getFileStatus(path){code} > The following exception is thrown: > {code:java} > TokenAccessProviderException: Unable to load OAuth token provider class. > ... > Caused by: UncheckedExecutionException: java.lang.NullPointerException: > clientSecret > ... > Caused by: NullPointerException: clientSecret {code} > which does not tell what configuration key was not loaded. > > IMHO, it would be good if the exception was something like this: > {code:java} > TokenAccessProviderException: Unable to load OAuth token provider class. > ... > Caused by: ConfigurationPropertyNotFoundException: Configuration property > fs.azure.account.oauth2.client.secret not found. {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17725) Improve error message for token providers in ABFS
[ https://issues.apache.org/jira/browse/HADOOP-17725?focusedWorklogId=608749&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608749 ] ASF GitHub Bot logged work on HADOOP-17725: --- Author: ASF GitHub Bot Created on: 08/Jun/21 21:03 Start Date: 08/Jun/21 21:03 Worklog Time Spent: 10m Work Description: steveloughran merged pull request #3041: URL: https://github.com/apache/hadoop/pull/3041 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 608749) Time Spent: 6h (was: 5h 50m) > Improve error message for token providers in ABFS > - > > Key: HADOOP-17725 > URL: https://issues.apache.org/jira/browse/HADOOP-17725 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure, hadoop-thirdparty >Affects Versions: 3.3.0 >Reporter: Ivan >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > Time Spent: 6h > Remaining Estimate: 0h > > It would be good to improve error messages for token providers in ABFS. > Currently, when a configuration key is not found or mistyped, the error is > not very clear on what went wrong. It would be good to indicate that the key > was required but not found in Hadoop configuration when creating a token > provider. 
> For example, when running the following code: > {code:java} > import org.apache.hadoop.conf._ > import org.apache.hadoop.fs._ > val conf = new Configuration() > conf.set("fs.azure.account.auth.type", "OAuth") > conf.set("fs.azure.account.oauth.provider.type", > "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider") > conf.set("fs.azure.account.oauth2.client.id", "my-client-id") > // > conf.set("fs.azure.account.oauth2.client.secret.my-account.dfs.core.windows.net", > "my-secret") > conf.set("fs.azure.account.oauth2.client.endpoint", "my-endpoint") > val path = new Path("abfss://contai...@my-account.dfs.core.windows.net/") > val fs = path.getFileSystem(conf) > fs.getFileStatus(path){code} > The following exception is thrown: > {code:java} > TokenAccessProviderException: Unable to load OAuth token provider class. > ... > Caused by: UncheckedExecutionException: java.lang.NullPointerException: > clientSecret > ... > Caused by: NullPointerException: clientSecret {code} > which does not tell what configuration key was not loaded. > > IMHO, it would be good if the exception was something like this: > {code:java} > TokenAccessProviderException: Unable to load OAuth token provider class. > ... > Caused by: ConfigurationPropertyNotFoundException: Configuration property > fs.azure.account.oauth2.client.secret not found. {code} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on a change in pull request #2971: MAPREDUCE-7341. Intermediate Manifest Committer
steveloughran commented on a change in pull request #2971: URL: https://github.com/apache/hadoop/pull/2971#discussion_r647789479 ## File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/committer/manifest/ManifestCommitter.java ## @@ -0,0 +1,712 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.mapreduce.lib.output.committer.manifest; + +import java.io.IOException; +import java.util.Objects; +import java.util.concurrent.ExecutorService; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.commons.lang3.tuple.Pair; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.FileSystem; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.statistics.IOStatisticsSource; +import org.apache.hadoop.fs.statistics.impl.IOStatisticsStore; +import org.apache.hadoop.mapreduce.JobContext; +import org.apache.hadoop.mapreduce.JobStatus; +import org.apache.hadoop.mapreduce.TaskAttemptContext; +import org.apache.hadoop.mapreduce.TaskAttemptID; +import org.apache.hadoop.mapreduce.lib.output.PathOutputCommitter; +import org.apache.hadoop.mapreduce.lib.output.committer.manifest.files.ManifestSuccessData; +import org.apache.hadoop.mapreduce.lib.output.committer.manifest.files.TaskManifest; +import org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting; +import org.apache.hadoop.thirdparty.com.google.common.util.concurrent.ThreadFactoryBuilder; +import org.apache.hadoop.util.Progressable; +import org.apache.hadoop.util.concurrent.HadoopExecutors; + +import static org.apache.hadoop.fs.statistics.IOStatisticsLogging.ioStatisticsToPrettyString; +import static org.apache.hadoop.fs.statistics.IOStatisticsLogging.logIOStatisticsAtDebug; +import static org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.SUCCESSFUL_JOB_OUTPUT_DIR_MARKER; +import static org.apache.hadoop.mapreduce.lib.output.committer.manifest.CleanupJobStage.optionsFromConfig; +import static org.apache.hadoop.mapreduce.lib.output.committer.manifest.ManifestCommitterConstants.DEFAULT_CREATE_SUCCESSFUL_JOB_DIR_MARKER; +import static org.apache.hadoop.mapreduce.lib.output.committer.manifest.ManifestCommitterConstants.MANIFEST_SUFFIX; +import static 
org.apache.hadoop.mapreduce.lib.output.committer.manifest.ManifestCommitterConstants.OPT_IO_PROCESSORS; +import static org.apache.hadoop.mapreduce.lib.output.committer.manifest.ManifestCommitterConstants.OPT_IO_PROCESSORS_DEFAULT; +import static org.apache.hadoop.mapreduce.lib.output.committer.manifest.ManifestCommitterConstants.OPT_VALIDATE_OUTPUT; +import static org.apache.hadoop.mapreduce.lib.output.committer.manifest.ManifestCommitterConstants.OPT_VALIDATE_OUTPUT_DEFAULT; +import static org.apache.hadoop.mapreduce.lib.output.committer.manifest.ManifestCommitterStatisticNames.COMMITTER_TASKS_COMPLETED; +import static org.apache.hadoop.mapreduce.lib.output.committer.manifest.ManifestCommitterSupport.buildJobUUID; +import static org.apache.hadoop.mapreduce.lib.output.committer.manifest.ManifestCommitterSupport.createIOStatisticsStore; +import static org.apache.hadoop.mapreduce.lib.output.committer.manifest.ManifestCommitterSupport.getAppAttemptId; + +/** + * This is the Intermediate-Manifest committer. + */ +public class ManifestCommitter extends PathOutputCommitter implements +IOStatisticsSource { + + public static final Logger LOG = LoggerFactory.getLogger( + ManifestCommitter.class); + + public static final String TASK_COMMITTER = "task committer"; + + public static final String JOB_COMMITTER = "job committer"; + + /** + * Committer Configuration as extracted from + * the job/task context and set in the constructor. + * + */ + private final ManifestCommitterConfig baseConfig; + + /** + * Destination of the job. + */ + private final Path destinationDir; + + /** + * For tasks, the attempt directory. + * Null for jobs. + */ + private final Path taskAttemptDir; + + /** + * IOStatistics to update. + */ + private final IOStatisticsStore iostatistics; + + /** + * The job Manifest Success
[GitHub] [hadoop] steveloughran commented on a change in pull request #2971: MAPREDUCE-7341. Intermediate Manifest Committer
steveloughran commented on a change in pull request #2971: URL: https://github.com/apache/hadoop/pull/2971#discussion_r647788743 ## File path: hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/output/committer/manifest/CommitJobStage.java ## @@ -0,0 +1,112 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.mapreduce.lib.output.committer.manifest; + +import java.io.IOException; +import java.util.List; + +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.commons.lang3.tuple.Pair; +import org.apache.hadoop.fs.Path; +import org.apache.hadoop.fs.statistics.impl.IOStatisticsStore; +import org.apache.hadoop.mapreduce.lib.output.committer.manifest.files.ManifestSuccessData; +import org.apache.hadoop.mapreduce.lib.output.committer.manifest.files.TaskManifest; + +import static org.apache.commons.io.FileUtils.byteCountToDisplaySize; +import static org.apache.hadoop.mapreduce.lib.output.committer.manifest.ManifestCommitterStatisticNames.OP_JOB_COMMITTED_BYTES; +import static org.apache.hadoop.mapreduce.lib.output.committer.manifest.ManifestCommitterStatisticNames.OP_JOB_COMMITTED_FILES; +import static org.apache.hadoop.mapreduce.lib.output.committer.manifest.ManifestCommitterStatisticNames.OP_STAGE_JOB_COMMIT; + +/** + * Commit the Job. + * Arguments (save manifest, validate output) + */ Review comment: I agree: will do -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17631) Configuration ${env.VAR:-FALLBACK} should eval FALLBACK when restrictSystemProps=true
[ https://issues.apache.org/jira/browse/HADOOP-17631?focusedWorklogId=608745&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608745 ] ASF GitHub Bot logged work on HADOOP-17631: --- Author: ASF GitHub Bot Created on: 08/Jun/21 20:56 Start Date: 08/Jun/21 20:56 Worklog Time Spent: 10m Work Description: steveloughran merged pull request #2977: URL: https://github.com/apache/hadoop/pull/2977 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 608745) Time Spent: 1h (was: 50m) > Configuration ${env.VAR:-FALLBACK} should eval FALLBACK when > restrictSystemProps=true > -- > > Key: HADOOP-17631 > URL: https://issues.apache.org/jira/browse/HADOOP-17631 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > When configuration reads in resources with a restricted parser, it skips > evaluating system ${env. } vars. But it also skips evaluating fallbacks. > As a result, a property like > ${env.LOCAL_DIRS:-${hadoop.tmp.dir}} ends up evaluating as > ${env.LOCAL_DIRS:-${hadoop.tmp.dir}} > It should instead fall back to the "env var unset" option of > ${hadoop.tmp.dir}. This allows for configs (like for s3a buffer dirs) which > are usable in restricted mode as well as unrestricted deployments. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16878) FileUtil.copy() to throw IOException if the source and destination are the same
[ https://issues.apache.org/jira/browse/HADOOP-16878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17359564#comment-17359564 ] Steve Loughran commented on HADOOP-16878: - ok, but if I do a command on the CLI, isn't it only going to find one of these? > FileUtil.copy() to throw IOException if the source and destination are the > same > --- > > Key: HADOOP-16878 > URL: https://issues.apache.org/jira/browse/HADOOP-16878 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.3.0 >Reporter: Gabor Bota >Assignee: Gabor Bota >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0 > > Attachments: hdfsTest.patch > > Time Spent: 2h 10m > Remaining Estimate: 0h > > We encountered an error during a test in our QE when the file destination and > source path were the same. This happened during an ADLS test, and there were > no meaningful error messages, so it was hard to find the root cause of the > failure. > The error we saw was that file size has changed during the copy operation. > The new file creation in the destination - which is the same as the source - > creates a file and sets the file length to zero. After this, getting the > source file will fail because the file size changed during the operation. > I propose a solution to at least log at error level in the {{FileUtil}} if > the source and destination of the copy operation are the same, so debugging > issues like this will be easier in the future. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
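A fail-fast guard in the spirit of HADOOP-16878 might look like the following. This is an illustration only, not `FileUtil.copy()`'s actual code: it compares normalized local paths, whereas FileUtil works with Hadoop `FileSystem` paths, and the class/method names are hypothetical:

```java
import java.io.IOException;
import java.nio.file.Path;

/**
 * Hypothetical pre-copy check: refuse to copy a file onto itself
 * instead of silently truncating the source.
 */
public class CopyGuardSketch {

    public static void checkNotSelfCopy(Path src, Path dst) throws IOException {
        if (src.toAbsolutePath().normalize().equals(dst.toAbsolutePath().normalize())) {
            // Creating dst would set its length to zero, so the later
            // "file size changed during copy" failure is inevitable;
            // fail early with a meaningful message instead.
            throw new IOException("Source and destination are the same: " + src);
        }
    }
}
```

Run before the destination file is created, a check like this turns the confusing "file size changed" failure into an immediate, self-explanatory error.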
[jira] [Commented] (HADOOP-11461) Namenode stdout log contains IllegalAccessException
[ https://issues.apache.org/jira/browse/HADOOP-11461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17359563#comment-17359563 ] Steve Loughran commented on HADOOP-11461: - bq. Should I be worried? no, though its a warning of a version of something (jersey) not being happy on java 8. Why not move up to Hadoop 3? > Namenode stdout log contains IllegalAccessException > --- > > Key: HADOOP-11461 > URL: https://issues.apache.org/jira/browse/HADOOP-11461 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 2.7.0 >Reporter: Mohammad Islam >Assignee: Mohammad Islam >Priority: Major > > We frequently see the following exception in namenode out log file. > {noformat} > Nov 19, 2014 8:11:19 PM > com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator > attachTypes > INFO: Couldn't find JAX-B element for class javax.ws.rs.core.Response > Nov 19, 2014 8:11:19 PM > com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8 > resolve > SEVERE: null > java.lang.IllegalAccessException: Class > com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8 can > not access a member of class javax.ws.rs.co > re.Response with modifiers "protected" > at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:109) > at java.lang.Class.newInstance(Class.java:368) > at > com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator$8.resolve(WadlGeneratorJAXBGrammarGenerator.java:467) > at > com.sun.jersey.server.wadl.WadlGenerator$ExternalGrammarDefinition.resolve(WadlGenerator.java:181) > at > com.sun.jersey.server.wadl.ApplicationDescription.resolve(ApplicationDescription.java:81) > at > com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.attachTypes(WadlGeneratorJAXBGrammarGenerator.java:518) > at com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:124) > at > 
com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:104) > at > com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:120) > at > com.sun.jersey.server.impl.wadl.WadlMethodFactory$WadlOptionsMethodDispatcher.dispatch(WadlMethodFactory.java:98) > at > com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288) > at > com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) > at > com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) > at > com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) > at > com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469) > at > com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400) > at > com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349) > at > com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339) > at > com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416) > at > com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537) > at > com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:820) > at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221) > at > org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:384) > at org.apache.hadoop.hdfs.web.AuthFilter.doFilter(AuthFilter.java:85) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at > 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1183) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45) > at > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212) > at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399) > at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) > at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182) > at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766) > at org.mor
[jira] [Work logged] (HADOOP-17596) ABFS: Change default Readahead Queue Depth from num(processors) to const
[ https://issues.apache.org/jira/browse/HADOOP-17596?focusedWorklogId=608720&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608720 ] ASF GitHub Bot logged work on HADOOP-17596: --- Author: ASF GitHub Bot Created on: 08/Jun/21 20:31 Start Date: 08/Jun/21 20:31 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2795: URL: https://github.com/apache/hadoop/pull/2795#issuecomment-857108374 1. Now this has been merged, the JIRA MUST be updated with fix version 2. Are there any plans to backport to branch-3.3? A retest of the cherrypick is all which should be needed Keeping both branches in sync is essential for cherrypicking future work -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 608720) Time Spent: 3.5h (was: 3h 20m) > ABFS: Change default Readahead Queue Depth from num(processors) to const > > > Key: HADOOP-17596 > URL: https://issues.apache.org/jira/browse/HADOOP-17596 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.1 >Reporter: Sumangala Patki >Assignee: Sumangala Patki >Priority: Major > Labels: pull-request-available > Time Spent: 3.5h > Remaining Estimate: 0h > > The default value of readahead queue depth is currently set to the number of > available processors. However, this can result in one inputstream instance > consuming more processor time. To ensure equal thread allocation during read > for all inputstreams created in a session, we change the default readahead > queue depth to a constant (2). -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on pull request #2795: HADOOP-17596. ABFS: Change default Readahead Queue Depth from num(processors) to const
steveloughran commented on pull request #2795: URL: https://github.com/apache/hadoop/pull/2795#issuecomment-857108374 1. Now this has been merged, the JIRA MUST be updated with the fix version. 2. Are there any plans to backport to branch-3.3? A retest of the cherrypick is all that should be needed. Keeping both branches in sync is essential for cherrypicking future work.
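For context, the rationale in HADOOP-17596 (a per-stream readahead queue depth equal to the processor count lets one input stream occupy the whole shared worker pool) can be illustrated with a toy model. The class and method below are invented for illustration and are not Hadoop code:

```java
/**
 * Toy model (not Hadoop code) of why the default ABFS readahead queue
 * depth was changed from the processor count to a constant 2: with a
 * shared pool of numProcessors readahead workers, a single input stream
 * whose queue depth equals the pool size can monopolize every worker,
 * starving the other streams opened in the same session.
 */
public class ReadaheadDepthDemo {
    /** Workers one stream can occupy: bounded by its queue depth and the pool size. */
    static int workersOccupiedByOneStream(int poolSize, int queueDepth) {
        return Math.min(poolSize, queueDepth);
    }

    public static void main(String[] args) {
        int pool = Runtime.getRuntime().availableProcessors();
        // Old default: depth == pool size, so one stream can take the whole pool.
        System.out.println(workersOccupiedByOneStream(pool, pool) == pool);
        // New default: a constant 2 caps any single stream's share.
        System.out.println(workersOccupiedByOneStream(pool, 2) <= 2);
    }
}
```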
[GitHub] [hadoop] smengcl commented on pull request #3078: HDFS-16055. Quota is not preserved in snapshot INode
smengcl commented on pull request #3078: URL: https://github.com/apache/hadoop/pull/3078#issuecomment-857033639 The cause of the UT failure `testRenameDirAndDeleteSnapshot_1` is that features in the stream are returned as **references**. While this is fine for `AclFeature` or `XAttrFeature`, a `DirectoryWithQuotaFeature` gets updated whenever namespace or space usage changes. Hence the snapshot inode sharing the same `DirectoryWithQuotaFeature` as the original directory inode becomes a problem in this test case, which compares the quota usage before and after NN restart. Updated the patch to create a copy of the `DirectoryWithQuotaFeature`.
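The fix described above, storing a copy of the quota feature rather than the shared reference, can be illustrated with a minimal sketch. The class names here are stand-ins invented for illustration, not the actual HDFS types:

```java
/**
 * Minimal model of the HDFS-16055 bug: if a snapshot inode stores a
 * *reference* to the live directory's mutable quota-usage feature,
 * later changes on the live directory bleed into the snapshot. Saving
 * a defensive copy freezes the snapshot's view.
 */
public class QuotaFeatureCopyDemo {
    /** Stand-in for DirectoryWithQuotaFeature: a mutable usage counter. */
    static class QuotaFeature {
        long namespaceUsed;
        QuotaFeature(long ns) { this.namespaceUsed = ns; }
        QuotaFeature copy() { return new QuotaFeature(namespaceUsed); }
    }

    public static void main(String[] args) {
        QuotaFeature live = new QuotaFeature(10);

        QuotaFeature snapshotByRef = live;         // buggy: shared reference
        QuotaFeature snapshotByCopy = live.copy(); // fix: defensive copy

        live.namespaceUsed += 5; // the live directory keeps changing

        System.out.println(snapshotByRef.namespaceUsed);  // drifted with the live dir
        System.out.println(snapshotByCopy.namespaceUsed); // stable snapshot view
    }
}
```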
[GitHub] [hadoop] aajisaka commented on pull request #3082: [Do not commit] Exclude JSON files from RAT check
aajisaka commented on pull request #3082: URL: https://github.com/apache/hadoop/pull/3082#issuecomment-856968401 It is a duplicate of HADOOP-17750 (#3083). Thanks.
[GitHub] [hadoop] hadoop-yetus commented on pull request #3073: HDFS-16054. Replace Guava Lists usage by Hadoop's own Lists in hadoop-hdfs-project
hadoop-yetus commented on pull request #3073: URL: https://github.com/apache/hadoop/pull/3073#issuecomment-856948626 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 19m 56s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 2s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 69 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 13m 48s | | Maven dependency ordering for branch | | -1 :x: | mvninstall | 13m 27s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3073/3/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. | | +1 :green_heart: | compile | 5m 14s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 4m 46s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -0 :warning: | checkstyle | 0m 17s | [/buildtool-branch-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3073/3/artifact/out/buildtool-branch-checkstyle-hadoop-hdfs-project.txt) | The patch fails to run checkstyle in hadoop-hdfs-project | | -1 :x: | mvnsite | 0m 52s | [/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3073/3/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-client.txt) | hadoop-hdfs-client in trunk failed. | | -1 :x: | mvnsite | 1m 15s | [/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3073/3/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in trunk failed. 
| | -1 :x: | mvnsite | 0m 27s | [/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3073/3/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt) | hadoop-hdfs-httpfs in trunk failed. | | -1 :x: | mvnsite | 0m 25s | [/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-nfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3073/3/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-nfs.txt) | hadoop-hdfs-nfs in trunk failed. | | -1 :x: | mvnsite | 0m 36s | [/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3073/3/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in trunk failed. | | -1 :x: | javadoc | 0m 18s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3073/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs-client in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | javadoc | 0m 18s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3073/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | javadoc | 0m 20s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-httpfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3073/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-httpfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs-httpfs in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. 
| | -1 :x: | javadoc | 0m 19s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-nfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3073/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-nfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs-nfs in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | javadoc | 0m 18s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3073/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs-rbf in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubu
[GitHub] [hadoop] hadoop-yetus commented on pull request #3084: HDFS-16057. Make sure the order for location in ENTERING_MAINTENANCE …
hadoop-yetus commented on pull request #3084: URL: https://github.com/apache/hadoop/pull/3084#issuecomment-856924189 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 8m 54s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 30m 55s | | trunk passed | | +1 :green_heart: | compile | 1m 23s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 18s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 1m 2s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 25s | | trunk passed | | +1 :green_heart: | javadoc | 0m 57s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 29s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 5s | | trunk passed | | +1 :green_heart: | shadedclient | 16m 2s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 13s | | the patch passed | | +1 :green_heart: | compile | 1m 14s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 14s | | the patch passed | | +1 :green_heart: | compile | 1m 9s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 1m 9s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. 
| | -0 :warning: | checkstyle | 0m 52s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3084/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs-project/hadoop-hdfs: The patch generated 5 new + 38 unchanged - 0 fixed = 43 total (was 38) | | +1 :green_heart: | mvnsite | 1m 13s | | the patch passed | | +1 :green_heart: | javadoc | 0m 47s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 22s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | spotbugs | 3m 6s | | the patch passed | | +1 :green_heart: | shadedclient | 15m 46s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 230m 36s | | hadoop-hdfs in the patch passed. | | -1 :x: | asflicense | 0m 46s | [/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3084/2/artifact/out/results-asflicense.txt) | The patch generated 2 ASF License warnings. 
| | | | 322m 38s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3084/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3084 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 192abf48494c 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 4504b6617c67e31addb95d511d8f646180b13270 | | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3084/2/testReport/ | | Max. process+thread count | 3413 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3084/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] kihwal merged pull request #3058: HDFS-16042. DatanodeAdminMonitor scan should be delay based
kihwal merged pull request #3058: URL: https://github.com/apache/hadoop/pull/3058
[GitHub] [hadoop] amahussein commented on pull request #3058: HDFS-16042. DatanodeAdminMonitor scan should be delay based
amahussein commented on pull request #3058: URL: https://github.com/apache/hadoop/pull/3058#issuecomment-856894824 @kihwal and @jbrennan333, I have run the three unit tests `TestDecommissioningStatus`, `TestDFSShell`, and `TestDecommissioningStatusWithBackoffMonitor` in a loop for a couple of hours in the container image without getting failures. I also tested them on OS X and could not reproduce the failures.
[GitHub] [hadoop] aajisaka commented on a change in pull request #3065: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
aajisaka commented on a change in pull request #3065: URL: https://github.com/apache/hadoop/pull/3065#discussion_r647396463 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java ## @@ -1996,7 +1996,12 @@ private void metaSave(PrintWriter out) { LightWeightHashSet<Long> openFileIds = new LightWeightHashSet<>(); for (DatanodeDescriptor dataNode : blockManager.getDatanodeManager().getDatanodes()) { - for (long ucFileId : dataNode.getLeavingServiceStatus().getOpenFiles()) { + // Sort open files + LightWeightHashSet<Long> dnOpenFiles = + dataNode.getLeavingServiceStatus().getOpenFiles(); + Long[] dnOpenFileIds = new Long[dnOpenFiles.size()]; + Arrays.sort(dnOpenFiles.toArray(dnOpenFileIds)); + for (Long ucFileId : dnOpenFileIds) { INode ucFile = getFSDirectory().getInode(ucFileId); if (ucFile == null || ucFileId <= prevId || openFileIds.contains(ucFileId)) { Review comment: Agreed. It must be sorted. I think HDFS-11847 has a bug. If the DataNodes have the following open files and we want to list all the open files: DN1: [1001, 1002, 1003, ... , 2000] DN2: [1, 2, 3, ... , 1000] At first `getFilesBlockingDecom(0, "/")` is called and it returns [1001, 1002, ... , 2000] because it reached the max size (=1000), and next `getFilesBlockingDecom(2000, "/")` is called because the last inode id of the previous result is 2000. That way the open files of DN2 are missed.
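The scenario described above can be reproduced with a small self-contained sketch of cursor-based batching over per-DataNode id lists. The helper below is invented for illustration and is a simplification of the real `getFilesBlockingDecom` logic:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Toy reproduction of the pagination hazard: a cursor-based listing that
 * filters ids <= prevId only works if the ids are globally sorted. If DN1
 * reports [1001..2000] and DN2 reports [1..1000], the first batch fills up
 * with DN1's ids, and the next call with cursor 2000 skips all of DN2's
 * (smaller) ids.
 */
public class DecomListingDemo {
    static final int BATCH = 1000;

    /** Return up to BATCH distinct ids > prevId, scanning per-DN lists in order. */
    static List<Long> listBatch(List<List<Long>> perDatanodeIds, long prevId) {
        List<Long> out = new ArrayList<>();
        for (List<Long> dnIds : perDatanodeIds) {
            for (long id : dnIds) {
                if (id <= prevId || out.contains(id)) {
                    continue; // mimics the ucFileId <= prevId cursor filter
                }
                out.add(id);
                if (out.size() >= BATCH) {
                    return out;
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Long> dn1 = new ArrayList<>();
        for (long i = 1001; i <= 2000; i++) dn1.add(i);
        List<Long> dn2 = new ArrayList<>();
        for (long i = 1; i <= 1000; i++) dn2.add(i);
        List<List<Long>> dns = List.of(dn1, dn2);

        List<Long> first = listBatch(dns, 0);       // fills with DN1's ids
        long cursor = first.get(first.size() - 1);  // cursor advances to 2000
        List<Long> second = listBatch(dns, cursor); // empty: DN2's files are missed

        System.out.println(first.size());
        System.out.println(second.size());
    }
}
```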
[GitHub] [hadoop] ferhui commented on a change in pull request #3065: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
ferhui commented on a change in pull request #3065: URL: https://github.com/apache/hadoop/pull/3065#discussion_r647387834 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java ## @@ -1996,7 +1996,12 @@ private void metaSave(PrintWriter out) { LightWeightHashSet<Long> openFileIds = new LightWeightHashSet<>(); for (DatanodeDescriptor dataNode : blockManager.getDatanodeManager().getDatanodes()) { - for (long ucFileId : dataNode.getLeavingServiceStatus().getOpenFiles()) { + // Sort open files + LightWeightHashSet<Long> dnOpenFiles = + dataNode.getLeavingServiceStatus().getOpenFiles(); + Long[] dnOpenFileIds = new Long[dnOpenFiles.size()]; + Arrays.sort(dnOpenFiles.toArray(dnOpenFileIds)); + for (Long ucFileId : dnOpenFileIds) { INode ucFile = getFSDirectory().getInode(ucFileId); if (ucFile == null || ucFileId <= prevId || openFileIds.contains(ucFileId)) { Review comment: @aajisaka Thanks for the comments. The previous behavior is that the inode ids are sorted for each datanode. Here the if clause (ucFileId <= prevId) is the key point: it affects the remote iterator, and if the ids are not sorted, some open files may be missing. This function, getFilesBlockingDecom, was added by HDFS-11847.
[GitHub] [hadoop] aajisaka closed pull request #3059: #HDFS-13729 Removed extra space
aajisaka closed pull request #3059: URL: https://github.com/apache/hadoop/pull/3059
[GitHub] [hadoop] aajisaka commented on pull request #3068: YARN-10803. [JDK 11] TestRMFailoverProxyProvider and TestNoHaRMFailoverProxyProvier fails by ClassCastException
aajisaka commented on pull request #3068: URL: https://github.com/apache/hadoop/pull/3068#issuecomment-856713356 Thank you @bogthe! @tasanuma Would you review this?
[GitHub] [hadoop] aajisaka commented on a change in pull request #3065: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
aajisaka commented on a change in pull request #3065: URL: https://github.com/apache/hadoop/pull/3065#discussion_r647376745 ## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java ## @@ -1996,7 +1996,12 @@ private void metaSave(PrintWriter out) { LightWeightHashSet<Long> openFileIds = new LightWeightHashSet<>(); for (DatanodeDescriptor dataNode : blockManager.getDatanodeManager().getDatanodes()) { - for (long ucFileId : dataNode.getLeavingServiceStatus().getOpenFiles()) { + // Sort open files + LightWeightHashSet<Long> dnOpenFiles = + dataNode.getLeavingServiceStatus().getOpenFiles(); + Long[] dnOpenFileIds = new Long[dnOpenFiles.size()]; + Arrays.sort(dnOpenFiles.toArray(dnOpenFileIds)); + for (Long ucFileId : dnOpenFileIds) { Review comment: @ferhui Thank you for your comment. The change makes sense to keep the previous behavior. Now I have a question about the previous (and current) behavior. https://github.com/apache/hadoop/blob/85517df11ae33ab3a06654d40a1ef4d8eae013e3/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L1997-L1998 If there are multiple DataNodes with open files, are the inode IDs really sorted?
[GitHub] [hadoop] hadoop-yetus commented on pull request #3013: TestDFSMkdir fails with multiple partitions
hadoop-yetus commented on pull request #3013: URL: https://github.com/apache/hadoop/pull/3013#issuecomment-856704328 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 10m 24s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ fgl Compile Tests _ | | -1 :x: | mvninstall | 3m 28s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3013/2/artifact/out/branch-mvninstall-root.txt) | root in fgl failed. | | -1 :x: | compile | 0m 10s | [/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3013/2/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs in fgl failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | compile | 0m 10s | [/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3013/2/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-hdfs in fgl failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. 
| | -0 :warning: | checkstyle | 1m 22s | [/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3013/2/artifact/out/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | The patch fails to run checkstyle in hadoop-hdfs | | -1 :x: | mvnsite | 0m 14s | [/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3013/2/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in fgl failed. | | -1 :x: | javadoc | 0m 15s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3013/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs in fgl failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | javadoc | 0m 10s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3013/2/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-hdfs in fgl failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. | | -1 :x: | spotbugs | 0m 10s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3013/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in fgl failed. | | -1 :x: | shadedclient | 2m 33s | | branch has errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 10s | [/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3013/2/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. 
| | -1 :x: | compile | 0m 11s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3013/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | javac | 0m 10s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3013/2/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | compile | 0m 10s | [/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/
[GitHub] [hadoop] aajisaka commented on a change in pull request #3065: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
aajisaka commented on a change in pull request #3065:
URL: https://github.com/apache/hadoop/pull/3065#discussion_r647363773

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java

```diff
@@ -3111,106 +3042,127 @@ void processFirstBlockReport(
     }
   }
 
-  private void reportDiffSorted(DatanodeStorageInfo storageInfo,
-      Iterable newReport,
+  private void reportDiff(DatanodeStorageInfo storageInfo,
+      BlockListAsLongs newReport,
       Collection toAdd,        // add to DatanodeDescriptor
       Collection toRemove,     // remove from DatanodeDescriptor
       Collection toInvalidate, // should be removed from DN
       Collection toCorrupt,    // add to corrupt replicas list
       Collection toUC) {       // add to under-construction list
-    // The blocks must be sorted and the storagenodes blocks must be sorted
-    Iterator storageBlocksIterator = storageInfo.getBlockIterator();
+    // place a delimiter in the list which separates blocks
+    // that have been reported from those that have not
     DatanodeDescriptor dn = storageInfo.getDatanodeDescriptor();
-    BlockInfo storageBlock = null;
-
-    for (BlockReportReplica replica : newReport) {
-
-      long replicaID = replica.getBlockId();
-      if (BlockIdManager.isStripedBlockID(replicaID)
-          && (!hasNonEcBlockUsingStripedID ||
-              !blocksMap.containsBlock(replica))) {
-        replicaID = BlockIdManager.convertToStripedID(replicaID);
-      }
-
-      ReplicaState reportedState = replica.getState();
-
-      LOG.debug("Reported block {} on {} size {} replicaState = {}",
-          replica, dn, replica.getNumBytes(), reportedState);
-
-      if (shouldPostponeBlocksFromFuture
-          && isGenStampInFuture(replica)) {
-        queueReportedBlock(storageInfo, replica, reportedState,
-            QUEUE_REASON_FUTURE_GENSTAMP);
-        continue;
-      }
-
-      if (storageBlock == null && storageBlocksIterator.hasNext()) {
-        storageBlock = storageBlocksIterator.next();
-      }
-
-      do {
-        int cmp;
-        if (storageBlock == null ||
-            (cmp = Long.compare(replicaID, storageBlock.getBlockId())) < 0) {
-          // Check if block is available in NN but not yet on this storage
-          BlockInfo nnBlock = blocksMap.getStoredBlock(new Block(replicaID));
-          if (nnBlock != null) {
-            reportDiffSortedInner(storageInfo, replica, reportedState,
-                nnBlock, toAdd, toCorrupt, toUC);
-          } else {
-            // Replica not found anywhere so it should be invalidated
-            toInvalidate.add(new Block(replica));
-          }
-          break;
-        } else if (cmp == 0) {
-          // Replica matched current storageblock
-          reportDiffSortedInner(storageInfo, replica, reportedState,
-              storageBlock, toAdd, toCorrupt, toUC);
-          storageBlock = null;
-        } else {
-          // replica has higher ID than storedBlock
-          // Remove all stored blocks with IDs lower than replica
-          do {
-            toRemove.add(storageBlock);
-            storageBlock = storageBlocksIterator.hasNext()
-                ? storageBlocksIterator.next() : null;
-          } while (storageBlock != null &&
-              Long.compare(replicaID, storageBlock.getBlockId()) > 0);
+    Block delimiterBlock = new Block();
+    BlockInfo delimiter = new BlockInfoContiguous(delimiterBlock,
+        (short) 1);
+    AddBlockResult result = storageInfo.addBlock(delimiter, delimiterBlock);
+    assert result == AddBlockResult.ADDED
+        : "Delimiting block cannot be present in the node";
+    int headIndex = 0; //currently the delimiter is in the head of the list
+    int curIndex;
+
+    if (newReport == null) {
+      newReport = BlockListAsLongs.EMPTY;
+    }
+    // scan the report and process newly reported blocks
+    for (BlockReportReplica iblk : newReport) {
+      ReplicaState iState = iblk.getState();
+      LOG.debug("Reported block {} on {} size {} replicaState = {}", iblk, dn,
+          iblk.getNumBytes(), iState);
+      BlockInfo storedBlock = processReportedBlock(storageInfo,
+          iblk, iState, toAdd, toInvalidate, toCorrupt, toUC);
+
+      // move block to the head of the list
+      if (storedBlock != null) {
+        curIndex = storedBlock.findStorageInfo(storageInfo);
+        if (curIndex >= 0) {
+          headIndex =
+              storageInfo.moveBlockToHead(storedBlock, curIndex, headIndex);
         }
-      } while (storageBlock != null);
+      }
     }
-    // Iterate any remaining blocks that have not been reported and remove them
-    while (storageBlocksIterator.hasNext()) {
-      toRemove.add(storageBlocksIterator.next());
+    // collect blocks that have not been reported
+    // all of them are next to the delimiter
+    Iterator it =
```
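The delimiter-based reportDiff restored in the hunk above can be illustrated in isolation. The following is a simplified sketch, not the Hadoop code itself: blocks are plain strings, a sentinel delimiter is placed at the head of the storage's block list, every reported block is moved ahead of it, and whatever remains behind the delimiter was never reported (the toRemove set). The class and method names here are invented for the example.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedList;
import java.util.List;

public class ReportDiffSketch {
    // Simplified model of the delimiter technique: storageList holds the
    // blocks currently recorded for one storage, report is the block report.
    static List<String> unreported(List<String> storageList, List<String> report) {
        LinkedList<String> list = new LinkedList<>(storageList);
        String delimiter = "DELIMITER";
        list.addFirst(delimiter);            // delimiter starts at the head
        for (String reported : report) {
            if (list.remove(reported)) {     // block is known to this storage
                list.addFirst(reported);     // move it ahead of the delimiter
            }
            // unknown blocks would go to toAdd/toInvalidate in the real code
        }
        // everything after the delimiter was never reported -> toRemove
        int d = list.indexOf(delimiter);
        return new ArrayList<>(list.subList(d + 1, list.size()));
    }

    public static void main(String[] args) {
        List<String> stored = Arrays.asList("b1", "b2", "b3");
        List<String> report = Arrays.asList("b1", "b3");
        System.out.println(unreported(stored, report)); // [b2]
    }
}
```

The real code does the same move-to-head on an intrusive per-storage linked list (DatanodeStorageInfo#moveBlockToHead), which makes the pass O(report size) instead of requiring both sides to be sorted.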
[jira] [Work logged] (HADOOP-17749) Remove lock contention in SelectorPool of SocketIOWithTimeout
[ https://issues.apache.org/jira/browse/HADOOP-17749?focusedWorklogId=608418&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608418 ] ASF GitHub Bot logged work on HADOOP-17749: --- Author: ASF GitHub Bot Created on: 08/Jun/21 12:01 Start Date: 08/Jun/21 12:01 Worklog Time Spent: 10m Work Description: liangxs commented on pull request #3080: URL: https://github.com/apache/hadoop/pull/3080#issuecomment-856703041 Could someone please review this PR? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 608418) Time Spent: 40m (was: 0.5h) > Remove lock contention in SelectorPool of SocketIOWithTimeout > - > > Key: HADOOP-17749 > URL: https://issues.apache.org/jira/browse/HADOOP-17749 > Project: Hadoop Common > Issue Type: Improvement > Components: net >Reporter: Xuesen Liang >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > *SelectorPool* in > hadoop-common/src/main/java/org/apache/hadoop/net/*SocketIOWithTimeout.java* > is a point of lock contention. 
> For example:
> {code:java}
> $ grep 'waiting to lock <0x7f7d94006d90>' 63692.jstack | uniq -c
> 1005 - waiting to lock <0x7f7d94006d90> (a
> org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool)
> {code}
> and the thread stack is as follows:
> {code:java}
> "IPC Client (324579982) connection to /100.10.6.10:60020 from user_00" #14139
> daemon prio=5 os_prio=0 tid=0x7f7374039000 nid=0x85cc waiting for monitor
> entry [0x7f6f45939000]
> java.lang.Thread.State: BLOCKED (on object monitor)
> at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.get(SocketIOWithTimeout.java:390)
> - waiting to lock <0x7f7d94006d90> (a org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool)
> at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:325)
> at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
> at java.io.FilterInputStream.read(FilterInputStream.java:133)
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
> - locked <0x7fa818caf258> (a java.io.BufferedInputStream)
> at java.io.DataInputStream.readInt(DataInputStream.java:387)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.readResponse(RpcClientImpl.java:967)
> at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:568)
> {code}
> We should remove the lock contention.
--
This message was sent by Atlassian Jira (v8.3.4#803005)
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
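The jstack above shows a thousand threads blocked on the SelectorPool monitor. One common way to remove this kind of contention is to replace a synchronized get/release pair with a lock-free concurrent deque, so threads acquire and return pooled objects via CAS operations instead of a shared monitor. The class below is only a hypothetical sketch of that idea, not the actual HADOOP-17749 patch:

```java
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.function.Supplier;

// Hypothetical lock-free object pool: get() and release() never take a
// monitor, so blocked-on-pool stacks like the one above cannot occur.
public class LockFreePool<T> {
    private final ConcurrentLinkedDeque<T> idle = new ConcurrentLinkedDeque<>();
    private final Supplier<T> factory;

    public LockFreePool(Supplier<T> factory) {
        this.factory = factory;
    }

    public T get() {
        T t = idle.pollFirst();   // CAS-based dequeue, no lock
        return t != null ? t : factory.get();
    }

    public void release(T t) {
        idle.addFirst(t);         // LIFO keeps recently used objects warm
    }

    public static void main(String[] args) {
        LockFreePool<StringBuilder> pool = new LockFreePool<>(StringBuilder::new);
        StringBuilder sb = pool.get();        // pool empty -> factory creates one
        pool.release(sb);
        System.out.println(pool.get() == sb); // true: reused without locking
    }
}
```

A real selector pool would additionally need to close idle selectors after a timeout; that housekeeping is omitted here.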
[GitHub] [hadoop] liangxs commented on pull request #3080: HADOOP-17749. Remove lock contention in SelectorPool of SocketIOWithTimeout
liangxs commented on pull request #3080: URL: https://github.com/apache/hadoop/pull/3080#issuecomment-856703041 Could someone please review this PR?
[GitHub] [hadoop] ferhui commented on a change in pull request #3065: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
ferhui commented on a change in pull request #3065:
URL: https://github.com/apache/hadoop/pull/3065#discussion_r647357273

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java

```diff
@@ -1996,7 +1996,12 @@ private void metaSave(PrintWriter out) {
     LightWeightHashSet openFileIds = new LightWeightHashSet<>();
     for (DatanodeDescriptor dataNode :
         blockManager.getDatanodeManager().getDatanodes()) {
-      for (long ucFileId : dataNode.getLeavingServiceStatus().getOpenFiles()) {
+      // Sort open files
+      LightWeightHashSet dnOpenFiles =
+          dataNode.getLeavingServiceStatus().getOpenFiles();
+      Long[] dnOpenFileIds = new Long[dnOpenFiles.size()];
+      Arrays.sort(dnOpenFiles.toArray(dnOpenFileIds));
+      for (Long ucFileId : dnOpenFileIds) {
```

Review comment: @aajisaka Thank you very much for the review. I added this change because TestDecommission#testDecommissionWithOpenfileReporting fails. In DatanodeAdminDefaultMonitor#processBlocksInternal, blocks are sorted because of FoldedTreeSet, so the inode IDs here are sorted.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
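The metaSave change above copies the unordered open-file set into a boxed Long array and sorts it, so the report is emitted in a deterministic order. A self-contained sketch of the same idea, using a plain java.util.HashSet in place of Hadoop's LightWeightHashSet:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SortOpenFiles {
    // Mirrors the patch: dump an unordered set of inode IDs into an array
    // and sort it so the open files are reported in a stable order.
    static Long[] sortedIds(Set<Long> openFiles) {
        Long[] ids = new Long[openFiles.size()];
        openFiles.toArray(ids);   // toArray fills (and returns) ids
        Arrays.sort(ids);
        return ids;
    }

    public static void main(String[] args) {
        Set<Long> open = new HashSet<>(Arrays.asList(42L, 7L, 19L));
        System.out.println(Arrays.toString(sortedIds(open))); // [7, 19, 42]
    }
}
```

Sorting at output time like this keeps the hash set cheap for the common path and pays the O(n log n) cost only when metaSave runs.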
[GitHub] [hadoop] hadoop-yetus commented on pull request #3084: HDFS-16057. Make sure the order for location in ENTERING_MAINTENANCE …
hadoop-yetus commented on pull request #3084:
URL: https://github.com/apache/hadoop/pull/3084#issuecomment-856650629

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 41s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| -1 :x: | mvninstall | 29m 28s | [/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3084/1/artifact/out/branch-mvninstall-root.txt) | root in trunk failed. |
| +1 :green_heart: | compile | 1m 43s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 27s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| -0 :warning: | checkstyle | 0m 43s | [/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3084/1/artifact/out/buildtool-branch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | The patch fails to run checkstyle in hadoop-hdfs |
| -1 :x: | mvnsite | 1m 21s | [/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3084/1/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in trunk failed. |
| -1 :x: | javadoc | 0m 19s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3084/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. |
| -1 :x: | javadoc | 0m 18s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3084/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | hadoop-hdfs in trunk failed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. |
| -1 :x: | spotbugs | 0m 18s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3084/1/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in trunk failed. |
| -1 :x: | shadedclient | 10m 51s | | branch has errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 30s | | the patch passed |
| +1 :green_heart: | compile | 1m 33s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 1m 33s | | the patch passed |
| +1 :green_heart: | compile | 1m 23s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 1m 23s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 22s | [/buildtool-patch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3084/1/artifact/out/buildtool-patch-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) | The patch fails to run checkstyle in hadoop-hdfs |
| -1 :x: | mvnsite | 1m 30s | [/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3084/1/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch failed. |
| -1 :x: | javadoc | 0m 37s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3084/1/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. |
| -1 :x: | javadoc | 0m 15s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-
[GitHub] [hadoop] hadoop-yetus commented on pull request #3075: YARN-10805. Replace Guava Lists usage by Hadoop's own Lists in hadoop-yarn-project
hadoop-yetus commented on pull request #3075:
URL: https://github.com/apache/hadoop/pull/3075#issuecomment-856644751

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 0s | | Docker mode activated. |
| -1 :x: | docker | 4m 44s | | Docker failed to build yetus/hadoop:1c0b2edde93. |

| Subsystem | Report/Notes |
|----------:|:-------------|
| GITHUB PR | https://github.com/apache/hadoop/pull/3075 |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3075/4/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] aajisaka commented on a change in pull request #3065: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
aajisaka commented on a change in pull request #3065:
URL: https://github.com/apache/hadoop/pull/3065#discussion_r647294150

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java

```diff
@@ -238,9 +221,10 @@ ReplicaInfo remove(String bpid, long blockId) {
    * @return the number of replicas in the map
    */
   int size(String bpid) {
+    LightWeightResizableGSet m = null;
     try (AutoCloseableLock l = readLock.acquire()) {
-      FoldedTreeSet set = map.get(bpid);
-      return set != null ? set.size() : 0;
+      m = map.get(bpid);
```

Review comment: The definition of `m` can be moved into the try clause.
```suggestion
      GSet m = map.get(bpid);
```

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java

```diff
@@ -262,30 +257,61 @@ public AddBlockResult addBlock(BlockInfo b, Block reportedBlock) {
       }
     }
 
+    // add to the head of the data-node list
     b.addStorage(this, reportedBlock);
-    blocks.add(b);
+    insertToList(b);
     return result;
   }
 
   AddBlockResult addBlock(BlockInfo b) {
     return addBlock(b, b);
   }
 
-  boolean removeBlock(BlockInfo b) {
-    blocks.remove(b);
-    return b.removeStorage(this);
+  public void insertToList(BlockInfo b) {
+    blockList = b.listInsert(blockList, this);
+    numBlocks++;
+  }
+  public boolean removeBlock(BlockInfo b) {
+    blockList = b.listRemove(blockList, this);
+    if (b.removeStorage(this)) {
+      numBlocks--;
+      return true;
+    } else {
+      return false;
+    }
   }
 
   int numBlocks() {
-    return blocks.size();
+    return numBlocks;
   }
-
+
+  Iterator getBlockIterator() {
+    return new BlockIterator(blockList);
+  }
+
   /**
-   * @return iterator to an unmodifiable set of blocks
-   * related to this {@link DatanodeStorageInfo}
+   * Move block to the head of the list of blocks belonging to the data-node.
+   * @return the index of the head of the blockList
    */
-  Iterator getBlockIterator() {
-    return Collections.unmodifiableSet(blocks).iterator();
+  int moveBlockToHead(BlockInfo b, int curIndex, int headIndex) {
+    blockList = b.moveBlockToHead(blockList, this, curIndex, headIndex);
+    return curIndex;
+  }
+
+  int getHeadIndex(DatanodeStorageInfo storageInfo) {
+    if (blockList == null) {
+      return -1;
+    }
+    return blockList.findStorageInfo(storageInfo);
+  }
```

Review comment: This function is unused. It can be removed.

## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockHasMultipleReplicasOnSameDN.java

```diff
@@ -118,13 +116,12 @@ public void testBlockHasMultipleReplicasOnSameDN() throws IOException {
     StorageBlockReport reports[] =
         new StorageBlockReport[cluster.getStoragesPerDatanode()];
 
-    ArrayList blocks = new ArrayList<>();
+    ArrayList blocks = new ArrayList();
```

Review comment: Nit:
```suggestion
    ArrayList blocks = new ArrayList<>();
```

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/proto/DatanodeProtocol.proto

```diff
@@ -256,9 +256,6 @@ message BlockReportContextProto {
   // The block report lease ID, or 0 if we are sending without a lease to
   // bypass rate-limiting.
   optional uint64 leaseId = 4 [ default = 0 ];
-
-  // True if the reported blocks are sorted by increasing block IDs
-  optional bool sorted = 5 [default = false];
```

Review comment: We must document that field number 5 cannot be reused.

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java

```diff
@@ -1996,7 +1996,12 @@ private void metaSave(PrintWriter out) {
     LightWeightHashSet openFileIds = new LightWeightHashSet<>();
     for (DatanodeDescriptor dataNode :
         blockManager.getDatanodeManager().getDatanodes()) {
-      for (long ucFileId : dataNode.getLeavingServiceStatus().getOpenFiles()) {
+      // Sort open files
+      LightWeightHashSet dnOpenFiles =
+          dataNode.getLeavingServiceStatus().getOpenFiles();
+      Long[] dnOpenFileIds = new Long[dnOpenFiles.size()];
+      Arrays.sort(dnOpenFiles.toArray(dnOpenFileIds));
+      for (Long ucFileId : dnOpenFileIds) {
```

Review comment: I thought we have to sort inode IDs throughout the DataNodes. However, it can be addressed in a separate jira because it is not sorted throughout the DataNodes even before applying the patch.

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeStorageInfo.java

```diff
@@ -262,30 +257,61 @@ public AddBlockResult addBlock(BlockInfo b, Block reportedBlock) {
       }
     }
 
+    // add to the head of the data-node list
     b.addStorage(this, reportedBlock);
-    blocks.add(b);
+    insertToList(b);
```
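The ReplicaMap hunk above relies on an acquire() method that returns an AutoCloseable, so try-with-resources releases the lock on every exit path, including exceptions. Hadoop ships its own org.apache.hadoop.util.AutoCloseableLock; the following is only a minimal re-implementation that mirrors the shape of that pattern:

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the AutoCloseableLock pattern used in ReplicaMap#size:
// acquire() takes the lock and returns `this`, close() releases it.
public class AutoCloseableLock implements AutoCloseable {
    private final Lock lock;

    public AutoCloseableLock(Lock lock) {
        this.lock = lock;
    }

    public AutoCloseableLock acquire() {
        lock.lock();
        return this;
    }

    @Override
    public void close() {
        lock.unlock();
    }

    public static void main(String[] args) {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
        AutoCloseableLock readLock = new AutoCloseableLock(rw.readLock());
        try (AutoCloseableLock l = readLock.acquire()) {
            // read shared state under the read lock
        } // unlocked here even if the body throws
    }
}
```

The benefit over an explicit try/finally is that the unlock can never be forgotten or misplaced, which matters in a class like ReplicaMap where nearly every method takes the lock.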
[GitHub] [hadoop] hadoop-yetus commented on pull request #3075: YARN-10805. Replace Guava Lists usage by Hadoop's own Lists in hadoop-yarn-project
hadoop-yetus commented on pull request #3075:
URL: https://github.com/apache/hadoop/pull/3075#issuecomment-856635662

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 0s | | Docker mode activated. |
| -1 :x: | docker | 4m 38s | | Docker failed to build yetus/hadoop:1c0b2edde93. |

| Subsystem | Report/Notes |
|----------:|:-------------|
| GITHUB PR | https://github.com/apache/hadoop/pull/3075 |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3075/3/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] tomscut opened a new pull request #3084: HDFS-16057. Make sure the order for location in ENTERING_MAINTENANCE …
tomscut opened a new pull request #3084:
URL: https://github.com/apache/hadoop/pull/3084

JIRA: [HDFS-16057](https://issues.apache.org/jira/browse/HDFS-16057).

We use a comparator to sort locations in getBlockLocations(), and the expected order is: live -> stale -> entering_maintenance -> decommissioned. But NetworkTopology#sortByDistance() can disrupt that order. We should also filter out nodes in state AdminStates.ENTERING_MAINTENANCE before calling NetworkTopology#sortByDistance().

org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager#sortLocatedBlock():
```java
DatanodeInfoWithStorage[] di = lb.getLocations();
// Move decommissioned/stale datanodes to the bottom
Arrays.sort(di, comparator);

// Sort nodes by network distance only for located blocks
int lastActiveIndex = di.length - 1;
while (lastActiveIndex > 0 && isInactive(di[lastActiveIndex])) {
  --lastActiveIndex;
}
int activeLen = lastActiveIndex + 1;
if (nonDatanodeReader) {
  networktopology.sortByDistanceUsingNetworkLocation(client,
      lb.getLocations(), activeLen, createSecondaryNodeSorter());
} else {
  networktopology.sortByDistance(client, lb.getLocations(), activeLen,
      createSecondaryNodeSorter());
}
```

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
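The ordering the PR description asks for can be sketched in isolation. This is a simplified model, not the DatanodeManager code: node states are ranked live -> stale -> entering_maintenance -> decommissioned, and only the active prefix of length activeLen would then be handed to a distance-based sorter, so maintenance and decommissioned nodes can never be shuffled back up.

```java
import java.util.Arrays;

public class LocationSortSketch {
    // enum declaration order encodes the desired priority
    enum State { LIVE, STALE, ENTERING_MAINTENANCE, DECOMMISSIONED }

    // Sort by state rank, then return only the prefix that a
    // distance-based sorter is allowed to reorder (length activeLen).
    static State[] sortableActivePrefix(State[] nodes) {
        Arrays.sort(nodes); // enums compare by ordinal, i.e. by rank above
        int lastActiveIndex = nodes.length - 1;
        while (lastActiveIndex > 0 && isInactive(nodes[lastActiveIndex])) {
            --lastActiveIndex;
        }
        int activeLen = lastActiveIndex + 1;
        // the real code would call networktopology.sortByDistance(
        //     client, nodes, activeLen, ...) over nodes[0..activeLen)
        return Arrays.copyOfRange(nodes, 0, activeLen);
    }

    // the proposed fix: ENTERING_MAINTENANCE counts as inactive too
    static boolean isInactive(State s) {
        return s == State.ENTERING_MAINTENANCE || s == State.DECOMMISSIONED;
    }

    public static void main(String[] args) {
        State[] in = { State.DECOMMISSIONED, State.LIVE,
                       State.ENTERING_MAINTENANCE, State.STALE };
        System.out.println(Arrays.toString(sortableActivePrefix(in))); // [LIVE, STALE]
    }
}
```

With activeLen computed this way, sortByDistance() only permutes readable replicas, so the comparator-established tail order survives.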
[jira] [Updated] (HADOOP-17750) Fix asf license errors in newly added files by HADOOP-17727
[ https://issues.apache.org/jira/browse/HADOOP-17750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HADOOP-17750: -- Labels: (was: pull-request-available) Status: Patch Available (was: Open) > Fix asf license errors in newly added files by HADOOP-17727 > --- > > Key: HADOOP-17750 > URL: https://issues.apache.org/jira/browse/HADOOP-17750 > Project: Hadoop Common > Issue Type: Bug >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work logged] (HADOOP-17750) Fix asf license errors in newly added files by HADOOP-17727
[ https://issues.apache.org/jira/browse/HADOOP-17750?focusedWorklogId=608361&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-608361 ] ASF GitHub Bot logged work on HADOOP-17750: --- Author: ASF GitHub Bot Created on: 08/Jun/21 09:12 Start Date: 08/Jun/21 09:12 Worklog Time Spent: 10m Work Description: tasanuma opened a new pull request #3083: URL: https://github.com/apache/hadoop/pull/3083 JIRA: HADOOP-17750 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 608361) Remaining Estimate: 0h Time Spent: 10m > Fix asf license errors in newly added files by HADOOP-17727 > --- > > Key: HADOOP-17750 > URL: https://issues.apache.org/jira/browse/HADOOP-17750 > Project: Hadoop Common > Issue Type: Bug >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17750) Fix asf license errors in newly added files by HADOOP-17727
[ https://issues.apache.org/jira/browse/HADOOP-17750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HADOOP-17750: Labels: pull-request-available (was: ) > Fix asf license errors in newly added files by HADOOP-17727 > --- > > Key: HADOOP-17750 > URL: https://issues.apache.org/jira/browse/HADOOP-17750 > Project: Hadoop Common > Issue Type: Bug >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] tasanuma opened a new pull request #3083: HADOOP-17750. Fix asf license errors in newly added files by HADOOP-17727
tasanuma opened a new pull request #3083: URL: https://github.com/apache/hadoop/pull/3083 JIRA: HADOOP-17750
[jira] [Updated] (HADOOP-17750) Fix asf license errors in newly added files by HADOOP-17727
[ https://issues.apache.org/jira/browse/HADOOP-17750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HADOOP-17750: -- Issue Type: Bug (was: Wish) > Fix asf license errors in newly added files by HADOOP-17727 > --- > > Key: HADOOP-17750 > URL: https://issues.apache.org/jira/browse/HADOOP-17750 > Project: Hadoop Common > Issue Type: Bug >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-17750) Fix asf license errors in newly added files by HADOOP-17727
Takanobu Asanuma created HADOOP-17750: - Summary: Fix asf license errors in newly added files by HADOOP-17727 Key: HADOOP-17750 URL: https://issues.apache.org/jira/browse/HADOOP-17750 Project: Hadoop Common Issue Type: Wish Reporter: Takanobu Asanuma Assignee: Takanobu Asanuma