[GitHub] [hadoop] hadoop-yetus commented on pull request #3118: [Do not commit] CI for Debian 10
hadoop-yetus commented on pull request #3118: URL: https://github.com/apache/hadoop/pull/3118#issuecomment-864362707

(!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3118/15/console in case of problems.
[GitHub] [hadoop] hadoop-yetus commented on pull request #2738: HDFS-15842. HDFS mover to emit metrics.
hadoop-yetus commented on pull request #2738: URL: https://github.com/apache/hadoop/pull/2738#issuecomment-864360570

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 13m 16s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 30m 53s | | trunk passed |
| +1 :green_heart: | compile | 1m 22s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 19s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 3s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 26s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 57s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 29s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 9s | | trunk passed |
| +1 :green_heart: | shadedclient | 16m 28s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 10s | | the patch passed |
| +1 :green_heart: | compile | 1m 12s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 1m 12s | | the patch passed |
| +1 :green_heart: | compile | 1m 10s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 1m 10s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 54s | | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 100 unchanged - 1 fixed = 100 total (was 101) |
| +1 :green_heart: | mvnsite | 1m 16s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 46s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 21s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 9s | | the patch passed |
| +1 :green_heart: | shadedclient | 15m 57s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 230m 30s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2738/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. |
| | | 327m 18s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2738/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/2738 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux cab7ae036721 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 96525f68efed9fd50d4ecc5ac39d585e8f7b6947 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2738/4/testReport/ |
| Max. process+thread count | 3829 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2738/4/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] tomscut edited a comment on pull request #3119: HDFS-16078. Remove unused parameters for DatanodeManager.handleLifeli…
tomscut edited a comment on pull request #3119: URL: https://github.com/apache/hadoop/pull/3119#issuecomment-864350412

> Looks good to me

Thanks @jojochuang for your review. Could you also help to review these PRs ([PR#3120](https://github.com/apache/hadoop/pull/3120), [PR#3117](https://github.com/apache/hadoop/pull/3117)) if you have time? Thanks a lot. :)
[GitHub] [hadoop] tomscut commented on pull request #3117: HDFS-16076. Avoid using slow DataNodes for reading by sorting locations
tomscut commented on pull request #3117: URL: https://github.com/apache/hadoop/pull/3117#issuecomment-864352035

Rebased to the latest commit.
[GitHub] [hadoop] tomscut commented on pull request #3119: HDFS-16078. Remove unused parameters for DatanodeManager.handleLifeli…
tomscut commented on pull request #3119: URL: https://github.com/apache/hadoop/pull/3119#issuecomment-864350412

> Looks good to me

Thanks @jojochuang for your review. Could you also help to review these PRs ([PR#3120](https://github.com/apache/hadoop/pull/3120), [PR#3117](https://github.com/apache/hadoop/pull/3117), [PR#3325](https://github.com/apache/hbase/pull/3325)) if you have time? Thanks a lot. :)
[jira] [Created] (HADOOP-17766) CI for Debian 10
Gautham Banasandra created HADOOP-17766:
-------------------------------------------

             Summary: CI for Debian 10
                 Key: HADOOP-17766
                 URL: https://issues.apache.org/jira/browse/HADOOP-17766
             Project: Hadoop Common
          Issue Type: Improvement
          Components: build
    Affects Versions: 3.4.0
            Reporter: Gautham Banasandra
            Assignee: Gautham Banasandra

We need to set up CI for Debian 10. It should run only when a PR changes C++ files, since running it for every PR would be redundant.
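As a rough illustration of the gating the ticket asks for, the check amounts to a path filter over the files a PR touches. A minimal hedged sketch in Java; the class, method, and extension list are hypothetical, not part of the actual Jenkins/Yetus setup:

```java
import java.util.List;

/** Hypothetical sketch: trigger the Debian 10 CI only for native (C++) changes. */
final class Debian10CiGate {
  private static final String[] CPP_EXTENSIONS = {".c", ".cc", ".cpp", ".h", ".hh"};

  static boolean shouldRun(List<String> changedFiles) {
    for (String path : changedFiles) {
      if (path.endsWith("CMakeLists.txt")) {
        return true; // native build definition changed
      }
      for (String ext : CPP_EXTENSIONS) {
        if (path.endsWith(ext)) {
          return true; // C++ source or header changed
        }
      }
    }
    return false; // no native changes; skip the Debian 10 run
  }
}
```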
[GitHub] [hadoop] hadoop-yetus commented on pull request #3118: [Do not commit] CI for Debian 10
hadoop-yetus commented on pull request #3118: URL: https://github.com/apache/hadoop/pull/3118#issuecomment-864344828

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 49s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 38m 58s | | trunk passed |
| +1 :green_heart: | compile | 2m 32s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 26s | | trunk passed |
| +1 :green_heart: | shadedclient | 20m 14s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 15s | | the patch passed |
| +1 :green_heart: | compile | 2m 24s | | the patch passed |
| +1 :green_heart: | cc | 2m 24s | | the patch passed |
| +1 :green_heart: | golang | 2m 24s | | the patch passed |
| +1 :green_heart: | javac | 2m 24s | | the patch passed |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3118/6/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 :green_heart: | hadolint | 0m 2s | | No new issues. |
| +1 :green_heart: | mvnsite | 0m 24s | | the patch passed |
| +1 :green_heart: | shellcheck | 0m 0s | | No new issues. |
| +1 :green_heart: | shadedclient | 20m 2s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 86m 9s | | hadoop-hdfs-native-client in the patch passed. |
| +1 :green_heart: | asflicense | 0m 31s | | The patch does not generate ASF License warnings. |
| | | 175m 41s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3118/6/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3118 |
| Optional Tests | dupname asflicense codespell shellcheck shelldocs hadolint compile cc mvnsite javac unit golang |
| uname | Linux 8375064551ab 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 252ce2599ffcfbb79f783a843f3d9ee58e62af98 |
| Default Java | Debian-11.0.11+9-post-Debian-1 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3118/6/testReport/ |
| Max. process+thread count | 609 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: hadoop-hdfs-project/hadoop-hdfs-native-client |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3118/6/console |
| versions | git=2.30.2 maven=3.6.3 shellcheck=0.7.1 hadolint=1.11.1-0-g0e692dd |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] tomscut commented on pull request #3120: HDFS-16079. Improve the block state change log
tomscut commented on pull request #3120: URL: https://github.com/apache/hadoop/pull/3120#issuecomment-864336913

Those failed UTs are unrelated to the change and work fine locally.
[GitHub] [hadoop] tomscut commented on pull request #3119: HDFS-16078. Remove unused parameters for DatanodeManager.handleLifeli…
tomscut commented on pull request #3119: URL: https://github.com/apache/hadoop/pull/3119#issuecomment-864336517

Hi @ayushtkn, could you also help to review it? Thank you.
[GitHub] [hadoop] tomscut commented on pull request #3119: HDFS-16078. Remove unused parameters for DatanodeManager.handleLifeli…
tomscut commented on pull request #3119: URL: https://github.com/apache/hadoop/pull/3119#issuecomment-864335737

Hi @tasanuma @jojochuang, could you please take a look at this little change? Thanks.
[GitHub] [hadoop] LeonGao91 commented on a change in pull request #2738: HDFS-15842. HDFS mover to emit metrics.
LeonGao91 commented on a change in pull request #2738: URL: https://github.com/apache/hadoop/pull/2738#discussion_r654722499

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/MoverMetrics.java
##########

@@ -0,0 +1,84 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.mover;
+
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.annotation.Metrics;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+import org.apache.hadoop.metrics2.lib.MutableGaugeInt;
+
+/**
+ * Metrics for HDFS Mover of a blockpool.
+ */
+@Metrics(about="Mover metrics", context="dfs")
+final class MoverMetrics {
+
+  private final Mover mover;
+
+  @Metric("If mover is processing namespace.")
+  private MutableGaugeInt processingNamespace;
+
+  @Metric("Number of blocks being scheduled.")
+  private MutableCounterLong blocksScheduled;
+
+  @Metric("Number of files being processed.")
+  private MutableCounterLong filesProcessed;
+
+  private MoverMetrics(Mover m) {
+    this.mover = m;
+  }
+
+  public static MoverMetrics create(Mover mover) {
+    MoverMetrics m = new MoverMetrics(mover);
+    DefaultMetricsSystem.instance().unregisterSource(m.getName());

Review comment:
       You are right, this is not needed here. As discussed, I will shut down metrics at the end of the mover run.
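A minimal sketch of the lifecycle the reviewers settle on: register the metrics source when the mover starts and shut the metrics system down when the run ends. `MoverRunner` and `doMove()` are illustrative stand-ins, not the actual patch:

```java
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;

/** Sketch only; MoverRunner and doMove() are hypothetical. */
class MoverRunner {
  int run() {
    // Start the metrics system once; MoverMetrics.create(...) would
    // register the mover's metrics source here.
    DefaultMetricsSystem.initialize("Mover");
    try {
      return doMove(); // the actual block-moving loop
    } finally {
      // Stops the system and unregisters its sources at the end of the run,
      // which is why an unregisterSource() call inside create() is unnecessary.
      DefaultMetricsSystem.shutdown();
    }
  }

  private int doMove() {
    return 0; // placeholder for the real work
  }
}
```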
[GitHub] [hadoop] hadoop-yetus commented on pull request #3118: [Do not commit] CI for Debian 10
hadoop-yetus commented on pull request #3118: URL: https://github.com/apache/hadoop/pull/3118#issuecomment-864322983

(!) A patch to the testing environment has been detected. Re-executing against the patched versions to perform further tests. The console is at https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3118/6/console in case of problems.
[GitHub] [hadoop] sunchao commented on pull request #3005: HDFS-13522. RBF: Support observer node from Router-Based Federation
sunchao commented on pull request #3005: URL: https://github.com/apache/hadoop/pull/3005#issuecomment-864302452

@zhengzhuobinzzb To help with the review, could you describe the approach you're taking in this PR in the description? cc @fengnanli too.
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3005: HDFS-13522. RBF: Support observer node from Router-Based Federation
hadoop-yetus removed a comment on pull request #3005: URL: https://github.com/apache/hadoop/pull/3005#issuecomment-840106802
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3005: HDFS-13522. RBF: Support observer node from Router-Based Federation
hadoop-yetus removed a comment on pull request #3005: URL: https://github.com/apache/hadoop/pull/3005#issuecomment-840089955

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 0m 57s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 11 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 15m 35s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 23m 13s | | trunk passed |
| +1 :green_heart: | compile | 22m 33s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | compile | 19m 14s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | checkstyle | 4m 10s | | trunk passed |
| +1 :green_heart: | mvnsite | 4m 45s | | trunk passed |
| +1 :green_heart: | javadoc | 3m 29s | | trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javadoc | 4m 46s | | trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | spotbugs | 9m 41s | | trunk passed |
| +1 :green_heart: | shadedclient | 17m 9s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 20s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 26s | | the patch passed |
| +1 :green_heart: | compile | 22m 3s | | the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 |
| +1 :green_heart: | javac | 22m 3s | | the patch passed |
| +1 :green_heart: | compile | 19m 14s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| +1 :green_heart: | javac | 19m 14s | | the patch passed |
| -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/2/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| -0 :warning: | checkstyle | 4m 5s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/2/artifact/out/results-checkstyle-root.txt) | root: The patch generated 54 new + 905 unchanged - 1 fixed = 959 total (was 906) |
| +1 :green_heart: | mvnsite | 4m 44s | | the patch passed |
| +1 :green_heart: | xml | 0m 3s | | The patch has no ill-formed XML file. |
| -1 :x: | javadoc | 0m 43s | [/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/2/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04.txt) | hadoop-hdfs-rbf in the patch failed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04. |
| +1 :green_heart: | javadoc | 4m 44s | | the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
| -1 :x: | spotbugs | 1m 34s | [/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/2/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.html) | hadoop-hdfs-project/hadoop-hdfs-rbf generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 :green_heart: | shadedclient | 17m 13s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 18m 21s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 2m 27s | | hadoop-hdfs-client in the patch passed. |
| -1 :x: | unit | 383m 37s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| -1 :x: | unit | 35m 59s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) | hadoop-hdfs-rbf in the patch passed. |
| +0 :ok: | asflicense |
[GitHub] [hadoop] Jing9 commented on a change in pull request #2738: HDFS-15842. HDFS mover to emit metrics.
Jing9 commented on a change in pull request #2738: URL: https://github.com/apache/hadoop/pull/2738#discussion_r654685245

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/mover/MoverMetrics.java
##########

@@ -0,0 +1,84 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.mover;
+
+import org.apache.hadoop.metrics2.annotation.Metric;
+import org.apache.hadoop.metrics2.annotation.Metrics;
+import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
+import org.apache.hadoop.metrics2.lib.MutableCounterLong;
+import org.apache.hadoop.metrics2.lib.MutableGaugeInt;
+
+/**
+ * Metrics for HDFS Mover of a blockpool.
+ */
+@Metrics(about="Mover metrics", context="dfs")
+final class MoverMetrics {
+
+  private final Mover mover;
+
+  @Metric("If mover is processing namespace.")
+  private MutableGaugeInt processingNamespace;
+
+  @Metric("Number of blocks being scheduled.")
+  private MutableCounterLong blocksScheduled;
+
+  @Metric("Number of files being processed.")
+  private MutableCounterLong filesProcessed;
+
+  private MoverMetrics(Mover m) {
+    this.mover = m;
+  }
+
+  public static MoverMetrics create(Mover mover) {
+    MoverMetrics m = new MoverMetrics(mover);
+    DefaultMetricsSystem.instance().unregisterSource(m.getName());

Review comment:
       Any reason we want to call unregister here? Can we call unregister at the end of the mover run?
[GitHub] [hadoop] ferhui merged pull request #3114: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
ferhui merged pull request #3114: URL: https://github.com/apache/hadoop/pull/3114
[GitHub] [hadoop] virajjasani edited a comment on pull request #3115: HDFS-16075. Use empty array constants present in StorageType and DatanodeInfo to avoid creating redundant objects
virajjasani edited a comment on pull request #3115: URL: https://github.com/apache/hadoop/pull/3115#issuecomment-864127746

> @virajjasani Thanks.
> It seems that the following source files have the same problem.
>
> hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskCompletionEvent.java
> hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/TaskCompletionEvent.java
> hadoop-tools/hadoop-resourceestimator/src/main/java/org/apache/hadoop/resourceestimator/common/config/ResourceEstimatorUtil.java
> hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/CachedBlock.java

Thanks for pointing this out @ferhui. IIUC, `new TaskCompletionEvent[0]` is used in a few places and we can replace them; however, I could not see the issue with `ResourceEstimatorUtil` and `CachedBlock`. Could you please help me understand? Thanks.

Edit: Would it be better to track the `TaskCompletionEvent` changes in a separate MapReduce Jira?
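For context, the pattern HDFS-16075 applies is the standard empty-array-constant idiom: reuse one shared zero-length array instead of allocating a new one per call. The constant names come from the PR title; the wrapper class here is illustrative:

```java
import java.util.List;
import org.apache.hadoop.fs.StorageType;

class EmptyArrayIdiom {
  // Before: a fresh zero-length array is allocated on every call.
  static StorageType[] before(List<StorageType> types) {
    return types.toArray(new StorageType[0]);
  }

  // After: the shared constant from StorageType is reused (per HDFS-16075).
  static StorageType[] after(List<StorageType> types) {
    return types.toArray(StorageType.EMPTY_ARRAY);
  }
}
```

Sharing is safe because a zero-length array is effectively immutable, and `toArray` allocates a new, correctly sized array whenever the argument is too small.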
[GitHub] [hadoop] ferhui commented on pull request #3114: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
ferhui commented on pull request #3114: URL: https://github.com/apache/hadoop/pull/3114#issuecomment-863660083
[GitHub] [hadoop] ferhui commented on pull request #3115: HDFS-16075. Use empty array constants present in StorageType and DatanodeInfo to avoid creating redundant objects
ferhui commented on pull request #3115: URL: https://github.com/apache/hadoop/pull/3115#issuecomment-864111057
[GitHub] [hadoop] tomscut commented on pull request #3114: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
tomscut commented on pull request #3114: URL: https://github.com/apache/hadoop/pull/3114#issuecomment-863816478
[GitHub] [hadoop] goiri commented on a change in pull request #3100: HDFS-16065. RBF: Add metrics to record Router's operations
goiri commented on a change in pull request #3100: URL: https://github.com/apache/hadoop/pull/3100#discussion_r653838543

##########
File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientMetrics.java
##########

@@ -0,0 +1,646 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;

Review comment:
       We can ignore those.
[GitHub] [hadoop] virajjasani opened a new pull request #3121: HDFS-16080. RBF: Invoking method in all locations should break the loop after successful result
virajjasani opened a new pull request #3121: URL: https://github.com/apache/hadoop/pull/3121
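The PR title summarizes the change: when the router invokes a method across all locations but only needs one success, stop iterating on the first successful result. A hedged sketch of that control flow; the interface and names are illustrative, not the actual RouterRpcClient code:

```java
import java.io.IOException;
import java.util.List;

class InvokeAllSketch {
  /** Hypothetical stand-in for a remote namespace location. */
  interface RemoteTarget {
    boolean invoke() throws IOException;
  }

  static boolean invokeAll(List<RemoteTarget> locations) throws IOException {
    for (RemoteTarget target : locations) {
      if (target.invoke()) {
        return true; // HDFS-16080: break the loop after a successful result
      }
    }
    return false; // no location succeeded
  }
}
```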
[GitHub] [hadoop] AlphaGouGe commented on pull request #3114: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
AlphaGouGe commented on pull request #3114: URL: https://github.com/apache/hadoop/pull/3114#issuecomment-863810415
[GitHub] [hadoop] hadoop-yetus commented on pull request #3005: HDFS-13522. RBF: Support observer node from Router-Based Federation
hadoop-yetus commented on pull request #3005: URL: https://github.com/apache/hadoop/pull/3005#issuecomment-864078875

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 19m 48s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 11 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 12m 46s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 20m 12s | | trunk passed |
| +1 :green_heart: | compile | 20m 56s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 18m 11s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 3m 51s | | trunk passed |
| +1 :green_heart: | mvnsite | 5m 24s | | trunk passed |
| +1 :green_heart: | javadoc | 4m 13s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 5m 35s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 9m 55s | | trunk passed |
| +1 :green_heart: | shadedclient | 14m 57s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 27s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 30s | | the patch passed |
| +1 :green_heart: | compile | 20m 18s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 20m 18s | | the patch passed |
| +1 :green_heart: | compile | 18m 14s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 18m 14s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 3m 42s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/15/artifact/out/results-checkstyle-root.txt) | root: The patch generated 5 new + 439 unchanged - 1 fixed = 444 total (was 440) |
| +1 :green_heart: | mvnsite | 5m 21s | | the patch passed |
| +1 :green_heart: | xml | 0m 3s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 4m 10s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 5m 32s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 10m 35s | | the patch passed |
| +1 :green_heart: | shadedclient | 15m 2s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 17m 2s | | hadoop-common in the patch passed. |
| +1 :green_heart: | unit | 2m 39s | | hadoop-hdfs-client in the patch passed. |
| -1 :x: | unit | 399m 9s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/15/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | unit | 30m 43s | | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 1m 6s | | The patch does not generate ASF License warnings. |
| | | 677m 17s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS |
| | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
| | hadoop.hdfs.TestDFSShell |
| | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
| | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| | hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
| | hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeHdfsFileSystemContract |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/15/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3005 |
| Optional Tests | dupname asflic
[GitHub] [hadoop] hadoop-yetus commented on pull request #3114: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
hadoop-yetus commented on pull request #3114: URL: https://github.com/apache/hadoop/pull/3114#issuecomment-863779456
[GitHub] [hadoop] hadoop-yetus commented on pull request #2639: HDFS-15785. Datanode to support using DNS to resolve nameservices to IP addresses to get list of namenodes.
hadoop-yetus commented on pull request #2639: URL: https://github.com/apache/hadoop/pull/2639#issuecomment-863817574
[GitHub] [hadoop] tomscut opened a new pull request #3120: HDFS-16079. Improve the block state change log
tomscut opened a new pull request #3120: URL: https://github.com/apache/hadoop/pull/3120

JIRA: [HDFS-16079](https://issues.apache.org/jira/browse/HDFS-16079)

Improve the block state change log: add readOnlyReplicas and replicasOnStaleNodes.
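A rough illustration of what the enriched log line could look like. The two counters come from the PR description; the logger name and method here are hypothetical:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class BlockStateChangeLogSketch {
  private static final Logger BLOCK_LOG =
      LoggerFactory.getLogger("BlockStateChange");

  // Sketch: include readOnlyReplicas and replicasOnStaleNodes in the
  // block state change log, per HDFS-16079.
  static void logReplicaCounts(String block, int liveReplicas,
      int readOnlyReplicas, int replicasOnStaleNodes) {
    BLOCK_LOG.info("BLOCK* {} live={} readOnlyReplicas={} "
            + "replicasOnStaleNodes={}",
        block, liveReplicas, readOnlyReplicas, replicasOnStaleNodes);
  }
}
```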
[GitHub] [hadoop] ferhui commented on a change in pull request #3114: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
ferhui commented on a change in pull request #3114: URL: https://github.com/apache/hadoop/pull/3114#discussion_r654226534

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
##########

@@ -4120,6 +4077,12 @@ private boolean processAndHandleReportedBlock(
       DatanodeStorageInfo storageInfo, Block block,
       ReplicaState reportedState, DatanodeDescriptor delHintNode)
       throws IOException {
+    // blockReceived reports a finalized block
+    Collection<BlockInfoToAdd> toAdd = new LinkedList<>();
+    Collection<Block> toInvalidate = new LinkedList();

Review comment:
       Thanks, resolved.
[GitHub] [hadoop] hadoop-yetus commented on pull request #3120: HDFS-16079. Improve the block state change log
hadoop-yetus commented on pull request #3120: URL: https://github.com/apache/hadoop/pull/3120#issuecomment-864262029

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 27m 20s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 37m 54s | | trunk passed |
| +1 :green_heart: | compile | 1m 45s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 32s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 12s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 51s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 8s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 35s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 4m 7s | | trunk passed |
| +1 :green_heart: | shadedclient | 22m 39s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 30s | | the patch passed |
| +1 :green_heart: | compile | 1m 47s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 1m 47s | | the patch passed |
| +1 :green_heart: | compile | 1m 39s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 1m 39s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 1m 10s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 36s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 59s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 23s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 21s | | the patch passed |
| +1 :green_heart: | shadedclient | 18m 54s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 360m 17s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3120/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 49s | | The patch does not generate ASF License warnings. |
| | | 491m 28s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
| | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
| | hadoop.hdfs.TestDFSShell |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatus |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3120/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3120 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 2c8aa74d942f 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / b7a6850643cb0134138fbcb5762e33701991d9f3 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3120/1/testReport/ |
| Max. process+thread count | 2124 (vs. ulimit of 5500) |
| modules | C: ha
[GitHub] [hadoop] whbing commented on pull request #3114: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
whbing commented on pull request #3114: URL: https://github.com/apache/hadoop/pull/3114#issuecomment-863810193
[jira] [Work logged] (HADOOP-16528) Update document for web authentication kerberos principal configuration
[ https://issues.apache.org/jira/browse/HADOOP-16528?focusedWorklogId=612005&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-612005 ]

ASF GitHub Bot logged work on HADOOP-16528:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 18/Jun/21 21:08
            Start Date: 18/Jun/21 21:08
    Worklog Time Spent: 10m
      Work Description: hadoop-yetus commented on pull request #1953:
URL: https://github.com/apache/hadoop/pull/1953#issuecomment-864064259

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 24m 45s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 12 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 12m 44s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 27m 17s | | trunk passed |
| +1 :green_heart: | compile | 31m 5s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 25m 35s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 5m 9s | | trunk passed |
| +1 :green_heart: | mvnsite | 4m 38s | | trunk passed |
| +1 :green_heart: | javadoc | 3m 0s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 4m 18s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 8m 53s | | trunk passed |
| +1 :green_heart: | shadedclient | 18m 31s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 31s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 30s | | the patch passed |
| +1 :green_heart: | compile | 29m 43s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 29m 43s | | the patch passed |
| +1 :green_heart: | compile | 25m 58s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 25m 58s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 4m 45s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1953/6/artifact/out/results-checkstyle-root.txt) | root: The patch generated 1 new + 969 unchanged - 2 fixed = 970 total (was 971) |
| +1 :green_heart: | mvnsite | 4m 19s | | the patch passed |
| +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 2m 44s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 4m 2s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 8m 38s | | the patch passed |
| +1 :green_heart: | shadedclient | 17m 51s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 19m 27s | | hadoop-common in the patch passed. |
| -1 :x: | unit | 408m 23s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1953/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | unit | 3m 40s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 1m 45s | | The patch does not generate ASF License warnings. |
| | | 704m 5s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
| | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
| | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| | hadoop.hdfs.TestDFSShell |
| | hadoop.hdfs.TestSnapshotCommands |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
| | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackof
[jira] [Updated] (HADOOP-16528) Update document for web authentication kerberos principal configuration
[ https://issues.apache.org/jira/browse/HADOOP-16528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HADOOP-16528:
------------------------------------
    Labels: pull-request-available  (was: )

> Update document for web authentication kerberos principal configuration
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-16528
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16528
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: auth
>            Reporter: Chen Zhang
>            Assignee: Masatake Iwasaki
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> The config {{dfs.web.authentication.kerberos.principal}} is not used anymore
> after HADOOP-16354, but the document for WebHDFS is not updated; the
> hdfs-default.xml should be updated as well.
[GitHub] [hadoop] hadoop-yetus commented on pull request #1953: HADOOP-16528. Update document for web authentication kerberos principal configuration.
hadoop-yetus commented on pull request #1953: URL: https://github.com/apache/hadoop/pull/1953#issuecomment-864064259

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:--------:|:-------:|
| +0 :ok: | reexec | 24m 45s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 12 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 12m 44s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 27m 17s | | trunk passed |
| +1 :green_heart: | compile | 31m 5s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 25m 35s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 5m 9s | | trunk passed |
| +1 :green_heart: | mvnsite | 4m 38s | | trunk passed |
| +1 :green_heart: | javadoc | 3m 0s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 4m 18s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 8m 53s | | trunk passed |
| +1 :green_heart: | shadedclient | 18m 31s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 31s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 3m 30s | | the patch passed |
| +1 :green_heart: | compile | 29m 43s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 29m 43s | | the patch passed |
| +1 :green_heart: | compile | 25m 58s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 25m 58s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 4m 45s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1953/6/artifact/out/results-checkstyle-root.txt) | root: The patch generated 1 new + 969 unchanged - 2 fixed = 970 total (was 971) |
| +1 :green_heart: | mvnsite | 4m 19s | | the patch passed |
| +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 2m 44s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 4m 2s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 8m 38s | | the patch passed |
| +1 :green_heart: | shadedclient | 17m 51s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 19m 27s | | hadoop-common in the patch passed. |
| -1 :x: | unit | 408m 23s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1953/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | unit | 3m 40s | | hadoop-aws in the patch passed. |
| +1 :green_heart: | asflicense | 1m 45s | | The patch does not generate ASF License warnings. |
| | | 704m 5s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
| | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
| | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
| | hadoop.hdfs.TestDFSShell |
| | hadoop.hdfs.TestSnapshotCommands |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
| | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
| | hadoop.hdfs.TestHDFSFileSystemContract |
| | hadoop.hdfs.server.namenode.TestDecommissioningStatus |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-1953/6/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1953 |
| Optional Tests | dupname
[GitHub] [hadoop] LeonGao91 commented on a change in pull request #2639: HDFS-15785. Datanode to support using DNS to resolve nameservices to IP addresses to get list of namenodes.
LeonGao91 commented on a change in pull request #2639: URL: https://github.com/apache/hadoop/pull/2639#discussion_r654073694

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
## @@ -647,6 +634,58 @@ public static String addKeySuffixes(String key, String... suffixes) {
       getNNLifelineRpcAddressesForCluster(Configuration conf)
       throws IOException {
+    Collection<String> parentNameServices = getParentNameServices(conf);
+
+    return getAddressesForNsIds(conf, parentNameServices, null,
+        DFS_NAMENODE_LIFELINE_RPC_ADDRESS_KEY);
+  }
+
+  //
+  /**
+   * Returns the configured address for all NameNodes in the cluster.
+   * This is similar to DFSUtilClient.getAddressesForNsIds()
+   * but can access DFSConfigKeys.
+   *
+   * @param conf configuration
+   * @param defaultAddress default address to return in case key is not found.
+   * @param keys Set of keys to look for in the order of preference
+   *
+   * @return a map(nameserviceId to map(namenodeId to InetSocketAddress))
+   */
+  static Map<String, Map<String, InetSocketAddress>> getAddressesForNsIds(

Review comment: Yeah, that sounds better, let me try it out.
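For readers skimming the digest, the nested map shape documented in the javadoc above, nameserviceId to (namenodeId to InetSocketAddress), can be consumed like this. This is a hypothetical sketch for illustration only, not DFSUtil code; the class and host names are invented:

```java
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.Map;

public class NsAddressMapDemo {
    public static void main(String[] args) {
        // map(nameserviceId -> map(namenodeId -> InetSocketAddress))
        Map<String, Map<String, InetSocketAddress>> addresses = new HashMap<>();

        Map<String, InetSocketAddress> ns1 = new HashMap<>();
        ns1.put("nn1", InetSocketAddress.createUnresolved("nn1.example.com", 8020));
        ns1.put("nn2", InetSocketAddress.createUnresolved("nn2.example.com", 8020));
        addresses.put("ns1", ns1);

        // Walk every namenode of every nameservice.
        addresses.forEach((nsId, namenodes) ->
            namenodes.forEach((nnId, addr) ->
                System.out.println(nsId + "/" + nnId + " -> " + addr)));
    }
}
```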
[GitHub] [hadoop] virajjasani commented on pull request #3115: HDFS-16075. Use empty array constants present in StorageType and DatanodeInfo to avoid creating redundant objects
virajjasani commented on pull request #3115: URL: https://github.com/apache/hadoop/pull/3115#issuecomment-863768777
[GitHub] [hadoop] ferhui merged pull request #3113: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
ferhui merged pull request #3113: URL: https://github.com/apache/hadoop/pull/3113
[GitHub] [hadoop] whbing commented on a change in pull request #3114: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
whbing commented on a change in pull request #3114: URL: https://github.com/apache/hadoop/pull/3114#discussion_r654193188

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
## @@ -4120,6 +4077,12 @@ private boolean processAndHandleReportedBlock(
       DatanodeStorageInfo storageInfo, Block block,
       ReplicaState reportedState, DatanodeDescriptor delHintNode)
       throws IOException {
+    // blockReceived reports a finalized block
+    Collection<BlockInfoToAdd> toAdd = new LinkedList<>();
+    Collection<Block> toInvalidate = new LinkedList<Block>();

Review comment: Nit: can be `<>`
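The nit refers to Java's diamond operator: when the declaration already carries the type arguments, the constructor can omit them and let the compiler infer. A tiny self-contained illustration (not Hadoop code, the String element type is arbitrary):

```java
import java.util.Collection;
import java.util.LinkedList;

public class DiamondExample {
    public static void main(String[] args) {
        // Explicit type argument on the right-hand side (verbose):
        Collection<String> explicit = new LinkedList<String>();
        // Diamond operator: the compiler infers <String> from the declaration.
        Collection<String> inferred = new LinkedList<>();

        explicit.add("a");
        inferred.add("b");
        System.out.println(explicit + " " + inferred);
    }
}
```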
[GitHub] [hadoop] ferhui commented on pull request #3113: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
ferhui commented on pull request #3113: URL: https://github.com/apache/hadoop/pull/3113#issuecomment-863797134

@AlphaGouGe Thanks
[GitHub] [hadoop] tasanuma commented on pull request #3115: HDFS-16075. Use empty array constants present in StorageType and DatanodeInfo to avoid creating redundant objects
tasanuma commented on pull request #3115: URL: https://github.com/apache/hadoop/pull/3115#issuecomment-863700707

It makes sense to me. A finalized empty array is immutable.

@virajjasani Could you fix the new checkstyle warning?
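For context on the pattern being approved here: a zero-length array has no elements to mutate, so a single shared constant can be handed out everywhere instead of allocating `new Foo[0]` on each call. A minimal sketch of the idiom; the Node class and toArray helper are illustrative, not the actual StorageType or DatanodeInfo declarations:

```java
import java.util.ArrayList;
import java.util.List;

public class Node {
    // One shared instance: a zero-length array cannot be modified,
    // so returning the same object from every call site is safe.
    public static final Node[] EMPTY_ARRAY = {};

    public static Node[] toArray(List<Node> nodes) {
        if (nodes == null || nodes.isEmpty()) {
            return EMPTY_ARRAY; // no per-call allocation for the common empty case
        }
        return nodes.toArray(new Node[0]);
    }

    public static void main(String[] args) {
        System.out.println(toArray(new ArrayList<>()).length); // 0
    }
}
```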
[GitHub] [hadoop] AlphaGouGe commented on pull request #3113: HDFS-13671. Namenode deletes large dir slowly caused by FoldedTreeSet#removeAndGet
AlphaGouGe commented on pull request #3113: URL: https://github.com/apache/hadoop/pull/3113#issuecomment-863796508

LGTM
[GitHub] [hadoop] symious commented on a change in pull request #3100: HDFS-16065. RBF: Add metrics to record Router's operations
symious commented on a change in pull request #3100: URL: https://github.com/apache/hadoop/pull/3100#discussion_r654378797

## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterClientMetrics.java
## @@ -0,0 +1,646 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;

Review comment: Ok, thanks for the review.
[GitHub] [hadoop] hadoop-yetus commented on pull request #3115: HDFS-16075. Use empty array constants present in StorageType and DatanodeInfo to avoid creating redundant objects
hadoop-yetus commented on pull request #3115: URL: https://github.com/apache/hadoop/pull/3115#issuecomment-863434460
[GitHub] [hadoop] ayushtkn commented on a change in pull request #3121: HDFS-16080. RBF: Invoking method in all locations should break the loop after successful result
ayushtkn commented on a change in pull request #3121: URL: https://github.com/apache/hadoop/pull/3121#discussion_r654568584

## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
## @@ -1129,25 +1129,17 @@ private static boolean isExpectedValue(Object expectedValue, Object value) {
    * Invoke method in all locations and return success if any succeeds.
    *
    * @param <T> The type of the remote location.
-   * @param <R> The type of the remote method return.
    * @param locations List of remote locations to call concurrently.
    * @param method The remote method and parameters to invoke.
    * @return If the call succeeds in any location.
    * @throws IOException If any of the calls return an exception.
    */
-  public <T extends RemoteLocationContext, R> boolean invokeAll(
+  public <T extends RemoteLocationContext> boolean invokeAll(
       final Collection<T> locations, final RemoteMethod method)
-      throws IOException {
-    boolean anyResult = false;
+      throws IOException {
     Map<T, Boolean> results =
         invokeConcurrent(locations, method, false, false, Boolean.class);
-    for (Boolean value : results.values()) {
-      boolean result = value.booleanValue();
-      if (result) {
-        anyResult = true;
-      }
-    }
-    return anyResult;
+    return results.values().stream().anyMatch(value -> value);

Review comment: Why don't we just do `results.containsValue()`? Some performance benefit here?
[GitHub] [hadoop] ferhui edited a comment on pull request #3115: HDFS-16075. Use empty array constants present in StorageType and DatanodeInfo to avoid creating redundant objects
ferhui edited a comment on pull request #3115: URL: https://github.com/apache/hadoop/pull/3115#issuecomment-864141691

@virajjasani Thanks. We don't need to replace anything in ResourceEstimatorUtil and CachedBlock; I just grepped EMPTY_ARRAY in the source files.

> Is it good to track TaskCompletionEvent changes in a separate MapReduce Jira?

Agree
[GitHub] [hadoop] bogthe commented on a change in pull request #3109: HADOOP-17764. S3AInputStream read does not re-open the input stream on the second read retry attempt
bogthe commented on a change in pull request #3109: URL: https://github.com/apache/hadoop/pull/3109#discussion_r653931489

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java
## @@ -396,6 +396,41 @@ private void incrementBytesRead(long bytesRead) {
     }
   }
+
+  @FunctionalInterface
+  interface CheckedIntSupplier {
+    int get() throws IOException;
+  }
+
+  /**
+   * Helper function that allows to retry an IntSupplier in case of `IOException`.
+   * This function is used by `read()` and `read(buf, off, len)` functions. It tries to run
+   * `readFn` and in case of `IOException`:
+   * 1. If it gets an EOFException, return -1
+   * 2. Else, run `onReadFailure` and retry running `readFn`. If it fails again,
+   *    we run `onReadFailure` and re-throw the error.
+   * @param readFn the function to read, it must return an integer
+   * @param length length of data being attempted to read
+   * @return -1 if `readFn` throws EOFException, else returns int value from the result of `readFn`
+   * @throws IOException if retry of `readFn` also fails with `IOException`
+   */
+  private int retryReadOnce(CheckedIntSupplier readFn, int length) throws IOException {
+    try {
+      return readFn.get();
+    } catch (EOFException e) {
+      return -1;
+    } catch (IOException e) {
+      onReadFailure(e, length, e instanceof SocketTimeoutException);

Review comment: I see you're calling `onReadFailure` with `length` instead of `1`. Any reasoning for this? That is used to calculate the range for a `GetObjectRequest` when the stream is being reopened. If it's intended, then I would be curious about the impact it has on larger objects; have you done any testing around it?

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
## @@ -1439,10 +1439,13 @@ public S3Object getObject(GetObjectRequest request) {
    * using FS state as well as the status.
    * @param fileStatus file status.
    * @param seekPolicy input policy for this operation
+   * @param changePolicy change policy for this operation.
    * @param readAheadRange readahead value.
+   * @param auditSpan audit span.
    * @return a context for read and select operations.
    */
-  private S3AReadOpContext createReadContext(
+  @VisibleForTesting
+  protected S3AReadOpContext createReadContext(

Review comment: I'm not really convinced that this is needed. Check the main comment for details.

## File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AInputStreamRetry.java
## @@ -0,0 +1,167 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import javax.net.ssl.SSLException;
+import java.io.IOException;
+import java.net.SocketException;
+import java.nio.charset.Charset;
+
+import com.amazonaws.services.s3.model.GetObjectRequest;
+import com.amazonaws.services.s3.model.ObjectMetadata;
+import com.amazonaws.services.s3.model.S3Object;
+import com.amazonaws.services.s3.model.S3ObjectInputStream;
+import org.junit.Test;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.audit.impl.NoopSpan;
+import org.apache.hadoop.fs.s3a.auth.delegation.EncryptionSecrets;
+import org.apache.hadoop.fs.s3a.impl.ChangeDetectionPolicy;
+
+import static java.lang.Math.min;
+import static org.junit.Assert.assertArrayEquals;
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Tests S3AInputStream retry behavior on read failure.
+ * These tests are for validating expected behavior of retrying the S3AInputStream
+ * read() and read(b, off, len), it tests that the read should reopen the input stream and retry
+ * the read when IOException is thrown during the read process.
+ */
+public class TestS3AInputStreamRetry extends AbstractS3AMockTest {
+
+  String input = "ab";
+
+  @Test
+  public void testInputStreamReadRetryForException() throws IOException {
+    S3AInputStream s3AInputStream = getMockedS3AInputStream();
+
+    assertEquals("'a' from the test input stream 'ab' should be the first character being read",
+        in
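To make the retry semantics described in the `retryReadOnce` javadoc above concrete, here is a standalone sketch of a once-retrying read helper. It is an illustration of the pattern only, not the actual S3AInputStream code; `reopenStream` is a hypothetical stand-in for the real `onReadFailure` handling, and the retry count (exactly one extra attempt) mirrors the behavior the javadoc describes:

```java
import java.io.EOFException;
import java.io.IOException;

public class RetryOnceDemo {

    @FunctionalInterface
    interface CheckedIntSupplier {
        int get() throws IOException;
    }

    /** Re-establish the underlying stream; a stand-in for onReadFailure(). */
    private static void reopenStream(IOException cause) {
        System.err.println("reopening after: " + cause.getMessage());
    }

    /** Run readFn; map EOF to -1; on any other IOException reopen and retry once. */
    static int retryReadOnce(CheckedIntSupplier readFn) throws IOException {
        try {
            return readFn.get();
        } catch (EOFException eof) {
            return -1; // end of stream is a normal result, not a failure
        } catch (IOException e) {
            reopenStream(e);
            try {
                return readFn.get(); // second and final attempt
            } catch (EOFException eof) {
                return -1;
            } catch (IOException retryFailure) {
                reopenStream(retryFailure); // leave the stream usable for callers
                throw retryFailure;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        final int[] attempts = {0};
        int value = retryReadOnce(() -> {
            if (attempts[0]++ == 0) {
                throw new IOException("transient failure");
            }
            return 42;
        });
        System.out.println(value); // 42, read succeeds on the second attempt
    }
}
```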
[jira] [Work logged] (HADOOP-17764) S3AInputStream read does not re-open the input stream on the second read retry attempt
[ https://issues.apache.org/jira/browse/HADOOP-17764?focusedWorklogId=611897&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-611897 ]

ASF GitHub Bot logged work on HADOOP-17764:
-------------------------------------------

Author: ASF GitHub Bot
Created on: 18/Jun/21 20:54
Start Date: 18/Jun/21 20:54
Worklog Time Spent: 10m

Work Description: bogthe commented on a change in pull request #3109: URL: https://github.com/apache/hadoop/pull/3109#discussion_r653931489
[GitHub] [hadoop] hadoop-yetus commented on pull request #3118: [Do not commit] CI for Debian 10
hadoop-yetus commented on pull request #3118: URL: https://github.com/apache/hadoop/pull/3118#issuecomment-863450554
[GitHub] [hadoop] hadoop-yetus commented on pull request #3117: HDFS-16076. Avoid using slow DataNodes for reading by sorting locations
hadoop-yetus commented on pull request #3117: URL: https://github.com/apache/hadoop/pull/3117#issuecomment-863612534
[GitHub] [hadoop] tomscut opened a new pull request #3119: HDFS-16078. Remove unused parameters for DatanodeManager.handleLifeli…
tomscut opened a new pull request #3119: URL: https://github.com/apache/hadoop/pull/3119

JIRA: [HDFS-16078](https://issues.apache.org/jira/browse/HDFS-16078)

Remove unused parameters (blockPoolId, maxTransfers) for DatanodeManager.handleLifeline().
[GitHub] [hadoop] hadoop-yetus commented on pull request #3121: HDFS-16080. RBF: Invoking method in all locations should break the loop after successful result
hadoop-yetus commented on pull request #3121: URL: https://github.com/apache/hadoop/pull/3121#issuecomment-864214409

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 21m 10s |  | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s |  | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s |  | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s |  | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s |  | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 28s |  | trunk passed |
| +1 :green_heart: | compile | 0m 40s |  | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 0m 35s |  | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 0m 23s |  | trunk passed |
| +1 :green_heart: | mvnsite | 0m 39s |  | trunk passed |
| +1 :green_heart: | javadoc | 0m 37s |  | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 51s |  | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 14s |  | trunk passed |
| +1 :green_heart: | shadedclient | 17m 4s |  | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 32s |  | the patch passed |
| +1 :green_heart: | compile | 0m 34s |  | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 0m 34s |  | the patch passed |
| +1 :green_heart: | compile | 0m 28s |  | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 0m 28s |  | the patch passed |
| +1 :green_heart: | blanks | 0m 0s |  | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 16s |  | the patch passed |
| +1 :green_heart: | mvnsite | 0m 31s |  | the patch passed |
| +1 :green_heart: | javadoc | 0m 31s |  | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 0m 48s |  | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 1m 19s |  | the patch passed |
| +1 :green_heart: | shadedclient | 16m 47s |  | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 25m 0s |  | hadoop-hdfs-rbf in the patch passed. |
| +1 :green_heart: | asflicense | 0m 29s |  | The patch does not generate ASF License warnings. |
|  |  | 125m 23s |  |  |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3121/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3121 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 3520cb7c 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 5ec12dce8c203665c003f54ed77c54b1583e328c |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3121/2/testReport/ |
| Max. process+thread count | 2255 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3121/2/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] GauthamBanasandra opened a new pull request #3118: [Do not commit] CI for Debian 10
GauthamBanasandra opened a new pull request #3118: URL: https://github.com/apache/hadoop/pull/3118
[GitHub] [hadoop] virajjasani commented on a change in pull request #3121: HDFS-16080. RBF: Invoking method in all locations should break the loop after successful result
virajjasani commented on a change in pull request #3121: URL: https://github.com/apache/hadoop/pull/3121#discussion_r654572527

## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java

Review comment: Hmm, nice one. I think one is not much better than the other; it's just about using a stream vs. a for loop (and could open up multiple discussions :) ). I agree that using containsValue() should be more lightweight, so I am fine using it if you have a strong preference. `TreeMap.containsValue()`:
```
public boolean containsValue(Object value) {
    for (Entry<K,V> e = getFirstEntry(); e != null; e = successor(e))
        if (valEquals(value, e.value))
            return true;
    return false;
}
```
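To make the trade-off in this thread concrete, here is a minimal standalone comparison of the two approaches. The map contents are hypothetical, not Router code; both calls return true as soon as any location reports success:

```java
import java.util.Map;
import java.util.TreeMap;

public class AnyMatchVsContainsValue {
    public static void main(String[] args) {
        Map<String, Boolean> results = new TreeMap<>();
        results.put("ns0", false);
        results.put("ns1", true);

        // Stream version: sets up a pipeline and unboxes each Boolean
        // through the predicate.
        boolean viaStream = results.values().stream().anyMatch(value -> value);

        // Map lookup: walks the entries and compares against Boolean.TRUE
        // directly, with no stream machinery.
        boolean viaContains = results.containsValue(Boolean.TRUE);

        System.out.println(viaStream + " " + viaContains); // true true
    }
}
```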
[GitHub] [hadoop] hadoop-yetus commented on pull request #3119: HDFS-16078. Remove unused parameters for DatanodeManager.handleLifeli…
hadoop-yetus commented on pull request #3119: URL: https://github.com/apache/hadoop/pull/3119#issuecomment-864176054

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 15m 21s |  | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s |  | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s |  | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s |  | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s |  | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 37m 33s |  | trunk passed |
| +1 :green_heart: | compile | 1m 42s |  | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 31s |  | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 1m 14s |  | trunk passed |
| +1 :green_heart: | mvnsite | 1m 37s |  | trunk passed |
| +1 :green_heart: | javadoc | 1m 10s |  | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 38s |  | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 4m 16s |  | trunk passed |
| +1 :green_heart: | shadedclient | 18m 46s |  | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 14s |  | the patch passed |
| +1 :green_heart: | compile | 1m 15s |  | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 1m 15s |  | the patch passed |
| +1 :green_heart: | compile | 1m 11s |  | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 1m 11s |  | the patch passed |
| +1 :green_heart: | blanks | 0m 0s |  | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 54s |  | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 200 unchanged - 1 fixed = 200 total (was 201) |
| +1 :green_heart: | mvnsite | 1m 14s |  | the patch passed |
| +1 :green_heart: | javadoc | 0m 47s |  | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 19s |  | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | spotbugs | 3m 13s |  | the patch passed |
| +1 :green_heart: | shadedclient | 16m 5s |  | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 227m 35s |  | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 45s |  | The patch does not generate ASF License warnings. |
|  |  | 337m 33s |  |  |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3119/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/3119 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 7139ccdf9c16 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 3654e0871083f06392c6109e69d882b048810157 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3119/1/testReport/ |
| Max. process+thread count | 3236 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3119/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[jira] [Resolved] (HADOOP-17114) Replace Guava initialization of Lists.newArrayList
[ https://issues.apache.org/jira/browse/HADOOP-17114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Viraj Jasani resolved HADOOP-17114. --- Resolution: Duplicate > Replace Guava initialization of Lists.newArrayList > -- > > Key: HADOOP-17114 > URL: https://issues.apache.org/jira/browse/HADOOP-17114 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ahmed Hussein >Priority: Major > > There are unjustified use of Guava APIs to initialize LinkedLists and > ArrayLists. This could be simply replaced by Java API. > By analyzing hadoop code, the best way to replace guava is to do the > following steps: > * create a wrapper class org.apache.hadoop.util.unguava.Lists > * implement the following interfaces in Lists: > ** public static ArrayList newArrayList() > ** public static ArrayList newArrayList(E... elements) > ** public static ArrayList newArrayList(Iterable > elements) > ** public static ArrayList newArrayList(Iterator > elements) > ** public static ArrayList newArrayListWithCapacity(int > initialArraySize) > ** public static LinkedList newLinkedList() > ** public static LinkedList newLinkedList(Iterable > elements) > ** public static List asList(@Nullable E first, E[] rest) > > After this class is created, we can simply replace the import statement in > all the source code. > > {code:java} > Targets > Occurrences of 'com.google.common.collect.Lists;' in project with mask > '*.java' > Found Occurrences (246 usages found) > org.apache.hadoop.conf (1 usage found) > TestReconfiguration.java (1 usage found) > 22 import com.google.common.collect.Lists; > org.apache.hadoop.crypto (1 usage found) > CryptoCodec.java (1 usage found) > 35 import com.google.common.collect.Lists; > org.apache.hadoop.fs.azurebfs (3 usages found) > ITestAbfsIdentityTransformer.java (1 usage found) > 25 import com.google.common.collect.Lists; > ITestAzureBlobFilesystemAcl.java (1 usage found) > 21 import com.google.common.collect.Lists; > ITestAzureBlobFileSystemCheckAccess.java (1 usage found) > 20 import com.google.common.collect.Lists; > org.apache.hadoop.fs.http.client (2 usages found) > BaseTestHttpFSWith.java (1 usage found) > 77 import com.google.common.collect.Lists; > HttpFSFileSystem.java (1 usage found) > 75 import com.google.common.collect.Lists; > org.apache.hadoop.fs.permission (2 usages found) > AclStatus.java (1 usage found) > 27 import com.google.common.collect.Lists; > AclUtil.java (1 usage found) > 26 import com.google.common.collect.Lists; > org.apache.hadoop.fs.s3a (3 usages found) > ITestS3AFailureHandling.java (1 usage found) > 23 import com.google.common.collect.Lists; > ITestS3GuardListConsistency.java (1 usage found) > 34 import com.google.common.collect.Lists; > S3AUtils.java (1 usage found) > 57 import com.google.common.collect.Lists; > org.apache.hadoop.fs.s3a.auth (1 usage found) > RolePolicies.java (1 usage found) > 26 import com.google.common.collect.Lists; > org.apache.hadoop.fs.s3a.commit (2 usages found) > ITestCommitOperations.java (1 usage found) > 28 import com.google.common.collect.Lists; > TestMagicCommitPaths.java (1 usage found) > 25 import com.google.common.collect.Lists; > org.apache.hadoop.fs.s3a.commit.staging (3 usages found) > StagingTestBase.java (1 usage found) > 47 import com.google.common.collect.Lists; > TestStagingPartitionedFileListing.java (1 usage found) > 31 import com.google.common.collect.Lists; > TestStagingPartitionedTaskCommit.java (1 usage found) > 28 import com.google.common.collect.Lists; > org.apache.hadoop.fs.s3a.impl (2 usages 
found) > RenameOperation.java (1 usage found) > 30 import com.google.common.collect.Lists; > TestPartialDeleteFailures.java (1 usage found) > 37 import com.google.common.collect.Lists; > org.apache.hadoop.fs.s3a.s3guard (3 usages found) > DumpS3GuardDynamoTable.java (1 usage found) > 38 import com.google.common.collect.Lists; > DynamoDBMetadataStore.java (1 usage found) > 67 import com.google.common.collect.Lists; > ITestDynamoDBMetadataStore.java (1 usage found) > 49 import com.google.common.collect.Lists; > org.apache.hadoop.fs.shell (1 usage found) > AclCommands.java (1 usage found) > 25 import com.google.common.collect.Lists; > org.apache.hadoop.fs.viewfs
[jira] [Assigned] (HADOOP-17114) Replace Guava initialization of Lists.newArrayList
[ https://issues.apache.org/jira/browse/HADOOP-17114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viraj Jasani reassigned HADOOP-17114:
-------------------------------------

    Assignee:     (was: Viraj Jasani)

> Replace Guava initialization of Lists.newArrayList
> --------------------------------------------------
>
>                 Key: HADOOP-17114
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17114
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Ahmed Hussein
>            Priority: Major
>
[jira] [Commented] (HADOOP-17114) Replace Guava initialization of Lists.newArrayList
[ https://issues.apache.org/jira/browse/HADOOP-17114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365612#comment-17365612 ]

Viraj Jasani commented on HADOOP-17114:
---------------------------------------

With HADOOP-17152 and its sub-tasks resolved, marking this as duplicate. Thanks

> Replace Guava initialization of Lists.newArrayList
> --------------------------------------------------
>
>                 Key: HADOOP-17114
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17114
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Ahmed Hussein
>            Assignee: Viraj Jasani
>            Priority: Major
>
[jira] [Commented] (HADOOP-17749) Remove lock contention in SelectorPool of SocketIOWithTimeout
[ https://issues.apache.org/jira/browse/HADOOP-17749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365311#comment-17365311 ]

Xuesen Liang commented on HADOOP-17749:
---------------------------------------

Ping [~omalley], could you kindly review this issue and PR? Thanks.

> Remove lock contention in SelectorPool of SocketIOWithTimeout
> -------------------------------------------------------------
>
>                 Key: HADOOP-17749
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17749
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: common
>            Reporter: Xuesen Liang
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> *SelectorPool* in
> hadoop-common/src/main/java/org/apache/hadoop/net/*SocketIOWithTimeout.java*
> is a point of lock contention.
> For example:
> {code:java}
> $ grep 'waiting to lock <0x7f7d94006d90>' 63692.jstack | uniq -c
> 1005     - waiting to lock <0x7f7d94006d90> (a org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool)
> {code}
> and the thread stack is as follows:
> {code:java}
> "IPC Client (324579982) connection to /100.10.6.10:60020 from user_00" #14139 daemon prio=5 os_prio=0 tid=0x7f7374039000 nid=0x85cc waiting for monitor entry [0x7f6f45939000]
>    java.lang.Thread.State: BLOCKED (on object monitor)
>     at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.get(SocketIOWithTimeout.java:390)
>     - waiting to lock <0x7f7d94006d90> (a org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool)
>     at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:325)
>     at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
>     at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
>     at java.io.FilterInputStream.read(FilterInputStream.java:133)
>     at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>     at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
>     - locked <0x7fa818caf258> (a java.io.BufferedInputStream)
>     at java.io.DataInputStream.readInt(DataInputStream.java:387)
>     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.readResponse(RpcClientImpl.java:967)
>     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:568)
> {code}
> We should remove the lock contention.
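To illustrate the general shape of fix that such a jstack report usually motivates, a pool guarded by a single monitor can be rebuilt on a lock-free queue so that the hot get/release path no longer serializes all threads on one lock. This is a sketch of the technique only, under the assumption that a concurrent queue is an acceptable pool structure; it is not the actual HADOOP-17749 patch:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Supplier;

/** A tiny object pool that avoids a global synchronized monitor. */
public class NonBlockingPool<T> {

    private final ConcurrentLinkedQueue<T> idle = new ConcurrentLinkedQueue<>();

    /** Take an idle instance, or create one when the pool is empty. */
    public T get(Supplier<T> factory) {
        T instance = idle.poll(); // lock-free dequeue; no monitor to contend on
        return instance != null ? instance : factory.get();
    }

    /** Return an instance for reuse by later callers. */
    public void release(T instance) {
        idle.offer(instance);
    }

    public static void main(String[] args) {
        NonBlockingPool<StringBuilder> pool = new NonBlockingPool<>();
        StringBuilder sb = pool.get(StringBuilder::new);
        sb.append("reused");
        pool.release(sb);
        System.out.println(pool.get(StringBuilder::new)); // prints "reused"
    }
}
```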