[jira] [Commented] (HADOOP-18820) AWS SDK v2: make the v1 bridging support optional
[ https://issues.apache.org/jira/browse/HADOOP-18820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749485#comment-17749485 ] ASF GitHub Bot commented on HADOOP-18820: - hadoop-yetus commented on PR #5872: URL: https://github.com/apache/hadoop/pull/5872#issuecomment-1659666215
[GitHub] [hadoop] hadoop-yetus commented on pull request #5872: HADOOP-18820. Cut AWS v1 support
hadoop-yetus commented on PR #5872: URL: https://github.com/apache/hadoop/pull/5872#issuecomment-1659666215

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 37s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. |
| +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 11 new or modified test files. |
| _ feature-HADOOP-18073-s3a-sdk-upgrade Compile Tests _ |
| +0 :ok: | mvndep | 14m 36s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 22m 4s | | feature-HADOOP-18073-s3a-sdk-upgrade passed |
| +1 :green_heart: | compile | 11m 9s | | feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 9m 36s | | feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | checkstyle | 2m 32s | | feature-HADOOP-18073-s3a-sdk-upgrade passed |
| +1 :green_heart: | mvnsite | 15m 36s | | feature-HADOOP-18073-s3a-sdk-upgrade passed |
| +1 :green_heart: | javadoc | 6m 8s | | feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 5m 18s | | feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +0 :ok: | spotbugs | 0m 17s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) |
| -1 :x: | spotbugs | 0m 42s | [/branch-spotbugs-hadoop-tools_hadoop-aws-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5872/9/artifact/out/branch-spotbugs-hadoop-tools_hadoop-aws-warnings.html) | hadoop-tools/hadoop-aws in feature-HADOOP-18073-s3a-sdk-upgrade has 1 extant spotbugs warnings. |
| -1 :x: | spotbugs | 19m 15s | [/branch-spotbugs-root-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5872/9/artifact/out/branch-spotbugs-root-warnings.html) | root in feature-HADOOP-18073-s3a-sdk-upgrade has 1 extant spotbugs warnings. |
| +1 :green_heart: | shadedclient | 41m 11s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 41m 30s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 25s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 20m 53s | | the patch passed |
| +1 :green_heart: | compile | 10m 7s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 10m 7s | | the patch passed |
| +1 :green_heart: | compile | 9m 31s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | javac | 9m 31s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 2m 22s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5872/9/artifact/out/results-checkstyle-root.txt) | root: The patch generated 1 new + 12 unchanged - 12 fixed = 13 total (was 24) |
| +1 :green_heart: | mvnsite | 9m 25s | | the patch passed |
| +1 :green_heart: | shellcheck | 0m 0s | | No new issues. |
| +1 :green_heart: | javadoc | 5m 54s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| -1 :x: | javadoc | 4m 43s | [/patch-javadoc-root-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5872/9/artifact/out/patch-javadoc-root-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt) | root in the patch failed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09. |
| +0 :ok: | spotbugs | 0m 17s | | hadoop-project has no data from spotbugs |
| +1 :green_heart: | shadedclient | 38m 15s | | patch has no errors when building and testing our client artifacts. |
[GitHub] [hadoop] Hexiaoqiao commented on pull request #5900: HDFS-17134. RBF: Fix duplicate results of getListing through Router.
Hexiaoqiao commented on PR #5900: URL: https://github.com/apache/hadoop/pull/5900#issuecomment-1659638104

Let's wait another 24h; I will check it in if there are no more comments. Thanks.

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] Hexiaoqiao commented on a diff in pull request #5855: HDFS-17093. In the case of all datanodes sending FBR when the namenode restarts (large clusters), there is an issue with incompl
Hexiaoqiao commented on code in PR #5855: URL: https://github.com/apache/hadoop/pull/5855#discussion_r1280154310

## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:

@@ -2904,7 +2908,8 @@ public boolean processReport(final DatanodeID nodeID,
     }
     if (namesystem.isInStartupSafeMode()
         && !StorageType.PROVIDED.equals(storageInfo.getStorageType())
-        && storageInfo.getBlockReportCount() > 0) {
+        && storageInfo.getBlockReportCount() > 0
+        && totalReportNum == currentReportNum) {

Review Comment: I am not sure this will be a good solution with the condition `blockReportCount == 0`; consider the case where one disk has failed but has not been checked in time. Will that affect this logic here? Thanks.
[GitHub] [hadoop] xinglin commented on a diff in pull request #5855: HDFS-17093. In the case of all datanodes sending FBR when the namenode restarts (large clusters), there is an issue with incomplete
xinglin commented on code in PR #5855: URL: https://github.com/apache/hadoop/pull/5855#discussion_r1280143136

## hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:

@@ -2904,7 +2908,8 @@ public boolean processReport(final DatanodeID nodeID,
     }
     if (namesystem.isInStartupSafeMode()
         && !StorageType.PROVIDED.equals(storageInfo.getStorageType())
-        && storageInfo.getBlockReportCount() > 0) {
+        && storageInfo.getBlockReportCount() > 0
+        && totalReportNum == currentReportNum) {

Review Comment: I like @zhangshuyan0's proposal better. The following section of code could also be separated out into a function of its own:

```
// Remove the lease when we have received block reports
// for all storages of a particular DN.
void removeLease() {
  boolean needRemoveLease = true;
  for (DatanodeStorageInfo sInfo : node.getStorageInfos()) {
    if (sInfo.getBlockReportCount() == 0) {
      needRemoveLease = false;
    }
  }
  if (needRemoveLease) {
    blockReportLeaseManager.removeLease(node);
  }
}
```
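The rule being debated above (release the block-report lease only once every storage on the DataNode has sent at least one full block report) can be sketched in isolation. This is a hypothetical, self-contained illustration: `StorageInfo` here is a stand-in for Hadoop's `DatanodeStorageInfo`, not the real API.

```java
import java.util.List;

// Hypothetical stand-in for DatanodeStorageInfo: only the report count matters here.
class StorageInfo {
    private final int blockReportCount;

    StorageInfo(int blockReportCount) {
        this.blockReportCount = blockReportCount;
    }

    int getBlockReportCount() {
        return blockReportCount;
    }
}

public class LeaseCheck {
    // The proposed condition: keep the lease while any storage has sent zero
    // full block reports; release it once all storages have reported.
    static boolean shouldRemoveLease(List<StorageInfo> storages) {
        for (StorageInfo sInfo : storages) {
            if (sInfo.getBlockReportCount() == 0) {
                return false; // this storage has not reported yet
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // One storage still unreported: keep the lease.
        System.out.println(shouldRemoveLease(List.of(new StorageInfo(1), new StorageInfo(0)))); // false
        // All storages have reported at least once: the lease can be removed.
        System.out.println(shouldRemoveLease(List.of(new StorageInfo(1), new StorageInfo(2)))); // true
    }
}
```

Hexiaoqiao's concern maps directly onto this sketch: a failed disk whose storage never reports keeps `shouldRemoveLease` returning false indefinitely.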
[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5897: HDFS-17128. Updating SQLDelegationTokenSecretManager to use LoadingCache so tokens are updated frequently.
slfan1989 commented on code in PR #5897: URL: https://github.com/apache/hadoop/pull/5897#discussion_r1280115270

## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/DelegationTokenLoadingCache.java:

@@ -0,0 +1,116 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.security.token.delegation;
+
+import java.util.Collection;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Function;
+import org.apache.hadoop.thirdparty.com.google.common.cache.CacheBuilder;
+import org.apache.hadoop.thirdparty.com.google.common.cache.CacheLoader;
+import org.apache.hadoop.thirdparty.com.google.common.cache.LoadingCache;
+
+/**
+ * Cache for delegation tokens that can handle a high volume of tokens. A
+ * loading cache will prevent all active tokens from being in memory at the
+ * same time. It will also trigger more requests from the persistent token storage.
+ */
+public class DelegationTokenLoadingCache<K, V> implements Map<K, V> {
+  private LoadingCache<K, V> internalLoadingCache;
+
+  public DelegationTokenLoadingCache(long cacheExpirationMs, Function<K, V> singleEntryFunction) {
+    this.internalLoadingCache = CacheBuilder.newBuilder()
+        .expireAfterWrite(cacheExpirationMs, TimeUnit.MILLISECONDS)

Review Comment: Do we need to set a limit on the number of cache entries as well?
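The class under review delegates to a Guava `LoadingCache` built with `expireAfterWrite`, so stale token entries are reloaded from the persistent store after a fixed interval. As a rough illustration of that behavior, here is a plain-Java sketch of the expire-after-write idea, with an explicit clock parameter for testability. This is an assumption-laden simplification, not the Guava API or the actual Hadoop class.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Plain-Java sketch of an expire-after-write loading cache (hypothetical;
// the real DelegationTokenLoadingCache wraps a Guava LoadingCache).
public class ExpiringLoadingCache<K, V> {
    private static class Entry<V> {
        final V value;
        final long writtenAtMs;
        Entry(V value, long writtenAtMs) {
            this.value = value;
            this.writtenAtMs = writtenAtMs;
        }
    }

    private final long expirationMs;
    private final Function<K, V> loader; // loads a missing or expired entry from the backing store
    private final Map<K, Entry<V>> entries = new HashMap<>();

    public ExpiringLoadingCache(long expirationMs, Function<K, V> loader) {
        this.expirationMs = expirationMs;
        this.loader = loader;
    }

    // nowMs is passed explicitly so the expiry logic is easy to test.
    public synchronized V get(K key, long nowMs) {
        Entry<V> e = entries.get(key);
        if (e == null || nowMs - e.writtenAtMs >= expirationMs) {
            // Miss or expired: reload, so stale tokens get refreshed from storage.
            e = new Entry<>(loader.apply(key), nowMs);
            entries.put(key, e);
        }
        return e.value;
    }
}
```

The expire-after-write policy is what keeps renewed tokens from being served stale: even a frequently read entry is reloaded once the interval elapses, which is the behavior the PR title describes ("tokens are updated frequently").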
[GitHub] [hadoop] slfan1989 commented on pull request #5897: HDFS-17128. Updating SQLDelegationTokenSecretManager to use LoadingCache so tokens are updated frequently.
slfan1989 commented on PR #5897: URL: https://github.com/apache/hadoop/pull/5897#issuecomment-1659575245

LGTM +1.
[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5897: HDFS-17128. Updating SQLDelegationTokenSecretManager to use LoadingCache so tokens are updated frequently.
slfan1989 commented on code in PR #5897: URL: https://github.com/apache/hadoop/pull/5897#discussion_r1280112864

## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/SQLDelegationTokenSecretManager.java:

@@ -153,36 +161,60 @@ protected void removeStoredToken(TokenIdent ident) throws IOException {
     }
   }

+  @Override
+  protected void removeExpiredStoredToken(TokenIdent ident) {
+    try {
+      // Ensure that the token has not been renewed in SQL by
+      // another secret manager
+      DelegationTokenInformation tokenInfo = getTokenInfoFromSQL(ident);
+      if (tokenInfo.getRenewDate() >= Time.now()) {
+        LOG.info("Token was renewed by a different router and has not been deleted: " + ident);

Review Comment: Prefer parameterized logging here:
LOG.info("Token was renewed by a different router and has not been deleted: {}", ident);
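The reason the `{}` placeholder form is preferred over string concatenation: concatenation builds the message string before the log call even runs, while the parameterized form lets the logger skip formatting entirely when the level is disabled. A minimal model of that idea, using a hypothetical `MiniLogger` rather than the SLF4J API:

```java
// Minimal model of SLF4J-style parameterized logging: the "{}" template is
// only expanded when the level is enabled, so disabled log calls stay cheap.
public class MiniLogger {
    private final boolean infoEnabled;
    int formatCount = 0; // how many messages were actually built

    public MiniLogger(boolean infoEnabled) {
        this.infoEnabled = infoEnabled;
    }

    public String info(String template, Object arg) {
        if (!infoEnabled) {
            return null; // skip formatting entirely, unlike "msg: " + arg
        }
        formatCount++;
        return template.replace("{}", String.valueOf(arg));
    }

    public static void main(String[] args) {
        MiniLogger log = new MiniLogger(true);
        System.out.println(log.info("Token has not been deleted: {}", "token-1"));
        // prints "Token has not been deleted: token-1"
    }
}
```

With the concatenated form, `"...: " + ident` calls `ident.toString()` on every invocation regardless of log level; the placeholder form defers that work until the logger knows the message will be emitted.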
[GitHub] [hadoop] xinglin commented on pull request #5878: HDFS-17030 Limit wait time for getHAServiceState in ObserverReadProxyProvider
xinglin commented on PR #5878: URL: https://github.com/apache/hadoop/pull/5878#issuecomment-1659558833

Hi @goiri, Could you review this PR? Thanks,
[GitHub] [hadoop] xinglin commented on pull request #5880: HDFS-17118 Fixed a couple checkstyle warnings in TestObserverReadProxyProvider
xinglin commented on PR #5880: URL: https://github.com/apache/hadoop/pull/5880#issuecomment-1659558489

Hi @goiri, Could you review it? thanks,
[jira] [Commented] (HADOOP-18832) Upgrade aws-java-sdk to 1.12.499+
[ https://issues.apache.org/jira/browse/HADOOP-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749443#comment-17749443 ] ASF GitHub Bot commented on HADOOP-18832: - virajjasani commented on PR #5908: URL: https://github.com/apache/hadoop/pull/5908#issuecomment-1659539707

Re-ran the tests with assume role and encryption enabled; the only tests that are getting ignored are:

- contract tests that don't apply to s3a (e.g. `fs.capability.etags.preserved.in.rename`)
- `ITestS3AContractSeek`: `Tests run: 72, Failures: 0, Errors: 0, Skipped: 24, Time elapsed: 198.155 s - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractSeek`; the 24 tests are skipped because they need the native hadoop lib:

```
if (this.sslChannelMode == OpenSSL) {
  assumeTrue(NativeCodeLoader.isNativeCodeLoaded()
      && NativeCodeLoader.buildSupportsOpenssl());
}
```

Everything else is passing.

> Upgrade aws-java-sdk to 1.12.499+
>
> Key: HADOOP-18832
> URL: https://issues.apache.org/jira/browse/HADOOP-18832
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
> Labels: pull-request-available
>
> aws sdk versions < 1.12.499 use a vulnerable version of netty and hence show up in security CVE scans (CVE-2023-34462). The safe version of netty is 4.1.94.Final, and it is used by aws-java-sdk:1.12.499+.

--
This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [hadoop] virajjasani commented on pull request #5908: HADOOP-18832. Upgrade aws-java-sdk to 1.12.499
virajjasani commented on PR #5908: URL: https://github.com/apache/hadoop/pull/5908#issuecomment-1659539707
[GitHub] [hadoop] hadoop-yetus commented on pull request #5909: Hadoop 18826: [ABFS] Fix for Empty Relative Path Issue Leading to GetFileStatus("/") failure.
hadoop-yetus commented on PR #5909: URL: https://github.com/apache/hadoop/pull/5909#issuecomment-1659537917

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 28s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 36m 32s | | trunk passed |
| +1 :green_heart: | compile | 0m 29s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 0m 27s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | checkstyle | 0m 26s | | trunk passed |
| +1 :green_heart: | mvnsite | 0m 31s | | trunk passed |
| +1 :green_heart: | javadoc | 0m 31s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 0m 27s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | spotbugs | 0m 51s | | trunk passed |
| +1 :green_heart: | shadedclient | 21m 11s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 22s | | the patch passed |
| +1 :green_heart: | compile | 0m 22s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 0m 22s | | the patch passed |
| +1 :green_heart: | compile | 0m 20s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | javac | 0m 20s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 0m 14s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5909/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 :green_heart: | mvnsite | 0m 22s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 21s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | spotbugs | 0m 44s | | the patch passed |
| +1 :green_heart: | shadedclient | 20m 47s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 49s | | hadoop-azure in the patch passed. |
| +1 :green_heart: | asflicense | 0m 29s | | The patch does not generate ASF License warnings. |
| | | | 90m 26s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5909/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5909 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
| uname | Linux a98d25ef6c45 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 6a40e4d1476b24938bb3f31b4144ea2474397065 |
| Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5909/2/testReport/ |
| Max. process+thread count | 604 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5909/2/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #5899: YARN-11544. Add backlogs metrics for request proposal
hadoop-yetus commented on PR #5899: URL: https://github.com/apache/hadoop/pull/5899#issuecomment-1659407097

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 0s | | Docker mode activated. |
| -1 :x: | patch | 0m 19s | | https://github.com/apache/hadoop/pull/5899 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. |

| Subsystem | Report/Notes |
|----------:|:-------------|
| GITHUB PR | https://github.com/apache/hadoop/pull/5899 |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5899/2/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-18832) Upgrade aws-java-sdk to 1.12.499+
[ https://issues.apache.org/jira/browse/HADOOP-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749372#comment-17749372 ] ASF GitHub Bot commented on HADOOP-18832: - hadoop-yetus commented on PR #5908: URL: https://github.com/apache/hadoop/pull/5908#issuecomment-1659385029

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 49s | | Docker mode activated. |
| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. |
| +0 :ok: | xmllint | 0m 0s | | xmllint was not available. |
| +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 13m 35s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 35m 47s | | trunk passed |
| +1 :green_heart: | compile | 18m 36s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | compile | 17m 32s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | mvnsite | 20m 8s | | trunk passed |
| +1 :green_heart: | javadoc | 9m 15s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 7m 29s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | shadedclient | 55m 5s | | branch has no errors when building and testing our client artifacts. |
| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 47s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 34m 28s | | the patch passed |
| +1 :green_heart: | compile | 18m 3s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javac | 18m 3s | | the patch passed |
| +1 :green_heart: | compile | 16m 55s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | javac | 16m 55s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | mvnsite | 15m 7s | | the patch passed |
| +1 :green_heart: | shellcheck | 0m 0s | | No new issues. |
| +1 :green_heart: | javadoc | 8m 52s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 |
| +1 :green_heart: | javadoc | 7m 33s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| +1 :green_heart: | shadedclient | 55m 43s | | patch has no errors when building and testing our client artifacts. |
| _ Other Tests _ |
| -1 :x: | unit | 788m 18s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5908/1/artifact/out/patch-unit-root.txt) | root in the patch passed. |
| +1 :green_heart: | asflicense | 1m 40s | | The patch does not generate ASF License warnings. |
| | | | 1098m 19s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 |
| | hadoop.mapreduce.v2.TestMRJobs |
| | hadoop.mapreduce.v2.TestUberAM |
| | hadoop.mapreduce.v2.TestMRJobsWithProfiler |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5908/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/5908 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs |
| uname | Linux 79d428c7e96f 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 9aaf50cb325dc3bbfb90c6c00b82067927a41263 |
| Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120
[GitHub] [hadoop] hadoop-yetus commented on pull request #5908: HADOOP-18832. Upgrade aws-java-sdk to 1.12.499
hadoop-yetus commented on PR #5908: URL: https://github.com/apache/hadoop/pull/5908#issuecomment-1659385029 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 49s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 13m 35s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 35m 47s | | trunk passed | | +1 :green_heart: | compile | 18m 36s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 17m 32s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | mvnsite | 20m 8s | | trunk passed | | +1 :green_heart: | javadoc | 9m 15s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 7m 29s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | shadedclient | 55m 5s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 47s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 34m 28s | | the patch passed | | +1 :green_heart: | compile | 18m 3s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 18m 3s | | the patch passed | | +1 :green_heart: | compile | 16m 55s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 16m 55s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 15m 7s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | javadoc | 8m 52s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 7m 33s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | shadedclient | 55m 43s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | -1 :x: | unit | 788m 18s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5908/1/artifact/out/patch-unit-root.txt) | root in the patch passed. | | +1 :green_heart: | asflicense | 1m 40s | | The patch does not generate ASF License warnings. 
| | | | 1098m 19s | | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 | | | hadoop.mapreduce.v2.TestMRJobs | | | hadoop.mapreduce.v2.TestUberAM | | | hadoop.mapreduce.v2.TestMRJobsWithProfiler | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5908/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5908 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs | | uname | Linux 79d428c7e96f 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 9aaf50cb325dc3bbfb90c6c00b82067927a41263 | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5908/1/testReport/ | | Max. process+thread count | 2368 (vs. ulimit of 5500)
[GitHub] [hadoop] steveloughran commented on a diff in pull request #5909: Hadoop 18826: [ABFS] Fix for Empty Relative Path Issue Leading to GetFileStatus("/") failure.
steveloughran commented on code in PR #5909: URL: https://github.com/apache/hadoop/pull/5909#discussion_r1279811438 ## hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java: ## @@ -83,9 +85,11 @@ private FileStatus validateStatus(final AzureBlobFileSystem fs, final Path name, if (isDir) { assertEquals(errorInStatus + ": permission", new FsPermission(DEFAULT_DIR_PERMISSION_VALUE), fileStatus.getPermission()); +assertTrue(fileStatus.isDirectory()); Review Comment: we always need error messages on simple assertTrue/assertFalse. Think to yourself "if this test failed and all I had was the test report, what would I want to know". Here: the filestatus. Something like ``` assertTrue("not a directory " + fileStatus, fileStatus.isDirectory()); ``` ## hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java: ## @@ -83,9 +85,11 @@ private FileStatus validateStatus(final AzureBlobFileSystem fs, final Path name, if (isDir) { assertEquals(errorInStatus + ": permission", new FsPermission(DEFAULT_DIR_PERMISSION_VALUE), fileStatus.getPermission()); +assertTrue(fileStatus.isDirectory()); } else { assertEquals(errorInStatus + ": permission", new FsPermission(DEFAULT_FILE_PERMISSION_VALUE), fileStatus.getPermission()); +assertTrue(fileStatus.isFile()); Review Comment: same ## hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java: ## @@ -144,4 +148,22 @@ public void testLastModifiedTime() throws IOException { assertTrue("lastModifiedTime should be before createEndTime", createEndTime > lastModifiedTime); } + + @Test + public void testFileStatusOnRoot() throws IOException { +AzureBlobFileSystem fs = this.getFileSystem(); Review Comment: nit: no need for `this.` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on a diff in pull request #5881: Hadoop-18759: [ABFS][Backoff-Optimization] Have a Static retry policy for connection timeout.
steveloughran commented on code in PR #5881: URL: https://github.com/apache/hadoop/pull/5881#discussion_r1279758355 ## hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java: ## @@ -40,6 +40,8 @@ public final class FileSystemConfigurations { // Retry parameter defaults. public static final int DEFAULT_MIN_BACKOFF_INTERVAL = 3 * 1000; // 3s public static final int DEFAULT_MAX_BACKOFF_INTERVAL = 30 * 1000; // 30s + public static final boolean DEFAULT_STATIC_RETRY_FOR_CONNECTION_TIMEOUT_ENABLED = true; + public static final int DEFAULT_STATIC_RETRY_INTERVAL = 1 * 1000; // 1s Review Comment: use 1_000 now we can. ## hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java: ## @@ -269,12 +274,13 @@ String getClientLatency() { private boolean executeHttpOperation(final int retryCount, TracingContext tracingContext) throws AzureBlobFileSystemException { AbfsHttpOperation httpOperation; +boolean iOExceptionThrown = false; Review Comment: prefer `wasIOExceptionThrown` ## hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/RetryPolicy.java: ## @@ -0,0 +1,73 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.fs.azurebfs.services; + +import java.net.HttpURLConnection; + +import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_CONTINUE; + +/** + * Abstract Class for Retry policy to be used by {@link AbfsClient} + * Implementation to be used is based on retry cause. + */ +public abstract class RetryPolicy { Review Comment: duplicate name; even in different packages it's painful. see org.apache.hadoop.io.retry.RetryPolicy ## hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestTracingContext.java: ## @@ -226,7 +229,7 @@ fileSystemId, FSOperationType.CREATE_FILESYSTEM, tracingHeaderFormat, new Tracin fs.getFileSystemId(), FSOperationType.CREATE_FILESYSTEM, false, 1)); -tracingContext.constructHeader(abfsHttpOperation, "RT"); +tracingContext.constructHeader(abfsHttpOperation, "RT", "E"); Review Comment: you should still refer to the constant in the production code; makes it easier to find usages/make changes ## hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestTracingContext.java: ## @@ -270,4 +273,71 @@ fileSystemId, FSOperationType.CREATE_FILESYSTEM, tracingHeaderFormat, new Tracin + "should be equal to PrimaryRequestId in the original request.") .isEqualTo(assertionPrimaryId); } + + @Test + public void testTracingContextHeaderForRetrypolicy() throws Exception { +final AzureBlobFileSystem fs = getFileSystem(); +final String fileSystemId = fs.getFileSystemId(); +final String clientCorrelationId = fs.getClientCorrelationId(); +final TracingHeaderFormat tracingHeaderFormat = TracingHeaderFormat.ALL_ID_FORMAT; +TracingContext tracingContext = new TracingContext(clientCorrelationId, +fileSystemId, FSOperationType.CREATE_FILESYSTEM, tracingHeaderFormat, new TracingHeaderValidator( +fs.getAbfsStore().getAbfsConfiguration().getClientCorrelationId(), +fs.getFileSystemId(), 
FSOperationType.CREATE_FILESYSTEM, false, +0)); +tracingContext.setPrimaryRequestID(); +AbfsHttpOperation abfsHttpOperation = Mockito.mock(AbfsHttpOperation.class); + Mockito.doNothing().when(abfsHttpOperation).setRequestProperty(Mockito.anyString(), Mockito.anyString()); + +tracingContext.constructHeader(abfsHttpOperation, null, null); +checkHeaderForRetryPolicyAbbreviation(tracingContext.getHeader(), null, null); + +tracingContext.constructHeader(abfsHttpOperation, null, STATIC_RETRY_POLICY_ABBREVIATION); +checkHeaderForRetryPolicyAbbreviation(tracingContext.getHeader(), null, null); + +tracingContext.constructHeader(abfsHttpOperation, null, EXPONENTIAL_RETRY_POLICY_ABBREVIATION); +checkHeaderForRetryPolicyAbbreviation(tracingContext.getHeader(), null, null); + +tracingContext.const
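The split under review — retry connection timeouts quickly at a fixed interval, back off exponentially for everything else — can be sketched in isolation. Class and constant names below are illustrative only; the real implementations live in `org.apache.hadoop.fs.azurebfs.services` and read their intervals from configuration:

```java
/** Illustrative sketch of the two retry strategies discussed in this review. */
abstract class AbfsRetryPolicySketch {
  // Defaults mirror FileSystemConfigurations, written with the suggested 1_000-style literals.
  static final long STATIC_RETRY_INTERVAL = 1_000;   // 1s, for connection timeouts
  static final long MIN_BACKOFF_INTERVAL = 3_000;    // 3s
  static final long MAX_BACKOFF_INTERVAL = 30_000;   // 30s

  /** Milliseconds to wait before the given (0-based) retry attempt. */
  abstract long getRetryInterval(int retryCount);
}

/** Connection timeouts: a flat, fast retry interval. */
class StaticRetryPolicySketch extends AbfsRetryPolicySketch {
  @Override
  long getRetryInterval(int retryCount) {
    return STATIC_RETRY_INTERVAL;
  }
}

/** All other retryable failures: exponential backoff, capped at the maximum. */
class ExponentialRetryPolicySketch extends AbfsRetryPolicySketch {
  @Override
  long getRetryInterval(int retryCount) {
    long backoff = (long) (MIN_BACKOFF_INTERVAL * Math.pow(2, retryCount));
    return Math.min(backoff, MAX_BACKOFF_INTERVAL);
  }
}
```

Choosing between the two per failure cause — an `IOException` on connect versus an HTTP status — appears to be what the `iOExceptionThrown` flag in `executeHttpOperation` feeds into.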
[jira] [Commented] (HADOOP-18832) Upgrade aws-java-sdk to 1.12.499+
[ https://issues.apache.org/jira/browse/HADOOP-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749315#comment-17749315 ] ASF GitHub Bot commented on HADOOP-18832: - virajjasani commented on PR #5908: URL: https://github.com/apache/hadoop/pull/5908#issuecomment-1658954373 `AbstractSTestS3AHugeFiles` is successful with `SSE-KMS`, will run assumed role tests now > Upgrade aws-java-sdk to 1.12.499+ > - > > Key: HADOOP-18832 > URL: https://issues.apache.org/jira/browse/HADOOP-18832 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > > aws sdk versions < 1.12.499 use a vulnerable version of netty and hence > show up in security CVE scans (CVE-2023-34462). The safe version for netty > is 4.1.94.Final and this is used by aws-java-sdk:1.12.499+ -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [hadoop] virajjasani commented on pull request #5908: HADOOP-18832. Upgrade aws-java-sdk to 1.12.499
virajjasani commented on PR #5908: URL: https://github.com/apache/hadoop/pull/5908#issuecomment-1658954373 `AbstractSTestS3AHugeFiles` is successful with `SSE-KMS`, will run assumed role tests now
[GitHub] [hadoop] simbadzina commented on a diff in pull request #5897: HDFS-17128. Updating SQLDelegationTokenSecretManager to use LoadingCache so tokens are updated frequently.
simbadzina commented on code in PR #5897: URL: https://github.com/apache/hadoop/pull/5897#discussion_r1279712302 ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/DelegationTokenLoadingCache.java: ## @@ -0,0 +1,116 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.security.token.delegation; + +import java.util.Collection; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.TimeUnit; +import java.util.function.Function; +import org.apache.hadoop.thirdparty.com.google.common.cache.CacheBuilder; +import org.apache.hadoop.thirdparty.com.google.common.cache.CacheLoader; +import org.apache.hadoop.thirdparty.com.google.common.cache.LoadingCache; + + +/** + * Cache for delegation tokens that can handle high volume of tokens. A + * loading cache will prevent all active tokens from being in memory at the + * same time. It will also trigger more requests from the persistent token storage. 
+ */ +public class DelegationTokenLoadingCache<K, V> implements Map<K, V> { + private LoadingCache<K, V> internalLoadingCache; + + public DelegationTokenLoadingCache(long cacheExpirationMs, Function<K, V> singleEntryFunction) { +this.internalLoadingCache = CacheBuilder.newBuilder() +.expireAfterWrite(cacheExpirationMs, TimeUnit.MILLISECONDS) +.build(new CacheLoader<K, V>() { + @Override + public V load(K k) throws Exception { +return singleEntryFunction.apply(k); + } +}); Review Comment: Got it, thanks.
[GitHub] [hadoop] hadoop-yetus commented on pull request #5910: YARN-11545. Fixed FS2CS ACL conversion when all users are allowed.
hadoop-yetus commented on PR #5910: URL: https://github.com/apache/hadoop/pull/5910#issuecomment-1658908163 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 29s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 32m 38s | | trunk passed | | +1 :green_heart: | compile | 0m 42s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 0m 41s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 0m 38s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 43s | | trunk passed | | +1 :green_heart: | javadoc | 0m 45s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 38s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 1m 20s | | trunk passed | | +1 :green_heart: | shadedclient | 23m 8s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 32s | | the patch passed | | +1 :green_heart: | compile | 0m 34s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 0m 34s | | the patch passed | | +1 :green_heart: | compile | 0m 30s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 0m 30s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 22s | | the patch passed | | +1 :green_heart: | mvnsite | 0m 33s | | the patch passed | | +1 :green_heart: | javadoc | 0m 36s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 33s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 1m 21s | | the patch passed | | +1 :green_heart: | shadedclient | 26m 8s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 84m 52s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | asflicense | 0m 28s | | The patch does not generate ASF License warnings. 
| | | | 179m 42s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5910/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5910 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint | | uname | Linux 4d8754f436ef 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / a0b20228caef9ba6a6b948523a9e991d312327a7 | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5910/1/testReport/ | | Max. process+thread count | 954 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5910/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hchaverri commented on a diff in pull request #5897: HDFS-17128. Updating SQLDelegationTokenSecretManager to use LoadingCache so tokens are updated frequently.
hchaverri commented on code in PR #5897: URL: https://github.com/apache/hadoop/pull/5897#discussion_r1279691057 ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/DelegationTokenLoadingCache.java: ## @@ -0,0 +1,116 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.security.token.delegation; + +import java.util.Collection; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.TimeUnit; +import java.util.function.Function; +import org.apache.hadoop.thirdparty.com.google.common.cache.CacheBuilder; +import org.apache.hadoop.thirdparty.com.google.common.cache.CacheLoader; +import org.apache.hadoop.thirdparty.com.google.common.cache.LoadingCache; + + +/** + * Cache for delegation tokens that can handle high volume of tokens. A + * loading cache will prevent all active tokens from being in memory at the + * same time. It will also trigger more requests from the persistent token storage. 
+ */ +public class DelegationTokenLoadingCache<K, V> implements Map<K, V> { + private LoadingCache<K, V> internalLoadingCache; + + public DelegationTokenLoadingCache(long cacheExpirationMs, Function<K, V> singleEntryFunction) { +this.internalLoadingCache = CacheBuilder.newBuilder() +.expireAfterWrite(cacheExpirationMs, TimeUnit.MILLISECONDS) +.build(new CacheLoader<K, V>() { + @Override + public V load(K k) throws Exception { +return singleEntryFunction.apply(k); + } +}); Review Comment: The `get()` needs to be done on the `LoadingCache` for the `CacheLoader` to trigger. Calling it on the result of `asMap()` will just return null if the item is not in it.
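The `LoadingCache` vs `asMap()` distinction above is easy to reproduce without Guava. Below is a stdlib-only sketch — class and method names are made up for illustration; the actual patch wraps `org.apache.hadoop.thirdparty.com.google.common.cache.LoadingCache` — showing why a load-on-get view behaves differently from a plain map view:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/** Stdlib-only stand-in for a loading cache: get() loads absent keys on demand. */
class LoadingMapSketch<K, V> {
  private final ConcurrentHashMap<K, V> cache = new ConcurrentHashMap<>();
  private final Function<K, V> loader; // in the real patch, a lookup in the SQL token store

  LoadingMapSketch(Function<K, V> loader) {
    this.loader = loader;
  }

  /** Analogue of LoadingCache.get(k): runs the loader and caches the result on a miss. */
  V get(K key) {
    return cache.computeIfAbsent(key, loader);
  }

  /** Analogue of Cache.asMap().get(k): null for any key the loader was never asked for. */
  V getIfPresent(K key) {
    return cache.get(key);
  }
}
```

With `m = new LoadingMapSketch<String, Integer>(String::length)`, `m.getIfPresent("token")` stays `null` until `m.get("token")` has run the loader — which is the point made in the comment: handing the secret managers `asMap()` alone would silently skip the load.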
[GitHub] [hadoop] goiri commented on a diff in pull request #5901: YARN-7402. BackPort [GPG] Fix potential connection leak in GPGUtils.
goiri commented on code in PR #5901: URL: https://github.com/apache/hadoop/pull/5901#discussion_r1279682356 ## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/main/java/org/apache/hadoop/yarn/server/globalpolicygenerator/GPGUtils.java: ## @@ -57,15 +58,23 @@ public static <T> T invokeRMWebService(String webAddr, String path, final Class<T> returnType) { T obj = null; WebResource webResource = client.resource(webAddr); -ClientResponse response = webResource.path("ws/v1/cluster").path(path) -.accept(MediaType.APPLICATION_XML).get(ClientResponse.class); -if (response.getStatus() == HttpServletResponse.SC_OK) { - obj = response.getEntity(returnType); -} else { - throw new YarnRuntimeException("Bad response from remote web service: " - + response.getStatus()); +ClientResponse response = null; +try { + response = webResource.path("ws/v1/cluster").path(path) + .accept(MediaType.APPLICATION_XML).get(ClientResponse.class); + if (response.getStatus() == SC_OK) { +obj = response.getEntity(returnType); + } else { +throw new YarnRuntimeException( +"Bad response from remote web service: " + response.getStatus()); + } + return obj; +} finally { + if (response != null) { +response.close(); Review Comment: set to null? 
## hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-globalpolicygenerator/src/test/java/org/apache/hadoop/yarn/server/globalpolicygenerator/policygenerator/TestPolicyGenerator.java: ## @@ -44,10 +44,7 @@ import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler; import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration; import org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWSConsts; -import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.CapacitySchedulerInfo; -import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterMetricsInfo; -import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerInfo; -import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerTypeInfo; +import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.*; Review Comment: Avoid
[GitHub] [hadoop] simbadzina commented on a diff in pull request #5897: HDFS-17128. Updating SQLDelegationTokenSecretManager to use LoadingCache so tokens are updated frequently.
simbadzina commented on code in PR #5897: URL: https://github.com/apache/hadoop/pull/5897#discussion_r1279656588 ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/DelegationTokenLoadingCache.java: ## @@ -0,0 +1,116 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.security.token.delegation; + +import java.util.Collection; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.TimeUnit; +import java.util.function.Function; +import org.apache.hadoop.thirdparty.com.google.common.cache.CacheBuilder; +import org.apache.hadoop.thirdparty.com.google.common.cache.CacheLoader; +import org.apache.hadoop.thirdparty.com.google.common.cache.LoadingCache; + + +/** + * Cache for delegation tokens that can handle high volume of tokens. A + * loading cache will prevent all active tokens from being in memory at the + * same time. It will also trigger more requests from the persistent token storage. 
+ */ +public class DelegationTokenLoadingCache<K, V> implements Map<K, V> { + private LoadingCache<K, V> internalLoadingCache; + + public DelegationTokenLoadingCache(long cacheExpirationMs, Function<K, V> singleEntryFunction) { +this.internalLoadingCache = CacheBuilder.newBuilder() +.expireAfterWrite(cacheExpirationMs, TimeUnit.MILLISECONDS) +.build(new CacheLoader<K, V>() { + @Override + public V load(K k) throws Exception { +return singleEntryFunction.apply(k); + } +}); Review Comment: Thanks. Would `Cache.asMap()` be sufficient for these use cases? So doing ``` currentToken = CacheBuilder.newBuilder() ... .build() .asMap() ```
[GitHub] [hadoop] hchaverri commented on a diff in pull request #5897: HDFS-17128. Updating SQLDelegationTokenSecretManager to use LoadingCache so tokens are updated frequently.
hchaverri commented on code in PR #5897: URL: https://github.com/apache/hadoop/pull/5897#discussion_r1279644268 ## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/DelegationTokenLoadingCache.java: ## @@ -0,0 +1,116 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.security.token.delegation; + +import java.util.Collection; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.TimeUnit; +import java.util.function.Function; +import org.apache.hadoop.thirdparty.com.google.common.cache.CacheBuilder; +import org.apache.hadoop.thirdparty.com.google.common.cache.CacheLoader; +import org.apache.hadoop.thirdparty.com.google.common.cache.LoadingCache; + + +/** + * Cache for delegation tokens that can handle high volume of tokens. A + * loading cache will prevent all active tokens from being in memory at the + * same time. It will also trigger more requests from the persistent token storage. 
+ */
+public class DelegationTokenLoadingCache&lt;K, V&gt; implements Map&lt;K, V&gt; {
+  private LoadingCache&lt;K, V&gt; internalLoadingCache;
+
+  public DelegationTokenLoadingCache(long cacheExpirationMs, Function&lt;K, V&gt; singleEntryFunction) {
+    this.internalLoadingCache = CacheBuilder.newBuilder()
+        .expireAfterWrite(cacheExpirationMs, TimeUnit.MILLISECONDS)
+        .build(new CacheLoader&lt;K, V&gt;() {
+          @Override
+          public V load(K k) throws Exception {
+            return singleEntryFunction.apply(k);
+          }
+        });

Review Comment: We mainly need this to implement the Map functions so we don't have to update multiple SecretManager classes. Those classes currently expect to interact with the currentTokens Map and LoadingCaches don't provide all those methods.
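The load-on-miss semantics under discussion can be sketched with the JDK alone. This is an illustrative analogue using `ConcurrentHashMap.computeIfAbsent`, not the Guava-backed class in the PR (it has no `expireAfterWrite` equivalent), and all names here are hypothetical:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch only: a map that loads absent entries on first access, which is
// the behaviour the review is discussing. Guava's LoadingCache layers
// time-based expiry on top of this load-on-miss idea.
public class LoadOnMissMap {
  static <K, V> V getOrLoad(Map<K, V> cache, K key, Function<K, V> loader) {
    // computeIfAbsent runs the loader only when the key is missing
    return cache.computeIfAbsent(key, loader);
  }

  public static void main(String[] args) {
    Map<String, Integer> currentTokens = new ConcurrentHashMap<>();
    int first = getOrLoad(currentTokens, "token-1", String::length);  // loaded
    int second = getOrLoad(currentTokens, "token-1", k -> -1);        // cached
    System.out.println(first + " " + second);  // 7 7
  }
}
```

Because `ConcurrentMap` is a `Map`, this view can be handed to code that expects the `currentTokens` map, which is the same design pressure the reviewers describe.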
[GitHub] [hadoop] simbadzina commented on a diff in pull request #5897: HDFS-17128. Updating SQLDelegationTokenSecretManager to use LoadingCache so tokens are updated frequently.
simbadzina commented on code in PR #5897:
URL: https://github.com/apache/hadoop/pull/5897#discussion_r1279637935

## hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/DelegationTokenLoadingCache.java:

@@ -0,0 +1,116 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.security.token.delegation;
+
+import java.util.Collection;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Function;
+import org.apache.hadoop.thirdparty.com.google.common.cache.CacheBuilder;
+import org.apache.hadoop.thirdparty.com.google.common.cache.CacheLoader;
+import org.apache.hadoop.thirdparty.com.google.common.cache.LoadingCache;
+
+
+/**
+ * Cache for delegation tokens that can handle high volume of tokens. A
+ * loading cache will prevent all active tokens from being in memory at the
+ * same time. It will also trigger more requests from the persistent token storage.
+ */
+public class DelegationTokenLoadingCache&lt;K, V&gt; implements Map&lt;K, V&gt; {
+  private LoadingCache&lt;K, V&gt; internalLoadingCache;
+
+  public DelegationTokenLoadingCache(long cacheExpirationMs, Function&lt;K, V&gt; singleEntryFunction) {
+    this.internalLoadingCache = CacheBuilder.newBuilder()
+        .expireAfterWrite(cacheExpirationMs, TimeUnit.MILLISECONDS)
+        .build(new CacheLoader&lt;K, V&gt;() {
+          @Override
+          public V load(K k) throws Exception {
+            return singleEntryFunction.apply(k);
+          }
+        });

Review Comment: Do we need this as a separate class? Is there any other functionality we need other than what is in the constructor?
[jira] [Commented] (HADOOP-18832) Upgrade aws-java-sdk to 1.12.499+
[ https://issues.apache.org/jira/browse/HADOOP-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749284#comment-17749284 ] ASF GitHub Bot commented on HADOOP-18832: - virajjasani commented on PR #5908: URL: https://github.com/apache/hadoop/pull/5908#issuecomment-1658777571 yes, assumed role test coverage is pending > Upgrade aws-java-sdk to 1.12.499+ > - > > Key: HADOOP-18832 > URL: https://issues.apache.org/jira/browse/HADOOP-18832 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Labels: pull-request-available > > aws sdk versions < 1.12.499 uses a vulnerable version of netty and hence > showing up in security CVE scans (CVE-2023-34462). The safe version for netty > is 4.1.94.Final and this is used by aws-java-sdk:1.12.499+ -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-18190) Collect IOStatistics during S3A prefetching
[ https://issues.apache.org/jira/browse/HADOOP-18190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-18190: Fix Version/s: 3.4.0 3.3.6 > Collect IOStatistics during S3A prefetching > > > Key: HADOOP-18190 > URL: https://issues.apache.org/jira/browse/HADOOP-18190 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Ahmar Suhail >Priority: Major > Labels: pull-request-available > Fix For: 3.4.0, 3.3.6 > > Time Spent: 4h 10m > Remaining Estimate: 0h > > There is a lot more happening in reads, so there's a lot more data to collect > and publish in IO stats for us to view in a summary at the end of processes > as well as get from the stream while it is active. > Some useful ones would seem to be: > counters > * is in memory. using 0 or 1 here lets aggregation reports count total #of > memory cached files. > * prefetching operations executed > * errors during prefetching > gauges > * number of blocks in cache > * total size of blocks > * active prefetches > + active memory used > duration tracking count/min/max/ave > * time to fetch a block > * time queued before the actual fetch begins > * time a reader is blocked waiting for a block fetch to complete > and some info on cache use itself > * number of blocks discarded unread > * number of prefetched blocks later used > * number of backward seeks to a prefetched block > * number of forward seeks to a prefetched block > the key ones I care about are > # memory consumption > # can we determine if cache is working (reads with cache hit) and when it is > not (misses, wasted prefetches) > # time blocked on executors > The stats need to be accessible on a stream even when closed, and aggregated > into the FS. 
once we get per-thread stats contexts we can publish there too > and collect in worker threads for reporting in task commits
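The count/min/max/mean duration tracking the issue asks for (block fetch time, queue time, time a reader is blocked) can be sketched in a few lines. This is a hedged, self-contained illustration; the real mechanism is Hadoop's IOStatistics API and every name below is made up:

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative duration-statistics accumulator, not Hadoop's implementation.
public class FetchDurationTracker {
  private final AtomicLong count = new AtomicLong();
  private final AtomicLong totalMs = new AtomicLong();
  private volatile long minMs = Long.MAX_VALUE;
  private volatile long maxMs = Long.MIN_VALUE;

  // record one observed duration; synchronized so min/max stay consistent
  public synchronized void record(long millis) {
    count.incrementAndGet();
    totalMs.addAndGet(millis);
    if (millis < minMs) { minMs = millis; }
    if (millis > maxMs) { maxMs = millis; }
  }

  public long count() { return count.get(); }
  public long min() { return minMs; }
  public long max() { return maxMs; }
  public long mean() {
    long c = count.get();
    return c == 0 ? 0 : totalMs.get() / c;
  }

  public static void main(String[] args) {
    FetchDurationTracker t = new FetchDurationTracker();
    t.record(10); t.record(30); t.record(20);
    System.out.println(t.count() + " " + t.min() + " " + t.max() + " " + t.mean());
    // 3 10 30 20
  }
}
```

One such tracker per statistic (fetch, queue, blocked-read) gives the summary the issue wants to publish at stream close and aggregate into the filesystem stats.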
[jira] [Resolved] (HADOOP-18183) s3a audit logs to publish range start/end of GET requests in audit header
[ https://issues.apache.org/jira/browse/HADOOP-18183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Steve Loughran resolved HADOOP-18183.
-
Fix Version/s: 3.3.9
               3.4.0
Resolution: Fixed

> s3a audit logs to publish range start/end of GET requests in audit header
> -
>
> Key: HADOOP-18183
> URL: https://issues.apache.org/jira/browse/HADOOP-18183
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.3.2
> Reporter: Steve Loughran
> Assignee: Ankit Saurabh
> Priority: Minor
> Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
> we don't get the range of ranged get requests in s3 server logs, because the
> AWS s3 log doesn't record that information. we can see it's a partial get
> from the 206 response, but the length of data retrieved is lost.
> LoggingAuditor.beforeExecution() would need to recognise a ranged GET and
> determine the extra key-val pairs for range start and end (rs & re?)
> we might need to modify {{HttpReferrerAuditHeader.buildHttpReferrer()}} to
> take a map of key/value pairs so it can dynamically create a header for each
> request; currently that is not in there.
[jira] [Assigned] (HADOOP-18179) Boost S3A Stream Read Performance
[ https://issues.apache.org/jira/browse/HADOOP-18179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Steve Loughran reassigned HADOOP-18179:
---
Assignee: Ahmar Suhail

> Boost S3A Stream Read Performance
> -
>
> Key: HADOOP-18179
> URL: https://issues.apache.org/jira/browse/HADOOP-18179
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/s3
> Affects Versions: 3.3.2
> Reporter: Steve Loughran
> Assignee: Ahmar Suhail
> Priority: Major
>
> calibrate S3A input stream performance against recent applications/data
> formats and improve where necessary.
> HADOOP-18028 is a key part of this, but there are other issues/opportunities
> # we could add machine parsable trace-level logging in FSDataInputStream to
> collect stats on how stream apis are invoked, so collect data from real apps;
> analyze
> # implement those APIs which some apps use (ByteBufferPositionedReadable),
> not so much for direct implementation as to get better information from the
> app as its read plan
> # the `normal` mode doesn't switch from sequential on forward seeks. Is that
> always appropriate?
> # choose different buffering options when doing whole file IO vs sequential
> vs random
[jira] [Commented] (HADOOP-18179) Boost S3A Stream Read Performance
[ https://issues.apache.org/jira/browse/HADOOP-18179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749270#comment-17749270 ]
Steve Loughran commented on HADOOP-18179:
-
assigned to [~ahmarsu] as he's done most of this

> Boost S3A Stream Read Performance
> -
>
> Key: HADOOP-18179
> URL: https://issues.apache.org/jira/browse/HADOOP-18179
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/s3
> Affects Versions: 3.3.2
> Reporter: Steve Loughran
> Assignee: Ahmar Suhail
> Priority: Major
>
> calibrate S3A input stream performance against recent applications/data
> formats and improve where necessary.
> HADOOP-18028 is a key part of this, but there are other issues/opportunities
> # we could add machine parsable trace-level logging in FSDataInputStream to
> collect stats on how stream apis are invoked, so collect data from real apps;
> analyze
> # implement those APIs which some apps use (ByteBufferPositionedReadable),
> not so much for direct implementation as to get better information from the
> app as its read plan
> # the `normal` mode doesn't switch from sequential on forward seeks. Is that
> always appropriate?
> # choose different buffering options when doing whole file IO vs sequential
> vs random
[jira] [Updated] (HADOOP-18820) AWS SDK v2: make the v1 bridging support optional
[ https://issues.apache.org/jira/browse/HADOOP-18820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Steve Loughran updated HADOOP-18820:
Release Note: The v1 aws-sdk-bundle JAR has been removed; it was only required by third-party applications or for use of v1 SDK AWSCredentialsProvider classes. There is automatic migration of the standard providers from the v1 to the v2 classes, so this is only an issue for third-party providers or if very esoteric classes in the v1 SDK are used. Consult the aws_sdk_upgrade document for details.

> AWS SDK v2: make the v1 bridging support optional
> -
>
> Key: HADOOP-18820
> URL: https://issues.apache.org/jira/browse/HADOOP-18820
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
>
> The AWS SDK v2 code includes the v1 sdk core for plugin support of
> * existing credential providers
> * delegation token binding
> I propose we break #2 and rely on those who have implemented it to upgrade.
> apart from all the needless changes the v2 SDK did to the api (why?) this is
> fairly straightforward
> for #1: fix through reflection, retaining a v1 sdk dependency at test time so
> we can verify that the binder works.
[jira] [Commented] (HADOOP-18820) AWS SDK v2: make the v1 bridging support optional
[ https://issues.apache.org/jira/browse/HADOOP-18820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749244#comment-17749244 ] ASF GitHub Bot commented on HADOOP-18820: - steveloughran commented on code in PR #5872: URL: https://github.com/apache/hadoop/pull/5872#discussion_r1279497298 ## hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/aws_sdk_upgrade.md: ## @@ -49,14 +137,67 @@ has been replaced by [software.amazon.awssdk.auth.credentials.AwsCredentialsProv changed. The change in interface will mean that custom credential providers will need to be updated to now -implement `AwsCredentialsProvider` instead of `AWSCredentialProvider`. +implement `software.amazon.awssdk.auth.credentials.AwsCredentialsProvider` instead of +`com.amazonaws.auth.AWSCredentialsProvider`. + + Original v1 `AWSCredentialsProvider` interface + +Note how the interface begins with the capitalized "AWS" acronym. +The v2 interface starts with "Aws". This is a very subtle change +for developers to spot. +Compilers _will_ detect and report the type mismatch. + Review Comment: fixed > AWS SDK v2: make the v1 bridging support optional > - > > Key: HADOOP-18820 > URL: https://issues.apache.org/jira/browse/HADOOP-18820 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > > The AWS SDK v2 code includes the v1 sdk core for plugin support of > * existing credential providers > * delegation token binding > I propose we break #2 and rely on those who have implemented to to upgrade. > apart from all the needless changes the v2 SDK did to the api (why?) this is > fairly straighforward > for #1: fix through reflection, retaining a v1 sdk dependency at test time so > we can verify that the binder works. 
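For third-party credential providers affected by the release note above, the change amounts to a rename plus a collapsed method set. The block below uses stand-in declarations, not the real SDK types, purely to illustrate the shape difference the docs describe; the real interfaces live in `com.amazonaws.auth` (v1) and `software.amazon.awssdk.auth.credentials` (v2):

```java
// Stand-ins (NOT the AWS SDK): v1 uses the capitalized "AWS" prefix with
// getCredentials()/refresh(); v2 uses "Aws" with a single resolveCredentials().
interface AWSCredentialsProvider {        // v1 shape
  Object getCredentials();
  void refresh();
}

@FunctionalInterface
interface AwsCredentialsProvider {        // v2 shape
  Object resolveCredentials();
}

public class ProviderRename {
  public static void main(String[] args) {
    // the single-method v2 interface can be a lambda in this sketch
    AwsCredentialsProvider v2 = () -> "static-credentials";
    System.out.println(v2.resolveCredentials());
  }
}
```

The one-letter case difference ("AWS" vs "Aws") is easy to miss when reading code, but as the quoted doc notes, compilers will flag the type mismatch.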
[jira] [Commented] (HADOOP-18820) AWS SDK v2: make the v1 bridging support optional
[ https://issues.apache.org/jira/browse/HADOOP-18820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749243#comment-17749243 ]
ASF GitHub Bot commented on HADOOP-18820:
-
steveloughran commented on code in PR #5872:
URL: https://github.com/apache/hadoop/pull/5872#discussion_r1279495750

## hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3_select.md:

@@ -663,6 +679,24 @@ to the `get()` call: do it.

## Troubleshooting

+### `NoClassDefFoundError: software/amazon/eventstream/MessageDecoder`
+
+Select operation failing with a missing eventstream class.
+
+```
+java.io.IOException: java.lang.NoClassDefFoundError: software/amazon/eventstream/MessageDecoder

Review Comment: scoped to test; added a JIRA about whether to cut entirely. If people want it now, they need a new jar

> AWS SDK v2: make the v1 bridging support optional
> -
>
> Key: HADOOP-18820
> URL: https://issues.apache.org/jira/browse/HADOOP-18820
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
>
> The AWS SDK v2 code includes the v1 sdk core for plugin support of
> * existing credential providers
> * delegation token binding
> I propose we break #2 and rely on those who have implemented it to upgrade.
> apart from all the needless changes the v2 SDK did to the api (why?) this is
> fairly straightforward
> for #1: fix through reflection, retaining a v1 sdk dependency at test time so
> we can verify that the binder works.
[jira] [Commented] (HADOOP-18820) AWS SDK v2: make the v1 bridging support optional
[ https://issues.apache.org/jira/browse/HADOOP-18820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749242#comment-17749242 ]
ASF GitHub Bot commented on HADOOP-18820:
-
steveloughran commented on code in PR #5872:
URL: https://github.com/apache/hadoop/pull/5872#discussion_r1279492499

## hadoop-project/pom.xml:

@@ -1132,18 +1133,29 @@
        &lt;groupId&gt;com.amazonaws&lt;/groupId&gt;
        &lt;artifactId&gt;aws-java-sdk-core&lt;/artifactId&gt;
        &lt;version&gt;${aws-java-sdk.version}&lt;/version&gt;
+       &lt;exclusions&gt;
+         &lt;exclusion&gt;
+           &lt;groupId&gt;*&lt;/groupId&gt;
+           &lt;artifactId&gt;*&lt;/artifactId&gt;
+         &lt;/exclusion&gt;
+       &lt;/exclusions&gt;

        &lt;groupId&gt;software.amazon.awssdk&lt;/groupId&gt;
        &lt;artifactId&gt;bundle&lt;/artifactId&gt;
        &lt;version&gt;${aws-java-sdk-v2.version}&lt;/version&gt;
-           &lt;groupId&gt;io.netty&lt;/groupId&gt;
+           &lt;groupId&gt;*&lt;/groupId&gt;

Review Comment: correct. anything it declares a dependency on *should* be a false dependency. except the aws-crt is required (transfer manager) and eventstream for s3 select. both are bugs, IMO

> AWS SDK v2: make the v1 bridging support optional
> -
>
> Key: HADOOP-18820
> URL: https://issues.apache.org/jira/browse/HADOOP-18820
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.4.0
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Priority: Major
> Labels: pull-request-available
>
> The AWS SDK v2 code includes the v1 sdk core for plugin support of
> * existing credential providers
> * delegation token binding
> I propose we break #2 and rely on those who have implemented it to upgrade.
> apart from all the needless changes the v2 SDK did to the api (why?) this is
> fairly straightforward
> for #1: fix through reflection, retaining a v1 sdk dependency at test time so
> we can verify that the binder works.
[jira] [Commented] (HADOOP-18820) AWS SDK v2: make the v1 bridging support optional
[ https://issues.apache.org/jira/browse/HADOOP-18820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749241#comment-17749241 ]
ASF GitHub Bot commented on HADOOP-18820:
-
steveloughran commented on code in PR #5872:
URL: https://github.com/apache/hadoop/pull/5872#discussion_r1279491510

## hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/troubleshooting_s3a.md:

@@ -70,14 +70,45 @@ These are Hadoop filesystem client classes, found in the `hadoop-aws` JAR.
An exception reporting this class as missing means that this JAR is not on the
classpath.

-### `ClassNotFoundException: com.amazonaws.services.s3.AmazonS3Client`
-(or other `com.amazonaws` class.)
+### `NoClassDefFoundError: software/amazon/awssdk/crt/s3/S3MetaRequest`
+
+The library `aws-crt.jar` is not on the classpath. Its classes
+are not in the AWS `bundle.jar` file, yet are needed for uploading
+and renaming objects.
+
+Fix: add it.
+
+```
+java.lang.BootstrapMethodError: java.lang.NoClassDefFoundError: software/amazon/awssdk/crt/s3/S3MetaRequest
+at software.amazon.awssdk.services.s3.internal.crt.S3MetaRequestPauseObservable.(S3MetaRequestPauseObservable.java:33)
+at software.amazon.awssdk.transfer.s3.internal.DefaultS3TransferManager.uploadFile(DefaultS3TransferManager.java:205)
+at org.apache.hadoop.fs.s3a.S3AFileSystem.putObject(S3AFileSystem.java:3064)
+at org.apache.hadoop.fs.s3a.S3AFileSystem.executePut(S3AFileSystem.java:4054)
+
+```
+### `ClassNotFoundException: software.amazon.awssdk.services.s3.S3Client`

-This means that the `aws-java-sdk-bundle.jar` JAR is not on the classpath:
+(or other `software.amazon` class.)
+
+This means that the AWS V2 SDK `bundle.jar` JAR is not on the classpath: add it.
Review Comment: yes, because people forget to include them when manually setting up their spark installations, where they just drop in a random pair of hadoop-aws and aws-sdk JARs > AWS SDK v2: make the v1 bridging support optional > - > > Key: HADOOP-18820 > URL: https://issues.apache.org/jira/browse/HADOOP-18820 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Labels: pull-request-available > > The AWS SDK v2 code includes the v1 sdk core for plugin support of > * existing credential providers > * delegation token binding > I propose we break #2 and rely on those who have implemented to to upgrade. > apart from all the needless changes the v2 SDK did to the api (why?) this is > fairly straighforward > for #1: fix through reflection, retaining a v1 sdk dependency at test time so > we can verify that the binder works. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
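A quick way to tell which of the two jars discussed in the troubleshooting entries is missing is to probe for the marker classes named in the stack traces. The class names below come from the quoted doc; the helper itself is a hypothetical sketch:

```java
// Probe the classpath with Class.forName for the marker classes the
// troubleshooting entries name; false means the owning jar is absent.
public class ClasspathProbe {
  static boolean present(String className) {
    try {
      Class.forName(className);
      return true;
    } catch (ClassNotFoundException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    System.out.println("bundle.jar present : "
        + present("software.amazon.awssdk.services.s3.S3Client"));
    System.out.println("aws-crt.jar present: "
        + present("software.amazon.awssdk.crt.s3.S3MetaRequest"));
  }
}
```

Running this inside the same JVM setup as the failing job (e.g. a Spark driver) shows immediately which jar the deployment forgot to include.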
[jira] [Updated] (HADOOP-18815) unnecessary NullPointerException encountered when starting HttpServer2 with prometheus enabled
[ https://issues.apache.org/jira/browse/HADOOP-18815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
ConfX updated HADOOP-18815:
---
Affects Version/s: 3.3.3

> unnecessary NullPointerException encountered when starting HttpServer2 with
> prometheus enabled
> ---
>
> Key: HADOOP-18815
> URL: https://issues.apache.org/jira/browse/HADOOP-18815
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.3.3
> Reporter: ConfX
> Priority: Critical
> Attachments: reproduce.sh
>
> h2. What happened?
> Attempt to start an {{HttpServer2}} failed due to an NPE thrown in
> {{MetricsSystemImpl}}.
> h2. Where's the bug?
> In line 1278 of {{HttpServer2}}, if the support for prometheus is enabled
> the server registers a prometheus sink:
> {noformat}
> if (prometheusSupport) {
>   DefaultMetricsSystem.instance()
>       .register("prometheus", "Hadoop metrics prometheus exporter",
>           prometheusMetricsSink);
> }{noformat}
> However, the problem is that if the MetricsSystemImpl returned by
> DefaultMetricsSystem.instance has not been started nor initialized, the config of the
> metric system would be set to null, thus failing the nullity check at the
> start of MetricsSystemImpl.registerSink. A better way of handling this would
> be to check in advance whether the metric system has been initialized, and
> initialize it if it has not been.
> h2. How to reproduce?
> (1) set hadoop.prometheus.endpoint.enabled to true
> (2) run org.apache.hadoop.http.TestHttpServer#testHttpResonseContainsDenyStacktrace
> {noformat}
> java.io.IOException: Problem starting http server
> ...
> Caused by: java.lang.NullPointerException: config > at > org.apache.hadoop.thirdparty.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:899) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.registerSink(MetricsSystemImpl.java:298) > at > org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:277) > at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:1279) > ... 34 more{noformat} > For an easy reproduction, run the reproduce.sh in the attachment. > We are happy to provide a patch if this issue is confirmed. -- This message was sent by Atlassian Jira (v8.20.10#820010) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
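The initialize-before-register guard the reporter proposes can be shown with a minimal stub. This is not Hadoop's `MetricsSystemImpl`; it is a stand-in reproducing the failure mode (a config that is null until init) and the proposed advance check:

```java
// Stand-in for the metrics system: registerSink trips on a null config,
// exactly as the quoted checkNotNull(config) does, unless init() ran first.
public class PrometheusRegisterGuard {
  static class MetricsSystemStub {
    private String config;                        // null until init()
    boolean initialized() { return config != null; }
    void init(String prefix) { config = prefix; }
    void registerSink(String name) {
      if (config == null) {
        throw new NullPointerException("config"); // the NPE from the report
      }
      // ... would register 'name' against config here ...
    }
  }

  public static void main(String[] args) {
    MetricsSystemStub ms = new MetricsSystemStub();
    if (!ms.initialized()) {                      // the proposed advance check
      ms.init("HttpServer2");
    }
    ms.registerSink("prometheus");                // no longer throws
    System.out.println("sink registered");
  }
}
```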
[jira] [Updated] (HADOOP-18831) Missing null check when running doRun method
[ https://issues.apache.org/jira/browse/HADOOP-18831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ConfX updated HADOOP-18831: --- Affects Version/s: 3.3.3 > Missing null check when running doRun method > > > Key: HADOOP-18831 > URL: https://issues.apache.org/jira/browse/HADOOP-18831 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.3.3 >Reporter: ConfX >Priority: Critical > Attachments: reproduce.sh > > > h2. What happened? > Got NullPointerException when running {{doRun}} method in > {{{}ZKFailoverController.java{}}}. > h2. Where's the bug? > In line 258 of {{{}ZKFailoverController.java{}}},the code lacks a check to > verify whether {{rpcServer}} is null or not. > {noformat} > private int doRun(String[] args) > throws Exception { > ... > } catch (Exception e) { > LOG.error("The failover controller encounters runtime error: ", e); > throw e; > } finally { > rpcServer.stopAndJoin(); > ... > }{noformat} > As a result, when the configuration provides a null rpcServer, the > {{rpcServer.stopAndJoin()}} operation will throw a NullPointerException. > It is essential to add a null check for the rpcServer parameter before using > it. > h2. How to reproduce? > (1) set {{ipc.server.handler.queue.size}} to {{0}} > (2) run > {{org.apache.hadoop.ha.TestZKFailoverController#testAutoFailoverOnLostZKSession}} > h2. 
Stacktrace > {noformat} > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.ha.ZKFailoverController.doRun(ZKFailoverController.java:258) > at > org.apache.hadoop.ha.ZKFailoverController.access$000(ZKFailoverController.java:63) > at > org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:181) > at > org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:177) > at > org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:503) > at > org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:177) > at > org.apache.hadoop.ha.MiniZKFCCluster$DummyZKFCThread.doWork(MiniZKFCCluster.java:301) > at > org.apache.hadoop.test.MultithreadedTestUtil$TestingThread.run(MultithreadedTestUtil.java:189){noformat} > For an easy reproduction, run the reproduce.sh in the attachment. > We are happy to provide a patch if this issue is confirmed.
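[Editor's note] The missing guard described in the report can be sketched as follows. This is a standalone simplification under assumptions, not the actual ZKFailoverController code: FailoverControllerSketch and its nested RpcServer are hypothetical stand-ins, with only the finally-block guard mirroring the proposed fix.

```java
// Standalone simplification of the guarded shutdown described in the report.
// FailoverControllerSketch and RpcServer are hypothetical stand-ins, not the
// real Hadoop classes.
public class FailoverControllerSketch {

    static class RpcServer {
        void stopAndJoin() { /* stop the RPC server and join its threads */ }
    }

    // May legitimately be null when initialization fails or is skipped
    // (the report triggers this via ipc.server.handler.queue.size = 0).
    private RpcServer rpcServer;

    int doRun(String[] args) {
        try {
            // ... main failover loop would run here ...
            return 0;
        } finally {
            // The reported bug: calling rpcServer.stopAndJoin() here
            // unconditionally throws NullPointerException. Guard it:
            if (rpcServer != null) {
                rpcServer.stopAndJoin();
            }
        }
    }

    public static void main(String[] args) {
        // With a null rpcServer, doRun now completes instead of throwing.
        System.out.println(new FailoverControllerSketch().doRun(args));
    }
}
```

With the guard in place, a run whose RPC server never started still unwinds cleanly through the finally block.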
[jira] [Commented] (HADOOP-18708) AWS SDK V2 - Implement CSE
[ https://issues.apache.org/jira/browse/HADOOP-18708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749240#comment-17749240 ] ASF GitHub Bot commented on HADOOP-18708: - steveloughran commented on PR #5767: URL: https://github.com/apache/hadoop/pull/5767#issuecomment-1658601723 thought: we should add a test to verify backwards compat with CSE v1. proposed: src/test/resources to include a small binary file of CSE data, which is uploaded to s3 and then downloaded on a client with CSE and the same secrets. would that work? > AWS SDK V2 - Implement CSE > -- > > Key: HADOOP-18708 > URL: https://issues.apache.org/jira/browse/HADOOP-18708 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.4.0 >Reporter: Ahmar Suhail >Priority: Major > Labels: pull-request-available > > S3 Encryption client for SDK V2 is now available, so add client side > encryption back in.
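[Editor's note] The compatibility test proposed in the comment boils down to a byte-for-byte round-trip assertion. The sketch below shows that core check under a loud assumption: local temp files stand in for the S3 store, and the reference bytes stand in for the CSE-v1-encrypted resource file; a real test would use an S3A client configured with CSE and the same secrets used to produce the checked-in blob.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

// Sketch of the proposed round-trip assertion: a fixed binary resource
// (stand-in for the checked-in CSE v1 file) must come back byte-for-byte
// equal after "upload" and "download". Local files stand in for S3 here.
public class CseRoundTripSketch {

    static byte[] roundTrip(byte[] original) throws IOException {
        Path store = Files.createTempFile("cse-compat", ".bin");
        try {
            Files.write(store, original);     // "upload" to the store
            return Files.readAllBytes(store); // "download" on the CSE client
        } finally {
            Files.deleteIfExists(store);
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] reference = {0x0B, 0x0E, 0x0E, 0x0F}; // stand-in for the resource blob
        if (!Arrays.equals(reference, roundTrip(reference))) {
            throw new AssertionError("round trip changed the bytes");
        }
        System.out.println("round trip ok");
    }
}
```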
[GitHub] [hadoop] slfan1989 commented on pull request #5901: YARN-7402. BackPort [GPG] Fix potential connection leak in GPGUtils.
slfan1989 commented on PR #5901: URL: https://github.com/apache/hadoop/pull/5901#issuecomment-1658591594 @goiri Can you help review this pr? Thank you very much! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] slfan1989 commented on pull request #5902: YARN-7708. BackPort [GPG] Load based policy generator.
slfan1989 commented on PR #5902: URL: https://github.com/apache/hadoop/pull/5902#issuecomment-1658589610 @goiri Can you help review this pr? Thank you very much!
[GitHub] [hadoop] slfan1989 commented on pull request #5903: YARN-3660. [Addendum] Fix GPG Pom.xml Typo.
slfan1989 commented on PR #5903: URL: https://github.com/apache/hadoop/pull/5903#issuecomment-1658590732 @goiri @ayushtkn Can you help review this pr? Thank you very much!
[GitHub] [hadoop] slfan1989 commented on pull request #5862: YARN-11536. [Federation] Router CLI Supports Batch Save the SubClusterPolicyConfiguration Of Queues.
slfan1989 commented on PR #5862: URL: https://github.com/apache/hadoop/pull/5862#issuecomment-1658588850 @goiri Can you help review this pr? Thank you very much!
[GitHub] [hadoop] p-szucs opened a new pull request, #5910: YARN-11545. Fixed FS2CS ACL conversion when all users are allowed.
p-szucs opened a new pull request, #5910: URL: https://github.com/apache/hadoop/pull/5910 Change-Id: I755a21c6d300a7a831efa675819f20a35748c6b4 ### Description of PR Currently we only convert ACLs if users or groups are set. This should be extended to check whether the "allAllowed" flag is set in the AccessControlList, so that "*" values are also preserved in the converted ACLs. ### How was this patch tested? Locally and unit tests. ### For code changes: - [x] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
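[Editor's note] The conversion change described in the PR can be sketched as below. AccessControlListSketch is a minimal stand-in that mirrors the isAllAllowed() accessor of Hadoop's AccessControlList; the convert() logic and its output format are illustrative assumptions, not the actual FS2CS converter code.

```java
import java.util.List;

// Sketch of the extended FS2CS ACL conversion described in the PR.
// AccessControlListSketch stands in for Hadoop's AccessControlList;
// convert() is illustrative, not the real converter.
public class Fs2CsAclSketch {

    static class AccessControlListSketch {
        final List<String> users;
        final List<String> groups;
        final boolean allAllowed;

        AccessControlListSketch(List<String> users, List<String> groups, boolean allAllowed) {
            this.users = users;
            this.groups = groups;
            this.allAllowed = allAllowed;
        }

        boolean isAllAllowed() { return allAllowed; }
    }

    static String convert(AccessControlListSketch acl) {
        // The fix: check the all-allowed flag first, so "*" ACLs are
        // preserved even though the user and group lists are both empty.
        if (acl.isAllAllowed()) {
            return "*";
        }
        if (acl.users.isEmpty() && acl.groups.isEmpty()) {
            return " "; // nobody allowed
        }
        return String.join(",", acl.users) + " " + String.join(",", acl.groups);
    }

    public static void main(String[] args) {
        // An all-allowed ACL with no explicit users/groups now converts
        // to "*" instead of being dropped.
        System.out.println(convert(new AccessControlListSketch(List.of(), List.of(), true)));
    }
}
```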
[GitHub] [hadoop] anmolanmol1234 commented on a diff in pull request #5909: Hadoop 18826: [ABFS] Fix for Empty Relative Path Issue Leading to GetFileStatus("/") failure.
anmolanmol1234 commented on code in PR #5909: URL: https://github.com/apache/hadoop/pull/5909#discussion_r1279335847 ## hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java: ## @@ -1661,7 +1661,12 @@ private String getOctalNotation(FsPermission fsPermission) { private String getRelativePath(final Path path) { Preconditions.checkNotNull(path, "path"); -return path.toUri().getPath(); +String relPath = path.toUri().getPath(); +if (relPath.isEmpty()) { + // This means tha path passed y user is absolute path of root without "/" Review Comment: nit typo: by and that
[GitHub] [hadoop] brumi1024 commented on pull request #5896: YARN-11543: Fix checkstyle issues after YARN-11520.
brumi1024 commented on PR #5896: URL: https://github.com/apache/hadoop/pull/5896#issuecomment-1658405691 Thanks @slfan1989 @tomscut for the review! Merged to trunk.
[GitHub] [hadoop] brumi1024 merged pull request #5896: YARN-11543: Fix checkstyle issues after YARN-11520.
brumi1024 merged PR #5896: URL: https://github.com/apache/hadoop/pull/5896
[GitHub] [hadoop] hadoop-yetus commented on pull request #5905: [YARN-11421] Graceful Decommission ignores launched containers and gets deactivated before timeout
hadoop-yetus commented on PR #5905: URL: https://github.com/apache/hadoop/pull/5905#issuecomment-1658287100 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 57s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 2 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 2s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 35m 21s | | trunk passed | | +1 :green_heart: | compile | 7m 49s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 7m 21s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 2m 6s | | trunk passed | | +1 :green_heart: | mvnsite | 4m 16s | | trunk passed | | +1 :green_heart: | javadoc | 4m 12s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 3m 56s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +0 :ok: | spotbugs | 0m 45s | | branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 34m 21s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 32s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 25s | | the patch passed | | +1 :green_heart: | compile | 6m 56s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 6m 56s | | the patch passed | | +1 :green_heart: | compile | 7m 15s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 7m 15s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 52s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5905/2/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt) | hadoop-yarn-project/hadoop-yarn: The patch generated 10 new + 202 unchanged - 0 fixed = 212 total (was 202) | | +1 :green_heart: | mvnsite | 3m 56s | | the patch passed | | +1 :green_heart: | javadoc | 3m 44s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 3m 23s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +0 :ok: | spotbugs | 0m 38s | | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site has no data from spotbugs | | +1 :green_heart: | shadedclient | 35m 17s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 20s | | hadoop-yarn-api in the patch passed. | | +1 :green_heart: | unit | 5m 54s | | hadoop-yarn-common in the patch passed. | | +1 :green_heart: | unit | 104m 45s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | unit | 0m 40s | | hadoop-yarn-site in the patch passed. | | +1 :green_heart: | asflicense | 1m 5s | | The patch does not generate ASF License warnings. 
| | | | 315m 28s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5905/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5905 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint markdownlint | | uname | Linux 536d233b2286 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 6312eb5db56410526d58944e5b2f9d1115bafed9 | | Default Java | Private
[GitHub] [hadoop] hadoop-yetus commented on pull request #5909: Hadoop 18826: [ABFS] Fix for Empty Relative Path Issue Leading to GetFileStatus("/") failure.
hadoop-yetus commented on PR #5909: URL: https://github.com/apache/hadoop/pull/5909#issuecomment-1658269164 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 28s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 31m 55s | | trunk passed | | +1 :green_heart: | compile | 0m 29s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 0m 27s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 0m 25s | | trunk passed | | +1 :green_heart: | mvnsite | 0m 32s | | trunk passed | | +1 :green_heart: | javadoc | 0m 31s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 27s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 0m 50s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 19s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 22s | | the patch passed | | +1 :green_heart: | compile | 0m 23s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 0m 23s | | the patch passed | | +1 :green_heart: | compile | 0m 20s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 0m 20s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 0m 15s | [/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5909/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt) | hadoop-tools/hadoop-azure: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) | | +1 :green_heart: | mvnsite | 0m 22s | | the patch passed | | +1 :green_heart: | javadoc | 0m 20s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 0m 19s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 0m 44s | | the patch passed | | +1 :green_heart: | shadedclient | 21m 7s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 1m 49s | | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 29s | | The patch does not generate ASF License warnings. 
| | | | 86m 14s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5909/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5909 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets | | uname | Linux 9185b5d428a8 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 79b56f0193cc517afb15ccd56d46aa04f40e9429 | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5909/1/testReport/ | | Max. process+thread count | 554 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5909/1/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org | This message was automatically generated. -- This is a
[GitHub] [hadoop] fanluoo commented on pull request #5591: HDFS-16991. fix testMkdirsRaceWithObserverRead
fanluoo commented on PR #5591: URL: https://github.com/apache/hadoop/pull/5591#issuecomment-1658259971 > LGTM. Have triggered the build again, if the build is green will merge by EOD @ayushtkn Unfortunately, the build failed; it seems like an authorization failure. Could you help take a look again?
[GitHub] [hadoop] hadoop-yetus commented on pull request #5862: YARN-11536. [Federation] Router CLI Supports Batch Save the SubClusterPolicyConfiguration Of Queues.
hadoop-yetus commented on PR #5862: URL: https://github.com/apache/hadoop/pull/5862#issuecomment-1658176642 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 28s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 1s | | detect-secrets was not available. | | +0 :ok: | buf | 0m 1s | | buf was not available. | | +0 :ok: | buf | 0m 1s | | buf was not available. | | +0 :ok: | xmllint | 0m 1s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 7 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 13m 52s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 20m 29s | | trunk passed | | +1 :green_heart: | compile | 5m 8s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 4m 42s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 1m 12s | | trunk passed | | +1 :green_heart: | mvnsite | 4m 18s | | trunk passed | | +1 :green_heart: | javadoc | 4m 20s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 4m 6s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 6m 59s | | trunk passed | | +1 :green_heart: | shadedclient | 21m 40s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 24s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 2m 28s | | the patch passed | | +1 :green_heart: | compile | 4m 17s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | cc | 4m 17s | | the patch passed | | +1 :green_heart: | javac | 4m 17s | | the patch passed | | +1 :green_heart: | compile | 4m 18s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | cc | 4m 18s | | the patch passed | | +1 :green_heart: | javac | 4m 18s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 1m 5s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5862/10/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt) | hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 66 unchanged - 0 fixed = 67 total (was 66) | | +1 :green_heart: | mvnsite | 3m 54s | | the patch passed | | +1 :green_heart: | javadoc | 3m 50s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 3m 42s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 7m 26s | | the patch passed | | +1 :green_heart: | shadedclient | 21m 29s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 56s | | hadoop-yarn-api in the patch passed. | | +1 :green_heart: | unit | 4m 49s | | hadoop-yarn-common in the patch passed. | | +1 :green_heart: | unit | 2m 47s | | hadoop-yarn-server-common in the patch passed. | | +1 :green_heart: | unit | 86m 49s | | hadoop-yarn-server-resourcemanager in the patch passed. | | +1 :green_heart: | unit | 26m 2s | | hadoop-yarn-client in the patch passed. 
| | +1 :green_heart: | unit | 0m 38s | | hadoop-yarn-server-router in the patch passed. | | +1 :green_heart: | asflicense | 0m 49s | | The patch does not generate ASF License warnings. | | | | 267m 55s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5862/10/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5862 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint bufcompat xmllint | | uname | Linux 8a1443b47e72 4.15.0-213-gene
[jira] [Commented] (HADOOP-18826) abfs getFileStatus(/) fails with "Value for one of the query parameters specified in the request URI is invalid.", 400
[ https://issues.apache.org/jira/browse/HADOOP-18826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749146#comment-17749146 ] Anuj Modi commented on HADOOP-18826: PR for fix: [Hadoop 18826: [ABFS] Fix for Empty Relative Path Issue Leading to GetFileStatus("/") failure. by anujmodi2021 · Pull Request #5909 · apache/hadoop (github.com)|https://github.com/apache/hadoop/pull/5909] > abfs getFileStatus(/) fails with "Value for one of the query parameters > specified in the request URI is invalid.", 400 > -- > > Key: HADOOP-18826 > URL: https://issues.apache.org/jira/browse/HADOOP-18826 > Project: Hadoop Common > Issue Type: Bug > Components: fs/azure >Affects Versions: 3.3.1, 3.3.2, 3.3.5, 3.3.3, 3.3.4, 3.3.6 >Reporter: Sergey Shabalov >Assignee: Anuj Modi >Priority: Major > Attachments: test_hadoop-azure-3_3_1-FileSystem_getFileStatus - > Copy.zip > > > I am using hadoop-azure-3.3.0.jar and have written code: > {code:java} > static final String ROOT_DIR = > "abfs://ssh-test...@sshadlsgen2.dfs.core.windows.net", > Configuration config = new Configuration(); > config.set("fs.defaultFS",ROOT_DIR); > config.set("fs.adl.oauth2.access.token.provider.type","ClientCredential"); > config.set("fs.adl.oauth2.client.id",""); > config.set("fs.adl.oauth2.credential",""); > config.set("fs.adl.oauth2.refresh.url",""); > config.set("fs.azure.account.key.sshadlsgen2.dfs.core.windows.net",ACCESS_TOKEN); > > config.set("fs.azure.skipUserGroupMetadataDuringInitialization","true"); > FileSystem fs = FileSystem.get(config); > System.out.println( "\nfs:'"+fs.toString()+"'"); > FileStatus status = fs.getFileStatus(new Path(ROOT_DIR)); // !!! > Exception in 3.3.1-* > System.out.println( "\nstatus:'"+status.toString()+"'"); > {code} > It did work properly till 3.3.1. 
> But in 3.3.1 it fails with exception: > {code:java} > Caused by: Operation failed: "Value for one of the query parameters specified > in the request URI is invalid.", 400, HEAD, > https://sshadlsgen2.dfs.core.windows.net/ssh-test-fs?upn=false&action=getAccessControl&timeout=90 > at > org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.completeExecute(AbfsRestOperation.java:218) > at > org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.lambda$execute$0(AbfsRestOperation.java:181) > at > org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.measureDurationOfInvocation(IOStatisticsBinding.java:494) > at > org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfInvocation(IOStatisticsBinding.java:465) > at > org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:179) > at > org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:942) > at > org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:924) > at > org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getFileStatus(AzureBlobFileSystemStore.java:846) > at > org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getFileStatus(AzureBlobFileSystem.java:507) > {code} > I performed some research and found: > In hadoop-azure-3.3.0.jar we see: > {code:java} > org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore{ > ... > public FileStatus getFileStatus(final Path path) throws IOException { > ... > Line 604: op = > client.getAclStatus(AbfsHttpConstants.FORWARD_SLASH + > AbfsHttpConstants.ROOT_PATH); > ... > } > ... > } {code} > and this code produces the REST request: > {code:java} > https://sshadlsgen2.dfs.core.windows.net/ssh-test-fs//?upn=false&action=getAccessControl&timeout=90 > {code} > There is a trailing slash in the path part "...ssh-test-fs//?upn=false..." This request does work properly.
> But since hadoop-azure-3.3.1.jar till the latest hadoop-azure-3.3.6.jar we see: > {code:java} > org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore { > ... > public FileStatus getFileStatus(final Path path) throws IOException { > ... > perfInfo.registerCallee("getAclStatus"); > Line 846: op = client.getAclStatus(getRelativePath(path)); > ... > } > ... > } > Line 1492: > private String getRelativePath(final Path path) { > ... > return path.toUri().getPath(); > } {code} > and this code produces the REST request: > {code:java} > https://sshadlsgen2.dfs.core.windows.net/ssh-test-fs?upn=false&action=getAccessControl&timeout=90 > {code} > There is no trailing slash in the path part "...ssh-test-fs?upn=false..." It happens because the new code
[GitHub] [hadoop] anujmodi2021 opened a new pull request, #5909: Hadoop 18826
anujmodi2021 opened a new pull request, #5909: URL: https://github.com/apache/hadoop/pull/5909 ### Description of PR Jira Ticket: https://issues.apache.org/jira/browse/HADOOP-18826 Recently a bug was reported in ABFS getFileStatus() call (Refer to the Jira Ticket Above for more details). Bug was only hit when getFileStatus() call was made on HNS account and the path passed was absolute path of the root without a "/" at the end. This is equivalent of having an empty relative path which was causing the issue. This PR fixes the issue by appending a "/" at the end of empty relative path. This change will affect only those calls which are made on absolute path of the root without "/" at the end. ### How was this patch tested? Test case added for failing scenario. Complete test suite was run. ### For code changes: - [ ] Does the title or this PR starts with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')? - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files? AGGREGATED TEST RESULT HNS-OAuth [INFO] Results: [INFO] [ERROR] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5 [INFO] Results: [INFO] [ERROR] Errors: [ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut test timed o... [INFO] [ERROR] Tests run: 587, Failures: 0, Errors: 1, Skipped: 54 [INFO] Results: [INFO] [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41 HNS-SharedKey [INFO] Results: [INFO] [ERROR] Failures: [ERROR] TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting:181->fuzzyValidate:64 The actual value 28 is not within the expected range: [5.60, 8.40]. 
[INFO] [ERROR] Tests run: 141, Failures: 1, Errors: 0, Skipped: 5
[INFO] Results:
[INFO] [ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut test timed o...
[INFO] [ERROR] Tests run: 587, Failures: 0, Errors: 1, Skipped: 54
[INFO] Results:
[INFO] [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41

NonHNS-SharedKey
[INFO] Results:
[INFO] [ERROR] Tests run: 141, Failures: 0, Errors: 0, Skipped: 11
[INFO] Results:
[INFO] [ERROR] Failures:
[ERROR] ITestAzureBlobFileSystemCheckAccess.testCheckAccessForAccountWithoutNS:181 Expecting org.apache.hadoop.security.AccessControlException with text "This request is not authorized to perform this operation using this permission.", 403 but got : "void"
[INFO] [ERROR] Tests run: 587, Failures: 1, Errors: 0, Skipped: 277
[INFO] Results:
[INFO] [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 44

AppendBlob-HNS-OAuth
[INFO] Results:
[INFO] [ERROR] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
[INFO] Results:
[INFO] [ERROR] Errors:
[ERROR] ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut test timed o...
[INFO] [ERROR] Tests run: 587, Failures: 0, Errors: 1, Skipped: 54
[INFO] Results:
[INFO] [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41

Time taken: 79 mins 18 secs.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
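The fix described in the PR — appending "/" when the computed relative path of the root is empty — can be sketched in shell. This is illustrative only: the actual change lives in the ABFS Java client, and the function and variable names below are made up for the sketch.

```shell
#!/bin/sh
# Illustrative sketch of the HADOOP-18826 fix, not the actual ABFS code:
# when the caller passes the absolute root path without a trailing "/",
# the computed relative path is empty, so "/" is substituted before use.
relative_path() {
  root="$1"
  abs="$2"
  rel="${abs#"$root"}"       # strip the filesystem root prefix
  [ -z "$rel" ] && rel="/"   # empty relative path == root without trailing "/"
  printf '%s\n' "$rel"
}
```

With a root of `abfs://container@account.dfs.core.windows.net`, passing the same string as the absolute path now yields "/" rather than the empty relative path that triggered the bug.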
[GitHub] [hadoop] hadoop-yetus commented on pull request #5902: YARN-7708. BackPort [GPG] Load based policy generator.
hadoop-yetus commented on PR #5902: URL: https://github.com/apache/hadoop/pull/5902#issuecomment-1658091023 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 58s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | detsecrets | 0m 0s | | detect-secrets was not available. | | +0 :ok: | xmllint | 0m 0s | | xmllint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 14m 25s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 36m 7s | | trunk passed | | +1 :green_heart: | compile | 8m 10s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | compile | 7m 27s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | checkstyle | 2m 1s | | trunk passed | | +1 :green_heart: | mvnsite | 5m 25s | | trunk passed | | +1 :green_heart: | javadoc | 5m 31s | | trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 4m 44s | | trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 15m 43s | | trunk passed | | +1 :green_heart: | shadedclient | 37m 56s | | branch has no errors when building and testing our client artifacts. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 4m 29s | | the patch passed | | +1 :green_heart: | compile | 7m 31s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javac | 7m 31s | | the patch passed | | +1 :green_heart: | compile | 7m 24s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | javac | 7m 24s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 54s | | the patch passed | | +1 :green_heart: | mvnsite | 5m 5s | | the patch passed | | +1 :green_heart: | javadoc | 5m 7s | | the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 | | +1 :green_heart: | javadoc | 4m 27s | | the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | +1 :green_heart: | spotbugs | 16m 12s | | the patch passed | | +1 :green_heart: | shadedclient | 38m 15s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 243m 40s | | hadoop-yarn in the patch passed. | | +1 :green_heart: | unit | 1m 10s | | hadoop-yarn-api in the patch passed. | | +1 :green_heart: | unit | 5m 40s | | hadoop-yarn-common in the patch passed. | | +1 :green_heart: | unit | 0m 59s | | hadoop-yarn-server-globalpolicygenerator in the patch passed. | | +1 :green_heart: | asflicense | 1m 2s | | The patch does not generate ASF License warnings. 
| | | | 488m 3s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5902/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/5902 | | Optional Tests | dupname asflicense codespell detsecrets xmllint compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle | | uname | Linux 58f4ed5108e7 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 7c7443957e01c24d66e16073da4131d5cbe275db | | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5902/3/testReport/ | | Max. process+thread count | 1752 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn hadoop-yarn-project/hadoop-yarn/hadoop-ya
[GitHub] [hadoop] hfutatzhanghb commented on pull request #5889: HDFS-17123. Sort datanodeStorages when generating StorageBlockReport[] in method BPServiceActor#blockReport for future convenience
hfutatzhanghb commented on PR #5889: URL: https://github.com/apache/hadoop/pull/5889#issuecomment-1658078551

> Hi @hfutatzhanghb, thanks for your contribution. When reviewing PRs, I found that #5889, #5891 and #5814 all try to solve the same issue, just split up, right? If so, I suggest filing a new JIRA, making the other JIRAs its subtasks, and submitting one PR to solve the issue. If the patch is large, we should split it and explain why and how. The final purpose is still to focus on the origin and the solution. Thanks again.

@Hexiaoqiao Thanks for the review and suggestions. Yes, the PRs mentioned above all aim to solve the same issue. I will do that soon, thanks again~
[jira] [Updated] (HADOOP-16146) Make start-build-env.sh safe in case of misusage of DOCKER_INTERACTIVE_RUN
[ https://issues.apache.org/jira/browse/HADOOP-16146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marton Elek updated HADOOP-16146:
Status: Open (was: Patch Available)

> Make start-build-env.sh safe in case of misusage of DOCKER_INTERACTIVE_RUN
> Key: HADOOP-16146
> URL: https://issues.apache.org/jira/browse/HADOOP-16146
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Marton Elek
> Assignee: Marton Elek
> Priority: Major
> Labels: pull-request-available
>
> [~aw] reported the problem in HDDS-891:
> {quote}DOCKER_INTERACTIVE_RUN opens the door for users to set command line options to docker. Most notably, -c and -v and a few others that share one particular characteristic: they reference the file system. As soon as shell code hits the file system, it is no longer safe to assume space delimited options. In other words, -c /My Cool Filesystem/Docker Files/config.json or -v /c_drive/Program Files/Data:/data may be something a user wants to do, but the script now breaks because of the IFS assumptions.
> {quote}
> DOCKER_INTERACTIVE_RUN was used in Jenkins to run the normal build process in docker. If DOCKER_INTERACTIVE_RUN is set to empty, the docker container is started without the "-i -t" flags.
> This can be improved by checking the value of the environment variable and allowing only a fixed set of values.
--
This message was sent by Atlassian Jira (v8.20.10#820010)
-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
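The hardening suggested in the issue — accepting only a fixed set of values for DOCKER_INTERACTIVE_RUN instead of word-splitting arbitrary user input into docker options — could look roughly like this. A sketch only, not the actual start-build-env.sh code; the function name is made up.

```shell
#!/bin/sh
# Hypothetical allowlist check for DOCKER_INTERACTIVE_RUN (not the actual
# start-build-env.sh code). Unset keeps the interactive default "-i -t";
# an explicitly empty value means a non-interactive run; anything else is
# rejected instead of being word-split into arbitrary docker run options.
docker_run_flags() {
  default_flags="-i -t"
  value="${DOCKER_INTERACTIVE_RUN-$default_flags}"
  case "$value" in
    "")
      # explicitly empty: non-interactive run, no flags
      printf '\n' ;;
    "-i -t")
      # default (unset) or explicitly interactive
      printf '%s\n' "$value" ;;
    *)
      echo "ERROR: unsupported DOCKER_INTERACTIVE_RUN value: $value" >&2
      return 1 ;;
  esac
}
```

The `${VAR-default}` expansion distinguishes unset (use the default) from set-but-empty (run non-interactively), which matches the behavior described in the issue, while a value like `-v /c_drive/Program Files/Data:/data` is rejected up front rather than silently mangled by IFS splitting.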
[jira] [Resolved] (HADOOP-16146) Make start-build-env.sh safe in case of misusage of DOCKER_INTERACTIVE_RUN
[ https://issues.apache.org/jira/browse/HADOOP-16146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marton Elek resolved HADOOP-16146.
Resolution: Won't Fix (no review)
[jira] [Updated] (HADOOP-16146) Make start-build-env.sh safe in case of misusage of DOCKER_INTERACTIVE_RUN
[ https://issues.apache.org/jira/browse/HADOOP-16146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HADOOP-16146:
Labels: pull-request-available (was: )
[jira] [Commented] (HADOOP-16146) Make start-build-env.sh safe in case of misusage of DOCKER_INTERACTIVE_RUN
[ https://issues.apache.org/jira/browse/HADOOP-16146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749115#comment-17749115 ]

ASF GitHub Bot commented on HADOOP-16146:
elek closed pull request #516: HADOOP-16146. Make start-build-env.sh safe in case of misusage of DOCKER_INTERACTIVE_RUN. URL: https://github.com/apache/hadoop/pull/516
[jira] [Commented] (HADOOP-16146) Make start-build-env.sh safe in case of misusage of DOCKER_INTERACTIVE_RUN
[ https://issues.apache.org/jira/browse/HADOOP-16146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749114#comment-17749114 ]

ASF GitHub Bot commented on HADOOP-16146:
elek commented on PR #516: URL: https://github.com/apache/hadoop/pull/516#issuecomment-1658050670 Looks like nobody is interested in this change.
[GitHub] [hadoop] elek closed pull request #516: HADOOP-16146. Make start-build-env.sh safe in case of misusage of DOCKER_INTERACTIVE_RUN.
elek closed pull request #516: HADOOP-16146. Make start-build-env.sh safe in case of misusage of DOCKER_INTERACTIVE_RUN. URL: https://github.com/apache/hadoop/pull/516
[GitHub] [hadoop] elek commented on pull request #516: HADOOP-16146. Make start-build-env.sh safe in case of misusage of DOCKER_INTERACTIVE_RUN.
elek commented on PR #516: URL: https://github.com/apache/hadoop/pull/516#issuecomment-1658050670 Looks like nobody is interested in this change.
[jira] [Commented] (HADOOP-18833) Install bats for building Hadoop on Windows
[ https://issues.apache.org/jira/browse/HADOOP-18833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749107#comment-17749107 ]

Steve Loughran commented on HADOOP-18833:
well, install it. it's done outside the hadoop build.

> Install bats for building Hadoop on Windows
> Key: HADOOP-18833
> URL: https://issues.apache.org/jira/browse/HADOOP-18833
> Project: Hadoop Common
> Issue Type: Bug
> Components: common
> Affects Versions: 3.4.0
> Environment: Windows 10
> Reporter: Gautham Banasandra
> Assignee: Gautham Banasandra
> Priority: Major
> Fix For: 3.4.0
> Attachments: archive.zip
>
> We get the following error while building Hadoop on Windows (logs attached - [^archive.zip]) -
> {code}
> [INFO] --- maven-antrun-plugin:1.8:run (common-test-bats-driver) @ hadoop-common ---
> [INFO] Executing tasks
> main:
> [exec] ERROR: bats not installed. Skipping bash tests.
> [exec] ERROR: Please install bats as soon as possible.
> {code}
> We need to install bats to fix this.
[GitHub] [hadoop] steveloughran commented on pull request #5908: HADOOP-18832. Upgrade aws-java-sdk to 1.12.499
steveloughran commented on PR #5908: URL: https://github.com/apache/hadoop/pull/5908#issuecomment-1658017436 you've seen the "qualifying an update" section of the testing docs, right?
[jira] [Commented] (HADOOP-18832) Upgrade aws-java-sdk to 1.12.499+
[ https://issues.apache.org/jira/browse/HADOOP-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17749105#comment-17749105 ]

ASF GitHub Bot commented on HADOOP-18832:
steveloughran commented on PR #5908: URL: https://github.com/apache/hadoop/pull/5908#issuecomment-1658017436 you've seen the "qualifying an update" section of the testing docs, right?

> Upgrade aws-java-sdk to 1.12.499+
> Key: HADOOP-18832
> URL: https://issues.apache.org/jira/browse/HADOOP-18832
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
> Labels: pull-request-available
>
> aws-java-sdk versions < 1.12.499 use a vulnerable version of netty and hence show up in security CVE scans (CVE-2023-34462). The safe netty version is 4.1.94.Final, which is used by aws-java-sdk 1.12.499+.
[GitHub] [hadoop] LiuGuH commented on pull request #5888: HDFS-17121. BPServiceActor to provide new thread to handle FBR
LiuGuH commented on PR #5888: URL: https://github.com/apache/hadoop/pull/5888#issuecomment-1657869992

Apache Yetus (Jenkins) error:
mvninstall: Could not transfer artifact org.codehaus.mojo:extra-enforcer-rules:pom:1.5.1 from/to central (https://repo.maven.apache.org/maven2): Transfer failed for https://repo.maven.apache.org/maven2/org/codehaus/mojo/extra-enforcer-rules/1.5.1/extra-enforcer-rules-1.5.1.pom: Connection reset -> [Help 1]

It could be a network connection problem. How can I trigger the build again without changing the code?