[GitHub] [hadoop] tomscut commented on pull request #4090: HDFS-16516. Fix Fsshell wrong params
tomscut commented on pull request #4090: URL: https://github.com/apache/hadoop/pull/4090#issuecomment-1074626818

LGTM.
[GitHub] [hadoop] tomscut commented on pull request #4087: HDFS-16513. [SBN read] Observer Namenode does not trigger the edits r…
tomscut commented on pull request #4087: URL: https://github.com/apache/hadoop/pull/4087#issuecomment-1074579167

Hi @xkrogen @sunchao, please take a look at this. Thanks.
[GitHub] [hadoop] tomscut commented on pull request #4057: HDFS-16498. Fix NPE for checkBlockReportLease
tomscut commented on pull request #4057: URL: https://github.com/apache/hadoop/pull/4057#issuecomment-1074573849

Hi @ayushtkn, I fixed the problem you mentioned, please have a look. Thanks.
[GitHub] [hadoop] hadoop-yetus commented on pull request #4057: HDFS-16498. Fix NPE for checkBlockReportLease
hadoop-yetus commented on pull request #4057: URL: https://github.com/apache/hadoop/pull/4057#issuecomment-1074439142

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 57s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 35m 59s | | trunk passed |
| +1 :green_heart: | compile | 1m 32s | | trunk passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 22s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 0s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 31s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 4s | | trunk passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 35s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 24s | | trunk passed |
| +1 :green_heart: | shadedclient | 25m 43s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 19s | | the patch passed |
| +1 :green_heart: | compile | 1m 25s | | the patch passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 1m 25s | | the patch passed |
| +1 :green_heart: | compile | 1m 18s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 1m 18s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 53s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 22s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 56s | | the patch passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 30s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 26s | | the patch passed |
| +1 :green_heart: | shadedclient | 25m 47s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 444m 6s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4057/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 52s | | The patch does not generate ASF License warnings. |
| | | 553m 59s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
| | hadoop.hdfs.server.namenode.TestFileTruncate |
| | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
| | hadoop.hdfs.server.namenode.TestFsck |
| | hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR |
| | hadoop.hdfs.server.namenode.TestCheckpoint |
| | hadoop.hdfs.server.namenode.ha.TestHASafeMode |
| | hadoop.hdfs.server.namenode.TestAddBlock |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4057/6/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4057 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux bbfd86f52e7c 4.15.0-166-generic #174-Ubuntu SMP Wed Dec 8 19:07:44 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 1558c1020841ab7769f79753048f321c2dd81996 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4057/6/testReport/ |
| Max. process+thread count | 2607 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-p
[GitHub] [hadoop] hadoop-yetus commented on pull request #4089: HDFS-16515. Improve ec exception message
hadoop-yetus commented on pull request #4089: URL: https://github.com/apache/hadoop/pull/4089#issuecomment-1074415214

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 17m 49s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 36m 48s | | trunk passed |
| +1 :green_heart: | compile | 1m 32s | | trunk passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 21s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 0s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 29s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 4s | | trunk passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 29s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 21s | | trunk passed |
| +1 :green_heart: | shadedclient | 25m 46s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 17s | | the patch passed |
| +1 :green_heart: | compile | 1m 23s | | the patch passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 1m 23s | | the patch passed |
| +1 :green_heart: | compile | 1m 13s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 1m 13s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 51s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 20s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 54s | | the patch passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 27s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 25s | | the patch passed |
| +1 :green_heart: | shadedclient | 26m 50s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 378m 38s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4089/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 39s | | The patch does not generate ASF License warnings. |
| | | 507m 5s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.TestUnsetAndChangeDirectoryEcPolicy |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4089/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4089 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 5d6287e673e0 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / f50700e0f9c76cdff6845db5fb335a9d5a7d8269 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4089/1/testReport/ |
| Max. process+thread count | 1777 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4089/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
[jira] [Created] (HADOOP-18167) Add metrics to track delegation token secret manager operations
Hector Sandoval Chaverri created HADOOP-18167:
-------------------------------------------------

             Summary: Add metrics to track delegation token secret manager operations
                 Key: HADOOP-18167
                 URL: https://issues.apache.org/jira/browse/HADOOP-18167
             Project: Hadoop Common
          Issue Type: Improvement
            Reporter: Hector Sandoval Chaverri

New metrics to track operations that store, update and remove delegation tokens in implementations of AbstractDelegationTokenSecretManager. This will help evaluate the impact of using different secret managers and add optimizations.
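The issue names only the operations to instrument. As a rough sketch, such a source could be built on Hadoop's metrics2 annotations as below; every class, metric, and registration name here is hypothetical, not taken from any eventual patch:

```java
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableRate;

// Hypothetical metrics source: times each secret-manager operation so the
// cost of different token stores (ZK-, DB-, or memory-backed) can be compared.
@Metrics(about = "Delegation token secret manager metrics", context = "token")
public class DelegationTokenSecretManagerMetrics {

  @Metric("Latency of token store operations") MutableRate storeToken;
  @Metric("Latency of token update operations") MutableRate updateToken;
  @Metric("Latency of token remove operations") MutableRate removeToken;

  public static DelegationTokenSecretManagerMetrics create() {
    return DefaultMetricsSystem.instance().register(
        "DelegationTokenSecretManagerMetrics",
        "Metrics for AbstractDelegationTokenSecretManager operations",
        new DelegationTokenSecretManagerMetrics());
  }

  // A caller inside the secret manager would time each operation, e.g.:
  //   long start = Time.monotonicNow();
  //   storeTokenImpl(ident, info);
  //   metrics.storeToken.add(Time.monotonicNow() - start);
}
```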
[GitHub] [hadoop] hadoop-yetus commented on pull request #4060: YARN-11084. Introduce new config to specify AM default node-label whe…
hadoop-yetus commented on pull request #4060: URL: https://github.com/apache/hadoop/pull/4060#issuecomment-1074331708

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 17m 7s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +0 :ok: | markdownlint | 0m 1s | | markdownlint was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 12m 36s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 50s | | trunk passed |
| +1 :green_heart: | compile | 10m 16s | | trunk passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 8m 47s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 39s | | trunk passed |
| +1 :green_heart: | mvnsite | 3m 27s | | trunk passed |
| +1 :green_heart: | javadoc | 3m 3s | | trunk passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 2m 48s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +0 :ok: | spotbugs | 0m 28s | | branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site no spotbugs output file (spotbugsXml.xml) |
| +1 :green_heart: | shadedclient | 23m 23s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 22s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 18s | | the patch passed |
| +1 :green_heart: | compile | 9m 44s | | the patch passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 9m 44s | | the patch passed |
| +1 :green_heart: | compile | 8m 49s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 8m 49s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 33s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4060/5/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt) | hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 229 unchanged - 0 fixed = 230 total (was 229) |
| +1 :green_heart: | mvnsite | 3m 12s | | the patch passed |
| +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 2m 46s | | the patch passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 2m 37s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +0 :ok: | spotbugs | 0m 24s | | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site has no data from spotbugs |
| +1 :green_heart: | shadedclient | 23m 4s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| -1 :x: | unit | 1m 3s | [/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4060/5/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt) | hadoop-yarn-api in the patch passed. |
| +1 :green_heart: | unit | 4m 42s | | hadoop-yarn-common in the patch passed. |
| -1 :x: | unit | 101m 12s | [/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4060/5/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt) | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 :green_heart: | unit | 0m 24s | | hadoop-yarn-site in the patch passed. |
| +1 :green_heart: | asflicense | 0m 45s | | The patch does not generate ASF License warnings. |
| | | 288m 28s | | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
| | hadoop.yarn.server.resourcemanager.TestAppManager |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/
[jira] [Work logged] (HADOOP-18154) S3A Authentication to support WebIdentity
[ https://issues.apache.org/jira/browse/HADOOP-18154?focusedWorklogId=745370&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-745370 ]

ASF GitHub Bot logged work on HADOOP-18154:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 21/Mar/22 18:59
            Start Date: 21/Mar/22 18:59
    Worklog Time Spent: 10m
      Work Description: steveloughran commented on a change in pull request #4070:
URL: https://github.com/apache/hadoop/pull/4070#discussion_r831445562

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
## @@ -142,6 +142,10 @@ private Constants() {

   public static final String ASSUMED_ROLE_CREDENTIALS_DEFAULT =
       SimpleAWSCredentialsProvider.NAME;

+  /**
+   * Absolute path to the web identity token file

Review comment: nit, add a . at the end of the sentence. javadoc versions like that. also in docs, say "path in local/mounted filesystem" so it is clear it is not a cluster fs like HDFS

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/OIDCTokenCredentialsProvider.java
## @@ -0,0 +1,105 @@

+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;

Review comment: can you move to org.apache.hadoop.fs.s3a.auth

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/OIDCTokenCredentialsProvider.java
## @@ -0,0 +1,105 @@

+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import com.amazonaws.auth.AWSCredentials;
+import com.amazonaws.auth.AWSCredentialsProvider;
+import com.amazonaws.auth.WebIdentityTokenCredentialsProvider;
+
+import org.apache.commons.lang3.StringUtils;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.ProviderUtils;
+
+import org.slf4j.Logger;
+
+import java.io.IOException;
+
+import static org.apache.hadoop.fs.s3a.Constants.*;
+
+/**
+ * Support OpenID Connect (OIDC) token for authenticating with AWS.
+ *
+ * Please note that users may reference this class name from configuration
+ * property fs.s3a.aws.credentials.provider. Therefore, changing the class name
+ * would be a backward-incompatible change.
+ *
+ * This credential provider must not fail in creation because that will
+ * break a chain of credential providers.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Stable
+public class OIDCTokenCredentialsProvider implements AWSCredentialsProvider {
+  public static final String NAME
+      = "org.apache.hadoop.fs.s3a.OIDCTokenCredentialsProvider";
+
+  /** Reuse the S3AFileSystem log. */
+  private static final Logger LOG = S3AFileSystem.LOG;
+
+  private String jwtPath;
+  private String roleARN;
+  private String sessionName;
+  private IOException lookupIOE;
+
+  public OIDCTokenCredentialsProvider(Configuration conf) {
+    try {
+      Configuration c = ProviderUtils.excludeIncompatibleCredentialProviders(
+          conf, S3AFileSystem.class);
+      this.jwtPath = S3AUtils.lookupPassword(c, JWT_PATH, null);
+      this.roleARN = S3AUtils.lookupPassword(c, ASSUMED_ROLE_ARN, null);
+      this.sessionName = S3AUtils.lookupPassword(c, ASSUMED_ROLE_SESS
[GitHub] [hadoop] steveloughran commented on a change in pull request #4070: HADOOP-18154. S3A Authentication to support WebIdentity
steveloughran commented on a change in pull request #4070: URL: https://github.com/apache/hadoop/pull/4070#discussion_r831445562

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
## @@ -142,6 +142,10 @@ private Constants() {

   public static final String ASSUMED_ROLE_CREDENTIALS_DEFAULT =
       SimpleAWSCredentialsProvider.NAME;

+  /**
+   * Absolute path to the web identity token file

Review comment: nit, add a . at the end of the sentence. javadoc versions like that. also in docs, say "path in local/mounted filesystem" so it is clear it is not a cluster fs like HDFS

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/OIDCTokenCredentialsProvider.java
## @@ -0,0 +1,105 @@

+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;

Review comment: can you move to org.apache.hadoop.fs.s3a.auth

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/OIDCTokenCredentialsProvider.java
## @@ -0,0 +1,105 @@

+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import com.amazonaws.auth.AWSCredentials;
+import com.amazonaws.auth.AWSCredentialsProvider;
+import com.amazonaws.auth.WebIdentityTokenCredentialsProvider;
+
+import org.apache.commons.lang3.StringUtils;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.ProviderUtils;
+
+import org.slf4j.Logger;
+
+import java.io.IOException;
+
+import static org.apache.hadoop.fs.s3a.Constants.*;
+
+/**
+ * Support OpenID Connect (OIDC) token for authenticating with AWS.
+ *
+ * Please note that users may reference this class name from configuration
+ * property fs.s3a.aws.credentials.provider. Therefore, changing the class name
+ * would be a backward-incompatible change.
+ *
+ * This credential provider must not fail in creation because that will
+ * break a chain of credential providers.
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Stable
+public class OIDCTokenCredentialsProvider implements AWSCredentialsProvider {
+  public static final String NAME
+      = "org.apache.hadoop.fs.s3a.OIDCTokenCredentialsProvider";
+
+  /** Reuse the S3AFileSystem log. */
+  private static final Logger LOG = S3AFileSystem.LOG;
+
+  private String jwtPath;
+  private String roleARN;
+  private String sessionName;
+  private IOException lookupIOE;
+
+  public OIDCTokenCredentialsProvider(Configuration conf) {
+    try {
+      Configuration c = ProviderUtils.excludeIncompatibleCredentialProviders(
+          conf, S3AFileSystem.class);
+      this.jwtPath = S3AUtils.lookupPassword(c, JWT_PATH, null);
+      this.roleARN = S3AUtils.lookupPassword(c, ASSUMED_ROLE_ARN, null);
+      this.sessionName = S3AUtils.lookupPassword(c, ASSUMED_ROLE_SESSION_NAME, null);
+    } catch (IOException e) {
+      lookupIOE = e;
+    }
+  }
+
+  public AWSCredentials getCredentials() {
+    if (lookupIOE != null) {
+      // propagate any initialization problem
+      throw new CredentialInitializationException(lookupIOE.toString(),
+          lookupIOE);
+    }
+
+    LOG.debug("jwtPath {} roleARN {} sessionName {}", jwtPath, roleARN, sessionName);
+
+    if (
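For readers following the thread, a minimal sketch of how a client would select the provider under review. The property keys are the ones from the earlier revision of the patch (the review above moves them into Constants, so the final names may differ), and the token path, role ARN, session name, and bucket are placeholder assumptions:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OidcS3aExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Select the provider under review (class name taken from the PR).
    conf.set("fs.s3a.aws.credentials.provider",
        "org.apache.hadoop.fs.s3a.OIDCTokenCredentialsProvider");
    // Keys as in the earlier patch revision; values are placeholders.
    conf.set("fs.s3a.jwt.path",
        "/var/run/secrets/eks.amazonaws.com/serviceaccount/token");
    conf.set("fs.s3a.role.arn", "arn:aws:iam::111122223333:role/s3a-reader");
    conf.set("fs.s3a.session.name", "delta-sharing");

    try (FileSystem fs = new Path("s3a://example-bucket/").getFileSystem(conf)) {
      System.out.println(fs.exists(new Path("s3a://example-bucket/data")));
    }
  }
}
```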
[jira] [Work logged] (HADOOP-18154) S3A Authentication to support WebIdentity
[ https://issues.apache.org/jira/browse/HADOOP-18154?focusedWorklogId=745369&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-745369 ]

ASF GitHub Bot logged work on HADOOP-18154:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 21/Mar/22 18:56
            Start Date: 21/Mar/22 18:56
    Worklog Time Spent: 10m
      Work Description: steveloughran commented on a change in pull request #4070:
URL: https://github.com/apache/hadoop/pull/4070#discussion_r831444157

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/OIDCTokenCredentialsProvider.java
## @@ -0,0 +1,79 @@

+package org.apache.hadoop.fs.s3a;
+
+import org.apache.commons.lang3.StringUtils;
+import com.amazonaws.auth.AWSCredentials;
+import com.amazonaws.auth.AWSCredentialsProvider;
+import com.amazonaws.auth.WebIdentityTokenCredentialsProvider;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.ProviderUtils;
+import org.slf4j.Logger;
+
+import java.io.IOException;
+
+/**
+ * WebIdentityTokenCredentialsProvider supports static configuration
+ * of OIDC token path, role ARN and role session name.
+ *
+ */
+//@InterfaceAudience.Public
+//@InterfaceStability.Stable
+public class OIDCTokenCredentialsProvider implements AWSCredentialsProvider {
+    public static final String NAME
+        = "org.apache.hadoop.fs.s3a.OIDCTokenCredentialsProvider";
+
+    //these are the parameters to document and to pass along with the class
+    //usually from import static org.apache.hadoop.fs.s3a.Constants.*;
+    public static final String JWT_PATH = "fs.s3a.jwt.path";
+    public static final String ROLE_ARN = "fs.s3a.role.arn";
+    public static final String SESSION_NAME = "fs.s3a.session.name";
+
+    /** Reuse the S3AFileSystem log. */
+    private static final Logger LOG = S3AFileSystem.LOG;
+
+    private String jwtPath;
+    private String roleARN;
+    private String sessionName;
+    private IOException lookupIOE;
+
+    public OIDCTokenCredentialsProvider(Configuration conf) {
+        try {
+            Configuration c = ProviderUtils.excludeIncompatibleCredentialProviders(
+                conf, S3AFileSystem.class);
+            this.jwtPath = S3AUtils.lookupPassword(c, JWT_PATH, null);
+            this.roleARN = S3AUtils.lookupPassword(c, ROLE_ARN, null);
+            this.sessionName = S3AUtils.lookupPassword(c, SESSION_NAME, null);
+        } catch (IOException e) {
+            lookupIOE = e;
+        }
+    }
+
+    public AWSCredentials getCredentials() {
+        if (lookupIOE != null) {
+            // propagate any initialization problem
+            throw new CredentialInitializationException(lookupIOE.toString(),
+                lookupIOE);
+        }
+
+        LOG.debug("jwtPath {} roleARN {}", jwtPath, roleARN);
+
+        if (!StringUtils.isEmpty(jwtPath) && !StringUtils.isEmpty(roleARN)) {
+            final AWSCredentialsProvider credentialsProvider =
+                WebIdentityTokenCredentialsProvider.builder()
+                    .webIdentityTokenFile(jwtPath)

Review comment: i was just wondering how the secrets get around. for other credentials we can pick them up from the user launching, say, a distcp job, and they will get passed round. alternatively, they can go into a cluster FS like hdfs. if it works with your k8s setup, then the docs should say "mount a shared volume in your containers". support for credential propagation can be added by someone else when they needed it

Issue Time Tracking
-------------------

    Worklog Id:     (was: 745369)
    Time Spent: 2h  (was: 1h 50m)

> S3A Authentication to support WebIdentity
> ------------------------------------------
>
>                 Key: HADOOP-18154
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18154
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/s3
>    Affects Versions: 2.10.1
>            Reporter: Ju Clarysse
>            Assignee: Ju Clarysse
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 2h
>  Remaining Estimate: 0h
>
> We are using the latest version of [delta-sharing|https://github.com/delta-io/delta-sharing] which takes advantage of [hadoop-aws|https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html] (S3A) connector in [Hadoop release version 2.10.1|https://github.com/apache/hadoop/tree/rel/release-2.10.1] to mount an AWS S3 File System. In our particular setup, all services
[GitHub] [hadoop] steveloughran commented on a change in pull request #4070: HADOOP-18154. S3A Authentication to support WebIdentity
steveloughran commented on a change in pull request #4070: URL: https://github.com/apache/hadoop/pull/4070#discussion_r831444157

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/OIDCTokenCredentialsProvider.java
## @@ -0,0 +1,79 @@

+package org.apache.hadoop.fs.s3a;
+
+import org.apache.commons.lang3.StringUtils;
+import com.amazonaws.auth.AWSCredentials;
+import com.amazonaws.auth.AWSCredentialsProvider;
+import com.amazonaws.auth.WebIdentityTokenCredentialsProvider;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.ProviderUtils;
+import org.slf4j.Logger;
+
+import java.io.IOException;
+
+/**
+ * WebIdentityTokenCredentialsProvider supports static configuration
+ * of OIDC token path, role ARN and role session name.
+ *
+ */
+//@InterfaceAudience.Public
+//@InterfaceStability.Stable
+public class OIDCTokenCredentialsProvider implements AWSCredentialsProvider {
+    public static final String NAME
+        = "org.apache.hadoop.fs.s3a.OIDCTokenCredentialsProvider";
+
+    //these are the parameters to document and to pass along with the class
+    //usually from import static org.apache.hadoop.fs.s3a.Constants.*;
+    public static final String JWT_PATH = "fs.s3a.jwt.path";
+    public static final String ROLE_ARN = "fs.s3a.role.arn";
+    public static final String SESSION_NAME = "fs.s3a.session.name";
+
+    /** Reuse the S3AFileSystem log. */
+    private static final Logger LOG = S3AFileSystem.LOG;
+
+    private String jwtPath;
+    private String roleARN;
+    private String sessionName;
+    private IOException lookupIOE;
+
+    public OIDCTokenCredentialsProvider(Configuration conf) {
+        try {
+            Configuration c = ProviderUtils.excludeIncompatibleCredentialProviders(
+                conf, S3AFileSystem.class);
+            this.jwtPath = S3AUtils.lookupPassword(c, JWT_PATH, null);
+            this.roleARN = S3AUtils.lookupPassword(c, ROLE_ARN, null);
+            this.sessionName = S3AUtils.lookupPassword(c, SESSION_NAME, null);
+        } catch (IOException e) {
+            lookupIOE = e;
+        }
+    }
+
+    public AWSCredentials getCredentials() {
+        if (lookupIOE != null) {
+            // propagate any initialization problem
+            throw new CredentialInitializationException(lookupIOE.toString(),
+                lookupIOE);
+        }
+
+        LOG.debug("jwtPath {} roleARN {}", jwtPath, roleARN);
+
+        if (!StringUtils.isEmpty(jwtPath) && !StringUtils.isEmpty(roleARN)) {
+            final AWSCredentialsProvider credentialsProvider =
+                WebIdentityTokenCredentialsProvider.builder()
+                    .webIdentityTokenFile(jwtPath)

Review comment: i was just wondering how the secrets get around. for other credentials we can pick them up from the user launching, say, a distcp job, and they will get passed round. alternatively, they can go into a cluster FS like hdfs. if it works with your k8s setup, then the docs should say "mount a shared volume in your containers". support for credential propagation can be added by someone else when they needed it
[jira] [Work logged] (HADOOP-18154) S3A Authentication to support WebIdentity
[ https://issues.apache.org/jira/browse/HADOOP-18154?focusedWorklogId=745367&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-745367 ]

ASF GitHub Bot logged work on HADOOP-18154:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 21/Mar/22 18:53
            Start Date: 21/Mar/22 18:53
    Worklog Time Spent: 10m
      Work Description: hadoop-yetus removed a comment on pull request #4070:
URL: https://github.com/apache/hadoop/pull/4070#issuecomment-1068099293

Issue Time Tracking
-------------------

    Worklog Id:     (was: 745367)
    Time Spent: 1h 50m  (was: 1h 40m)

> S3A Authentication to support WebIdentity
> ------------------------------------------
>
>                 Key: HADOOP-18154
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18154
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/s3
>    Affects Versions: 2.10.1
>            Reporter: Ju Clarysse
>            Assignee: Ju Clarysse
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> We are using the latest version of [delta-sharing|https://github.com/delta-io/delta-sharing] which takes advantage of [hadoop-aws|https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html] (S3A) connector in [Hadoop release version 2.10.1|https://github.com/apache/hadoop/tree/rel/release-2.10.1] to mount an AWS S3 File System. In our particular setup, all services are operated in Amazon Elastic Kubernetes Service (EKS) and need to comply to the AWS security concept [IAM roles for service accounts|https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html] (IRSA).
> As [Delta sharing S3 connection|https://github.com/delta-io/delta-sharing#s3] doesn't offer any corresponding support, we patched hadoop-aws-2.10.1 to address this need via a new credentials provider class org.apache.hadoop.fs.s3a.OIDCTokenCredentialsProvider. We also upgraded dependency aws-java-sdk-bundle to its latest version 1.12.167 as [AWS WebIdentityTokenCredentialsProvider class|https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/WebIdentityTokenCredentialsProvider.html%E2%80%A6] was not yet available in original version 1.11.271.
> We believe that other delta-sharing users could benefit from this short-term contribution. Then sooner or later, delta-sharing owners will have to upgrade their project to a more recent version of hadoop-aws that is probably more widely used. The effort to promote this change is probably low.
> Additional note: AWS WebIdentityTokenCredentialsProvider class is directly supported by Spark applications submitted with configuration properties `spark.hadoop.fs.s3a.aws.credentials.provider` and `spark.kubernetes.authenticate.submission.oauthToken` ([doc|https://spark.apache.org/docs/latest/running-on-kubernetes.html#spark-properties]). So bringing this support to Hadoop will primarily be interesting for non-Spark users.
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #4070: HADOOP-18154. S3A Authentication to support WebIdentity
hadoop-yetus removed a comment on pull request #4070: URL: https://github.com/apache/hadoop/pull/4070#issuecomment-1068099293
[jira] [Work logged] (HADOOP-18160) `org.wildfly.openssl` should not be shaded by Hadoop build
[ https://issues.apache.org/jira/browse/HADOOP-18160?focusedWorklogId=745366&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-745366 ]

ASF GitHub Bot logged work on HADOOP-18160:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 21/Mar/22 18:52
            Start Date: 21/Mar/22 18:52
    Worklog Time Spent: 10m
      Work Description: steveloughran commented on pull request #4074:
URL: https://github.com/apache/hadoop/pull/4074#issuecomment-1074288070

raised on the list...if nobody objects i will merge (remind me if I don't)

Issue Time Tracking
-------------------

    Worklog Id:     (was: 745366)
    Time Spent: 50m  (was: 40m)

> `org.wildfly.openssl` should not be shaded by Hadoop build
> ----------------------------------------------------------
>
>                 Key: HADOOP-18160
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18160
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: build
>    Affects Versions: 3.3.1
>         Environment: hadoop 3.3.1
> spark 3.2.1
> JDK8
>            Reporter: André F.
>            Priority: Minor
>              Labels: pull-request-available
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> `org.wildfly.openssl` is a runtime library and its references are being shaded on Hadoop, breaking the integration with other frameworks like Spark, whenever the "fs.s3a.ssl.channel.mode" is set to "openssl". The error produced in this situation is:
> {code:java}
> Suppressed: java.lang.NoClassDefFoundError: org/apache/hadoop/shaded/org/wildfly/openssl/OpenSSLProvider{code}
> Whenever it tries to be instantiated from the `DelegatingSSLSocketFactory`. Spark tries to add it to its classpath without the shade, thus creating this issue.
> Dependencies which are not on "compile" scope should probably not be shaded to avoid this kind of integration issues.
[GitHub] [hadoop] steveloughran commented on pull request #4074: HADOOP-18160 Avoid shading wildfly.openssl runtime dependency
steveloughran commented on pull request #4074: URL: https://github.com/apache/hadoop/pull/4074#issuecomment-1074288070

raised on the list...if nobody objects i will merge (remind me if I don't)
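For context on the failure mode this PR addresses: shading rewrites hadoop-aws's references to the wildfly classes into the org.apache.hadoop.shaded.* namespace, while Spark supplies wildfly-openssl under its original coordinates. A small probe, purely hypothetical and for illustration, shows the resulting lookup mismatch:

```java
public class ShadedOpensslProbe {
  public static void main(String[] args) {
    // After relocation, shaded Hadoop bytecode references this name...
    String shaded = "org.apache.hadoop.shaded.org.wildfly.openssl.OpenSSLProvider";
    // ...but Spark ships wildfly-openssl under its original package:
    String original = "org.wildfly.openssl.OpenSSLProvider";
    for (String name : new String[] {shaded, original}) {
      try {
        Class.forName(name);
        System.out.println("loadable: " + name);
      } catch (ClassNotFoundException e) {
        // The shaded name is missing at runtime: the root of the
        // NoClassDefFoundError reported in HADOOP-18160.
        System.out.println("missing:  " + name);
      }
    }
  }
}
```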
[jira] [Work logged] (HADOOP-15566) Support OpenTelemetry
[ https://issues.apache.org/jira/browse/HADOOP-15566?focusedWorklogId=745363&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-745363 ]

ASF GitHub Bot logged work on HADOOP-15566:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 21/Mar/22 18:45
            Start Date: 21/Mar/22 18:45
    Worklog Time Spent: 10m
      Work Description: steveloughran commented on a change in pull request #3445:
URL: https://github.com/apache/hadoop/pull/3445#discussion_r831412903

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/Span.java
## @@ -17,28 +17,54 @@

  */
 package org.apache.hadoop.tracing;

+import io.opentelemetry.context.Scope;
+
 import java.io.Closeable;

 public class Span implements Closeable {
-
+  private io.opentelemetry.api.trace.Span span = null;

Review comment: 1. add some javadoc to line 24 to warn this wraps the opentelemetry span. 2. skip the =null assignment, as it will save the jvm from some needless init. assuming a lot of spans get created, this may matter 3. give the field a different name. like 'openspan'

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/Span.java
## @@ -17,28 +17,54 @@

  */
 package org.apache.hadoop.tracing;

+import io.opentelemetry.context.Scope;
+
 import java.io.Closeable;

 public class Span implements Closeable {
-
+  private io.opentelemetry.api.trace.Span span = null;

   public Span() {
   }

+  public Span(io.opentelemetry.api.trace.Span span){
+    this.span = span;
+  }
+
   public Span addKVAnnotation(String key, String value) {
+    if(span != null){
+      span.setAttribute(key, value);
+    }
     return this;
   }

   public Span addTimelineAnnotation(String msg) {
+    if(span != null){
+      span.addEvent(msg);
+    }
     return this;
   }

   public SpanContext getContext() {
+    if(span != null){
+      return new SpanContext(span.getSpanContext());
+    }
     return null;
   }

   public void finish() {
+    close();
   }

   public void close() {
+    if(span != null){
+      span.end();
+    }
+  }
+
+  public Scope makeCurrent() {

Review comment: nit: add javadocs

## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/tracing/TestTracer.java
## @@ -0,0 +1,33 @@

+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.tracing;
+
+import org.junit.Test;
+
+import static org.junit.Assert.*;

Review comment:
* should extend AbstractHadoopTestBase or HadoopTestBase here.
* there is scope for a lot more tests

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/Span.java
## @@ -17,28 +17,54 @@

  */
 package org.apache.hadoop.tracing;

+import io.opentelemetry.context.Scope;
+
 import java.io.Closeable;

 public class Span implements Closeable {
-
+  private io.opentelemetry.api.trace.Span span = null;

   public Span() {
   }

+  public Span(io.opentelemetry.api.trace.Span span){
+    this.span = span;
+  }
+
   public Span addKVAnnotation(String key, String value) {
+    if(span != null){
+      span.setAttribute(key, value);
+    }
     return this;
   }

   public Span addTimelineAnnotation(String msg) {
+    if(span != null){
+      span.addEvent(msg);
+    }
     return this;
   }

   public SpanContext getContext() {
+    if(span != null){
+      return new SpanContext(span.getSpanContext());
+    }
     return null;
   }

   public void finish() {
+    close();
   }

   public void close() {
+    if(span != null){
+      span.end();

Review comment: would span need to be nullified here. or is it ok to invoke it after being ended?

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/SpanContext.java
## @@ -18,15 +18,75 @@

 package org.apache.hadoop.tracing;

 import java.io.Closeable;
+import java.util.HashMap;
+import java.util.Map;
+
+import io.opentelemetry.api.trace.Span;
+import io.opentelemetry.api.trace.TraceFlags;
+import io.ope
[GitHub] [hadoop] steveloughran commented on a change in pull request #3445: HADOOP-15566 Opentelemetry changes using java agent
steveloughran commented on a change in pull request #3445: URL: https://github.com/apache/hadoop/pull/3445#discussion_r831412903

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/Span.java
## @@ -17,28 +17,54 @@

  */
 package org.apache.hadoop.tracing;

+import io.opentelemetry.context.Scope;
+
 import java.io.Closeable;

 public class Span implements Closeable {
-
+  private io.opentelemetry.api.trace.Span span = null;

Review comment: 1. add some javadoc to line 24 to warn this wraps the opentelemetry span. 2. skip the =null assignment, as it will save the jvm from some needless init. assuming a lot of spans get created, this may matter 3. give the field a different name. like 'openspan'

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/Span.java
## @@ -17,28 +17,54 @@

  */
 package org.apache.hadoop.tracing;

+import io.opentelemetry.context.Scope;
+
 import java.io.Closeable;

 public class Span implements Closeable {
-
+  private io.opentelemetry.api.trace.Span span = null;

   public Span() {
   }

+  public Span(io.opentelemetry.api.trace.Span span){
+    this.span = span;
+  }
+
   public Span addKVAnnotation(String key, String value) {
+    if(span != null){
+      span.setAttribute(key, value);
+    }
     return this;
   }

   public Span addTimelineAnnotation(String msg) {
+    if(span != null){
+      span.addEvent(msg);
+    }
     return this;
   }

   public SpanContext getContext() {
+    if(span != null){
+      return new SpanContext(span.getSpanContext());
+    }
     return null;
   }

   public void finish() {
+    close();
   }

   public void close() {
+    if(span != null){
+      span.end();
+    }
+  }
+
+  public Scope makeCurrent() {

Review comment: nit: add javadocs

## File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/tracing/TestTracer.java
## @@ -0,0 +1,33 @@

+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.tracing;
+
+import org.junit.Test;
+
+import static org.junit.Assert.*;

Review comment:
* should extend AbstractHadoopTestBase or HadoopTestBase here.
* there is scope for a lot more tests

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/Span.java
## @@ -17,28 +17,54 @@

  */
 package org.apache.hadoop.tracing;

+import io.opentelemetry.context.Scope;
+
 import java.io.Closeable;

 public class Span implements Closeable {
-
+  private io.opentelemetry.api.trace.Span span = null;

   public Span() {
   }

+  public Span(io.opentelemetry.api.trace.Span span){
+    this.span = span;
+  }
+
   public Span addKVAnnotation(String key, String value) {
+    if(span != null){
+      span.setAttribute(key, value);
+    }
     return this;
   }

   public Span addTimelineAnnotation(String msg) {
+    if(span != null){
+      span.addEvent(msg);
+    }
     return this;
   }

   public SpanContext getContext() {
+    if(span != null){
+      return new SpanContext(span.getSpanContext());
+    }
     return null;
   }

   public void finish() {
+    close();
   }

   public void close() {
+    if(span != null){
+      span.end();

Review comment: would span need to be nullified here. or is it ok to invoke it after being ended?

## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/SpanContext.java
## @@ -18,15 +18,75 @@

 package org.apache.hadoop.tracing;

 import java.io.Closeable;
+import java.util.HashMap;
+import java.util.Map;
+
+import io.opentelemetry.api.trace.Span;
+import io.opentelemetry.api.trace.TraceFlags;
+import io.opentelemetry.api.trace.TraceState;
+import io.opentelemetry.api.trace.TraceStateBuilder;
+import org.apache.hadoop.thirdparty.protobuf.ByteString;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;

 /**
  * Wrapper class for SpanContext to avoid using OpenTracing/OpenTelemetry
  * SpanContext class directly for better separation.
  */
-public class SpanContext implements Closeable {
-  public SpanContext() {
+public class
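Taken together, the review comments suggest a revised wrapper. Below is one possible shape after applying them (javadoc added, the =null initialization dropped, the field renamed, the span released on close); it is an illustration of the suggestions, not the code that was eventually merged:

```java
package org.apache.hadoop.tracing;

import io.opentelemetry.context.Scope;

import java.io.Closeable;

/**
 * Thin wrapper around an OpenTelemetry span so Hadoop code does not
 * depend on the io.opentelemetry API directly. A wrapper holding no
 * underlying span is a cheap no-op.
 */
public class Span implements Closeable {
  /** The wrapped OpenTelemetry span; null for a no-op span. */
  private io.opentelemetry.api.trace.Span openSpan;

  public Span() {
  }

  public Span(io.opentelemetry.api.trace.Span openSpan) {
    this.openSpan = openSpan;
  }

  public Span addKVAnnotation(String key, String value) {
    if (openSpan != null) {
      openSpan.setAttribute(key, value);
    }
    return this;
  }

  public Span addTimelineAnnotation(String msg) {
    if (openSpan != null) {
      openSpan.addEvent(msg);
    }
    return this;
  }

  public SpanContext getContext() {
    return openSpan != null ? new SpanContext(openSpan.getSpanContext()) : null;
  }

  public void finish() {
    close();
  }

  @Override
  public void close() {
    if (openSpan != null) {
      openSpan.end();
      openSpan = null; // guard against use after end(), per the review question
    }
  }

  /** Make this span current; the caller must close the returned scope. */
  public Scope makeCurrent() {
    return openSpan != null ? openSpan.makeCurrent() : Scope.noop();
  }
}
```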
[GitHub] [hadoop] hadoop-yetus commented on pull request #4066: YARN-11087. Introduce the config to control the refresh interval in R…
hadoop-yetus commented on pull request #4066: URL: https://github.com/apache/hadoop/pull/4066#issuecomment-1074278520

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 0m 54s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 1s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 12m 34s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 29m 18s | | trunk passed |
| +1 :green_heart: | compile | 12m 59s | | trunk passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 8m 46s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 38s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 55s | | trunk passed |
| +1 :green_heart: | javadoc | 2m 35s | | trunk passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 2m 22s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 5m 54s | | trunk passed |
| +1 :green_heart: | shadedclient | 24m 51s | | branch has no errors when building and testing our client artifacts. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 22s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 6s | | the patch passed |
| +1 :green_heart: | compile | 9m 49s | | the patch passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 9m 49s | | the patch passed |
| +1 :green_heart: | compile | 8m 44s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 8m 44s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 1m 33s | [/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4066/2/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt) | hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 164 unchanged - 1 fixed = 165 total (was 165) |
| +1 :green_heart: | mvnsite | 2m 43s | | the patch passed |
| +1 :green_heart: | xml | 0m 2s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 2m 21s | | the patch passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 2m 13s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 6m 18s | | the patch passed |
| +1 :green_heart: | shadedclient | 25m 12s | | patch has no errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 1m 3s | | hadoop-yarn-api in the patch passed. |
| +1 :green_heart: | unit | 4m 46s | | hadoop-yarn-common in the patch passed. |
| +1 :green_heart: | unit | 100m 59s | | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. |
| | | 274m 33s | | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4066/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4066 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
| uname | Linux 85a5f05a9e5e 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 06c0c2c9f612f954faaf066189e7d652d5187a9b |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/had
[jira] [Work logged] (HADOOP-15566) Support OpenTelemetry
[ https://issues.apache.org/jira/browse/HADOOP-15566?focusedWorklogId=745340&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-745340 ]

ASF GitHub Bot logged work on HADOOP-15566:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 21/Mar/22 18:16
            Start Date: 21/Mar/22 18:16
    Worklog Time Spent: 10m
      Work Description: hadoop-yetus removed a comment on pull request #3445:
URL: https://github.com/apache/hadoop/pull/3445#issuecomment-920792524

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|--------:|:-------:|:-------:|
| +0 :ok: | reexec | 1m 7s | | Docker mode activated. |
|||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
|||| _ trunk Compile Tests _ |
| +0 :ok: | mvndep | 15m 18s | | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 25m 3s | | trunk passed |
| +1 :green_heart: | compile | 24m 36s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 22m 26s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | checkstyle | 3m 58s | | trunk passed |
| +1 :green_heart: | mvnsite | 2m 2s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 29s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 2m 3s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +0 :ok: | spotbugs | 0m 31s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) |
| +1 :green_heart: | shadedclient | 36m 43s | | branch has no errors when building and testing our client artifacts. |
| -0 :warning: | patch | 37m 11s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. |
|||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 1m 25s | | the patch passed |
| +1 :green_heart: | compile | 27m 8s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 27m 8s | | the patch passed |
| +1 :green_heart: | compile | 23m 21s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +1 :green_heart: | javac | 23m 21s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| -0 :warning: | checkstyle | 4m 13s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/1/artifact/out/results-checkstyle-root.txt) | root: The patch generated 13 new + 3 unchanged - 2 fixed = 16 total (was 5) |
| +1 :green_heart: | mvnsite | 2m 13s | | the patch passed |
| +1 :green_heart: | shellcheck | 0m 0s | | No new issues. |
| +1 :green_heart: | xml | 0m 4s | | The patch has no ill-formed XML file. |
| +1 :green_heart: | javadoc | 1m 47s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 2m 22s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| +0 :ok: | spotbugs | 0m 35s | | hadoop-project has no data from spotbugs |
| -1 :x: | spotbugs | 3m 2s | [/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/1/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html) | hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| -1 :x: | shadedclient | 53m 24s | | patch has errors when building and testing our client artifacts. |
|||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 35s | | hadoop-project in the patch passed. |
| +1 :green_heart: | unit | 18m 26s | | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 1m 5s | | The patch does not generate ASF License warnings. |
| | | 278m 22s | | |

| Reason | Tests |
|-------:|:------|
| SpotBugs | module:hadoop-common-project/hadoop-common |
| | Unread field:Tracer.java:[line 87] |

| Subsystem | Rep
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3445: HADOOP-15566 Opentelemetry changes using java agent
hadoop-yetus removed a comment on pull request #3445: URL: https://github.com/apache/hadoop/pull/3445#issuecomment-920792524 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 7s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | shelldocs | 0m 0s | | Shelldocs was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 15m 18s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 25m 3s | | trunk passed | | +1 :green_heart: | compile | 24m 36s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 22m 26s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 3m 58s | | trunk passed | | +1 :green_heart: | mvnsite | 2m 2s | | trunk passed | | +1 :green_heart: | javadoc | 1m 29s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 3s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +0 :ok: | spotbugs | 0m 31s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 36m 43s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 37m 11s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 25s | | the patch passed | | +1 :green_heart: | compile | 27m 8s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 27m 8s | | the patch passed | | +1 :green_heart: | compile | 23m 21s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | javac | 23m 21s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | -0 :warning: | checkstyle | 4m 13s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/1/artifact/out/results-checkstyle-root.txt) | root: The patch generated 13 new + 3 unchanged - 2 fixed = 16 total (was 5) | | +1 :green_heart: | mvnsite | 2m 13s | | the patch passed | | +1 :green_heart: | shellcheck | 0m 0s | | No new issues. | | +1 :green_heart: | xml | 0m 4s | | The patch has no ill-formed XML file. 
| | +1 :green_heart: | javadoc | 1m 47s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 2m 22s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +0 :ok: | spotbugs | 0m 35s | | hadoop-project has no data from spotbugs | | -1 :x: | spotbugs | 3m 2s | [/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/1/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html) | hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | | -1 :x: | shadedclient | 53m 24s | | patch has errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 0m 35s | | hadoop-project in the patch passed. | | +1 :green_heart: | unit | 18m 26s | | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 1m 5s | | The patch does not generate ASF License warnings. | | | | 278m 22s | | | | Reason | Tests | |---:|:--| | SpotBugs | module:hadoop-common-project/hadoop-common | | | Unread field:Tracer.java:[line 87] | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/3445 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell xml shellcheck shelldocs spotbugs checkstyle | | uname | Linux 7f0ca
[jira] [Work logged] (HADOOP-15566) Support OpenTelemetry
[ https://issues.apache.org/jira/browse/HADOOP-15566?focusedWorklogId=745338&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-745338 ] ASF GitHub Bot logged work on HADOOP-15566: --- Author: ASF GitHub Bot Created on: 21/Mar/22 18:14 Start Date: 21/Mar/22 18:14 Worklog Time Spent: 10m Work Description: hadoop-yetus removed a comment on pull request #3445: URL: https://github.com/apache/hadoop/pull/3445#issuecomment-979460747 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 19s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | shelldocs | 0m 1s | | Shelldocs was not available. | | +0 :ok: | buf | 0m 1s | | buf was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 50s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 29m 2s | | trunk passed | | +1 :green_heart: | compile | 30m 6s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 25m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 4m 10s | | trunk passed | | +1 :green_heart: | mvnsite | 5m 14s | | trunk passed | | +1 :green_heart: | javadoc | 3m 58s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 4m 58s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +0 :ok: | spotbugs | 0m 31s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +0 :ok: | spotbugs | 0m 29s | | branch/hadoop-tools/hadoop-tools-dist no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 22m 56s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 23m 18s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 35s | | the patch passed | | +1 :green_heart: | compile | 22m 56s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | cc | 22m 56s | [/results-compile-cc-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/6/artifact/out/results-compile-cc-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 37 new + 286 unchanged - 37 fixed = 323 total (was 323) | | -1 :x: | javac | 22m 56s | [/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/6/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 2 new + 1931 unchanged - 0 fixed = 1933 total (was 1931) | | +1 :green_heart: | compile | 20m 13s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | cc | 20m 13s | [/results-compile-cc-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/6/artifact/out/results-compile-cc-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 26 new + 297 unchanged - 26 fixed = 323 total (was 323) | | -1 :x: | javac | 20m 13s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/6/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 2 new + 1805 unchanged - 0 fixed = 1807 total (was 1805) | | -1 :x: | bla
[jira] [Work logged] (HADOOP-15566) Support OpenTelemetry
[ https://issues.apache.org/jira/browse/HADOOP-15566?focusedWorklogId=745339&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-745339 ] ASF GitHub Bot logged work on HADOOP-15566: --- Author: ASF GitHub Bot Created on: 21/Mar/22 18:14 Start Date: 21/Mar/22 18:14 Worklog Time Spent: 10m Work Description: hadoop-yetus removed a comment on pull request #3445: URL: https://github.com/apache/hadoop/pull/3445#issuecomment-977877431 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 745339) Time Spent: 9h 50m (was: 9h 40m) > Support OpenTelemetry > - > > Key: HADOOP-15566 > URL: https://issues.apache.org/jira/browse/HADOOP-15566 > Project: Hadoop Common > Issue Type: New Feature > Components: metrics, tracing >Affects Versions: 3.1.0 >Reporter: Todd Lipcon >Assignee: Siyao Meng >Priority: Major > Labels: pull-request-available, security > Attachments: HADOOP-15566-WIP.1.patch, HADOOP-15566.000.WIP.patch, > OpenTelemetry Support Scope Doc v2.pdf, OpenTracing Support Scope Doc.pdf, > Screen Shot 2018-06-29 at 11.59.16 AM.png, ss-trace-s3a.png > > Time Spent: 9h 50m > Remaining Estimate: 0h > > The HTrace incubator project has voted to retire itself and won't be making > further releases. The Hadoop project currently has various hooks with HTrace. > It seems in some cases (eg HDFS-13702) these hooks have had measurable > performance overhead. Given these two factors, I think we should consider > removing the HTrace integration. If there is someone willing to do the work, > replacing it with OpenTracing might be a better choice since there is an > active community. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
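For context on what the migration involves: with OpenTelemetry, a span around a traced operation is created through the io.opentelemetry.api entry points. A minimal sketch, assuming the opentelemetry-api artifact is on the classpath; the instrumentation scope and span names below are illustrative, not taken from PR #3445:

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class OtelSpanSketch {
  public static void main(String[] args) {
    // With the OpenTelemetry java agent attached (the approach in the PR
    // title), GlobalOpenTelemetry is auto-configured; without the agent or
    // SDK this returns a no-op tracer and the code still runs safely.
    Tracer tracer = GlobalOpenTelemetry.getTracer("org.apache.hadoop.example");
    Span span = tracer.spanBuilder("dfs.read").startSpan();
    try (Scope ignored = span.makeCurrent()) {
      // ... the operation being traced, e.g. an HDFS block read ...
    } finally {
      span.end(); // always end the span, even on failure
    }
  }
}
```

The agent-based approach also injects instrumentation for common libraries at class-load time, so hand-written spans like this would mostly be needed for Hadoop-specific operations.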
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3445: HADOOP-15566 Opentelemetry changes using java agent
hadoop-yetus removed a comment on pull request #3445: URL: https://github.com/apache/hadoop/pull/3445#issuecomment-977877431 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3445: HADOOP-15566 Opentelemetry changes using java agent
hadoop-yetus removed a comment on pull request #3445: URL: https://github.com/apache/hadoop/pull/3445#issuecomment-979460747 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 19s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +0 :ok: | shelldocs | 0m 1s | | Shelldocs was not available. | | +0 :ok: | buf | 0m 1s | | buf was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 50s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 29m 2s | | trunk passed | | +1 :green_heart: | compile | 30m 6s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 25m 30s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +1 :green_heart: | checkstyle | 4m 10s | | trunk passed | | +1 :green_heart: | mvnsite | 5m 14s | | trunk passed | | +1 :green_heart: | javadoc | 3m 58s | | trunk passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 4m 58s | | trunk passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | +0 :ok: | spotbugs | 0m 31s | | branch/hadoop-project no spotbugs output file (spotbugsXml.xml) | | +0 :ok: | spotbugs | 0m 29s | | branch/hadoop-tools/hadoop-tools-dist no spotbugs output file (spotbugsXml.xml) | | +1 :green_heart: | shadedclient | 22m 56s | | branch has no errors when building and testing our client artifacts. | | -0 :warning: | patch | 23m 18s | | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. 
| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 30s | | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 3m 35s | | the patch passed | | +1 :green_heart: | compile | 22m 56s | | the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | cc | 22m 56s | [/results-compile-cc-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/6/artifact/out/results-compile-cc-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 37 new + 286 unchanged - 37 fixed = 323 total (was 323) | | -1 :x: | javac | 22m 56s | [/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/6/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt) | root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 2 new + 1931 unchanged - 0 fixed = 1933 total (was 1931) | | +1 :green_heart: | compile | 20m 13s | | the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 | | -1 :x: | cc | 20m 13s | [/results-compile-cc-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/6/artifact/out/results-compile-cc-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 26 new + 297 unchanged - 26 fixed = 323 total (was 323) | | -1 :x: | javac | 20m 13s | [/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/6/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt) | root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 2 new + 1805 unchanged - 0 fixed = 1807 total (was 1805) | | -1 :x: | blanks | 0m 0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/6/artifact/out/blanks-eol.txt) | The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply | | -0 :warning: | checkstyle | 3m 55s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/6/artifact/out/results-checksty
[jira] [Work logged] (HADOOP-17428) ABFS: Implementation for getContentSummary
[ https://issues.apache.org/jira/browse/HADOOP-17428?focusedWorklogId=745336&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-745336 ] ASF GitHub Bot logged work on HADOOP-17428: --- Author: ASF GitHub Bot Created on: 21/Mar/22 18:13 Start Date: 21/Mar/22 18:13 Worklog Time Spent: 10m Work Description: steveloughran commented on pull request #2549: URL: https://github.com/apache/hadoop/pull/2549#issuecomment-1074248757 Afraid my manifest committer changes have broken this. Can you rebase so we can get this in? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 745336) Time Spent: 3h 40m (was: 3.5h) > ABFS: Implementation for getContentSummary > -- > > Key: HADOOP-17428 > URL: https://issues.apache.org/jira/browse/HADOOP-17428 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.0 >Reporter: Sumangala Patki >Assignee: Sumangala Patki >Priority: Major > Labels: pull-request-available > Time Spent: 3h 40m > Remaining Estimate: 0h > > Adds an implementation of the HDFS method getContentSummary, which takes a Path argument and returns details such as the file/directory count and the space utilized under that path. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
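Callers use the same FileSystem API regardless of the backing store, so the ABFS implementation is exercised transparently. A minimal client-side sketch, assuming fs.defaultFS points at the target cluster; the path is a hypothetical example:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ContentSummaryExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // Aggregates counts and usage for everything under the given path.
    ContentSummary summary = fs.getContentSummary(new Path("/user/example"));
    System.out.println("files: " + summary.getFileCount()
        + ", directories: " + summary.getDirectoryCount()
        + ", bytes consumed: " + summary.getSpaceConsumed());
  }
}
```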
[GitHub] [hadoop] steveloughran commented on pull request #2549: HADOOP-17428. ABFS: Implementation for getContentSummary
steveloughran commented on pull request #2549: URL: https://github.com/apache/hadoop/pull/2549#issuecomment-1074248757 Afraid my manifest committer changes have broken this. Can you rebase so we can get this in? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #4077: HDFS-16509. Fix decommission UnsupportedOperationException
hadoop-yetus commented on pull request #4077: URL: https://github.com/apache/hadoop/pull/4077#issuecomment-1074209141 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 34s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 50s | | trunk passed | | +1 :green_heart: | compile | 1m 35s | | trunk passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 26s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 4s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 34s | | trunk passed | | +1 :green_heart: | javadoc | 1m 8s | | trunk passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 36s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 28s | | trunk passed | | +1 :green_heart: | shadedclient | 23m 6s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 21s | | the patch passed | | +1 :green_heart: | compile | 1m 21s | | the patch passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 21s | | the patch passed | | +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 1m 15s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 0m 53s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 26s | | the patch passed | | +1 :green_heart: | javadoc | 0m 56s | | the patch passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 26s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 31s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 43s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 243m 15s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 45s | | The patch does not generate ASF License warnings. 
| | | | 346m 52s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4077/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4077 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 28536d259809 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 5aa5b15c1212863f602632793c89c189d6074df7 | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4077/2/testReport/ | | Max. process+thread count | 3214 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4077/2/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about th
[jira] [Updated] (HADOOP-16254) Add proxy address in IPC connection
[ https://issues.apache.org/jira/browse/HADOOP-16254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Owen O'Malley updated HADOOP-16254: --- Resolution: Duplicate Status: Resolved (was: Patch Available) This has been fixed by using the CallerContext. > Add proxy address in IPC connection > --- > > Key: HADOOP-16254 > URL: https://issues.apache.org/jira/browse/HADOOP-16254 > Project: Hadoop Common > Issue Type: New Feature > Components: ipc >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Blocker > Attachments: HADOOP-16254.001.patch, HADOOP-16254.002.patch, HADOOP-16254.004.patch, HADOOP-16254.005.patch, HADOOP-16254.006.patch, HADOOP-16254.007.patch > > > In order to support data locality in RBF, we need to add a new field for the client hostname in the RPC headers of Router protocol calls. clientHostname represents the hostname of the client and is forwarded by the Router to the Namenode to support data locality. See the [RBF Data Locality Design|https://issues.apache.org/jira/secure/attachment/12965092/RBF%20Data%20Locality%20Design.pdf] in HDFS-13248 and the [maillist vote|http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201904.mbox/%3CCAF3Ajax7hGxvowg4K_HVTZeDqC5H=3bfb7mv5sz5mgvadhv...@mail.gmail.com%3E]. -- This message was sent by Atlassian Jira (v8.20.1#820001) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
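For reference, CallerContext (org.apache.hadoop.ipc.CallerContext) is a thread-local tag carried in the RPC header, which is how a proxy such as the RBF Router can pass the real client's identity to the Namenode without adding a new protocol field. A minimal sketch; the key/value format shown is illustrative, not the exact format RBF uses:

```java
import org.apache.hadoop.ipc.CallerContext;

public class CallerContextSketch {
  public static void main(String[] args) {
    // Tag the current thread; subsequent IPC calls made from this thread
    // carry the context in their RPC headers, where the server can log it
    // or use it for locality decisions.
    CallerContext context =
        new CallerContext.Builder("clientIp:192.168.1.10").build();
    CallerContext.setCurrent(context);

    System.out.println(CallerContext.getCurrent().getContext());
  }
}
```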
[GitHub] [hadoop] hadoop-yetus commented on pull request #4078: HDFS-16510. Fix EC decommission when rack is not enough
hadoop-yetus commented on pull request #4078: URL: https://github.com/apache/hadoop/pull/4078#issuecomment-1074208100 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 1m 14s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 1s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 34m 43s | | trunk passed | | +1 :green_heart: | compile | 1m 34s | | trunk passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | compile | 1m 27s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | checkstyle | 1m 6s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 35s | | trunk passed | | +1 :green_heart: | javadoc | 1m 8s | | trunk passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 33s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 25s | | trunk passed | | +1 :green_heart: | shadedclient | 22m 49s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 25s | | the patch passed | | +1 :green_heart: | compile | 1m 24s | | the patch passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javac | 1m 24s | | the patch passed | | +1 :green_heart: | compile | 1m 17s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | javac | 1m 17s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | checkstyle | 1m 0s | | the patch passed | | +1 :green_heart: | mvnsite | 1m 25s | | the patch passed | | +1 :green_heart: | javadoc | 0m 55s | | the patch passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 | | +1 :green_heart: | javadoc | 1m 30s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | +1 :green_heart: | spotbugs | 3m 29s | | the patch passed | | +1 :green_heart: | shadedclient | 22m 28s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | unit | 242m 25s | | hadoop-hdfs in the patch passed. | | +1 :green_heart: | asflicense | 0m 45s | | The patch does not generate ASF License warnings. 
| | | | 346m 32s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4078/5/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4078 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell | | uname | Linux 3c3ca0354cfe 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / 6675d6f7d8e9ec22611378770ac5b7be0075c2bb | | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 | | Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4078/5/testReport/ | | Max. process+thread count | 2889 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4078/5/console | | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org --
[GitHub] [hadoop] hadoop-yetus commented on pull request #4081: HDFS-13248: Namenode needs to use the actual client IP when going through RBF proxy.
hadoop-yetus commented on pull request #4081: URL: https://github.com/apache/hadoop/pull/4081#issuecomment-1074128937 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 0m 0s | | Docker mode activated. | | -1 :x: | patch | 0m 19s | | https://github.com/apache/hadoop/pull/4081 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hadoop/pull/4081 | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4081/9/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] omalley closed pull request #4081: HDFS-13248: Namenode needs to use the actual client IP when going through RBF proxy.
omalley closed pull request #4081: URL: https://github.com/apache/hadoop/pull/4081 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] 9uapaw commented on a change in pull request #4060: YARN-11084. Introduce new config to specify AM default node-label whe…
9uapaw commented on a change in pull request #4060: URL: https://github.com/apache/hadoop/pull/4060#discussion_r831268938 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeLabel.md ## @@ -170,6 +170,14 @@ Applications can use following Java APIs to specify node label to request * `ResourceRequest.setNodeLabelExpression(..)` to set node label expression for individual resource requests. This can overwrite node label expression set in ApplicationSubmissionContext * Specify `setAMContainerResourceRequest.setNodeLabelExpression` in `ApplicationSubmissionContext` to indicate expected node label for application master container. +__Default AM node-label Configuration__ + +Property | Value +- | -- +yarn.resourcemanager.node-labels.am.default-node-label-expression | Besides when ApplicationMaster of application is not specified any node-labels, then the configuration will attach the default node-label to AM. The default of this config is disabled. + Review comment: Could you rephrase it to: "Overwrites default-node-label-expression only for the ApplicationMaster container. It is disabled by default." -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
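The two Java APIs quoted from NodeLabel.md above look roughly like this in client code. A minimal sketch, assuming a node label named gpu exists in the cluster; obtaining the ApplicationSubmissionContext (normally via YarnClientApplication) is omitted:

```java
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class AmNodeLabelSketch {
  static void requestGpuLabel(ApplicationSubmissionContext ctx) {
    // Label every container of the application, AM included; individual
    // ResourceRequests may overwrite this expression.
    ctx.setNodeLabelExpression("gpu");

    // Or label only the AM container through its resource request.
    ResourceRequest amRequest = ResourceRequest.newInstance(
        Priority.newInstance(0),       // AM request priority
        ResourceRequest.ANY,           // no host preference
        Resource.newInstance(1024, 1), // 1 GiB, 1 vcore
        1);                            // one AM container
    amRequest.setNodeLabelExpression("gpu");
    ctx.setAMContainerResourceRequest(amRequest);
  }
}
```

The configuration under review in this thread supplies a cluster-side default for the AM container when an application sets neither of these.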
[GitHub] [hadoop] hadoop-yetus commented on pull request #4090: HDFS-16516. Fix Fsshell wrong params
hadoop-yetus commented on pull request #4090: URL: https://github.com/apache/hadoop/pull/4090#issuecomment-1074003499 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 16m 49s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +0 :ok: | markdownlint | 0m 0s | | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 36m 25s | | trunk passed | | +1 :green_heart: | mvnsite | 1m 29s | | trunk passed | | +1 :green_heart: | shadedclient | 60m 40s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 1m 1s | | the patch passed | | +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. | | +1 :green_heart: | mvnsite | 1m 19s | | the patch passed | | +1 :green_heart: | shadedclient | 23m 50s | | patch has no errors when building and testing our client artifacts. | _ Other Tests _ | | +1 :green_heart: | asflicense | 0m 30s | | The patch does not generate ASF License warnings. | | | | 104m 13s | | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4090/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/4090 | | Optional Tests | dupname asflicense mvnsite codespell markdownlint | | uname | Linux 78394158b723 4.15.0-162-generic #170-Ubuntu SMP Mon Oct 18 11:38:05 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | dev-support/bin/hadoop.sh | | git revision | trunk / e0ee53e7aa1b129a7639d72c6a8e392c35205924 | | Max. process+thread count | 528 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4090/1/console | | versions | git=2.25.1 maven=3.6.3 | | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org | This message was automatically generated. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] zuston commented on pull request #4060: YARN-11084. Introduce new config to specify AM default node-label whe…
zuston commented on pull request #4060: URL: https://github.com/apache/hadoop/pull/4060#issuecomment-1073984190 Updated. Could you help review again? @9uapaw -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on pull request #4088: HDFS-16514. Reduce the failover sleep time if multiple namenode are c…
hadoop-yetus commented on pull request #4088: URL: https://github.com/apache/hadoop/pull/4088#issuecomment-1073958681 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Logfile | Comment | |::|--:|:|::|:---:| | +0 :ok: | reexec | 12m 52s | | Docker mode activated. | _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. | | +0 :ok: | codespell | 0m 0s | | codespell was not available. | | +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | _ trunk Compile Tests _ | | +0 :ok: | mvndep | 12m 28s | | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 29m 14s | | trunk passed | | +1 :green_heart: | compile | 31m 9s | | trunk passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 | | -1 :x: | compile | 21m 46s | [/branch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4088/1/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt) | root in trunk failed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07. | | -0 :warning: | checkstyle | 0m 39s | [/buildtool-branch-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4088/1/artifact/out/buildtool-branch-checkstyle-root.txt) | The patch fails to run checkstyle in root | | -1 :x: | mvnsite | 0m 38s | [/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4088/1/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt) | hadoop-common in trunk failed. | | -1 :x: | mvnsite | 0m 39s | [/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4088/1/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-client.txt) | hadoop-hdfs-client in trunk failed. | | -1 :x: | javadoc | 0m 39s | [/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4088/1/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-common in trunk failed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | javadoc | 0m 39s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4088/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04.txt) | hadoop-hdfs-client in trunk failed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04. | | -1 :x: | javadoc | 0m 37s | [/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4088/1/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt) | hadoop-common in trunk failed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07. 
| | -1 :x: | javadoc | 0m 38s | [/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4088/1/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt) | hadoop-hdfs-client in trunk failed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07. | | -1 :x: | spotbugs | 0m 38s | [/branch-spotbugs-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4088/1/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common.txt) | hadoop-common in trunk failed. | | -1 :x: | spotbugs | 0m 39s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4088/1/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt) | hadoop-hdfs-client in trunk failed. | | +1 :green_heart: | shadedclient | 7m 24s | | branch has no errors when building and testing our client artifacts. | _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 21s | | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 22s | [/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org
[GitHub] [hadoop] zuston commented on a change in pull request #4060: YARN-11084. Introduce new config to specify AM default node-label whe…
zuston commented on a change in pull request #4060: URL: https://github.com/apache/hadoop/pull/4060#discussion_r831169268 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml ## @@ -3440,6 +3440,15 @@ 180 + + +When AM of app is not specified any node-labels and this configuration will +attach the default node-label to AM, which the default of config is disabled. Review comment: Got it. I will put the detailed description in NodeLabel.md. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] 9uapaw commented on a change in pull request #4060: YARN-11084. Introduce new config to specify AM default node-label whe…
9uapaw commented on a change in pull request #4060: URL: https://github.com/apache/hadoop/pull/4060#discussion_r831166584 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml ## @@ -3440,6 +3440,15 @@ 180 + + +When AM of app is not specified any node-labels and this configuration will +attach the default node-label to AM, which the default of config is disabled. Review comment: Yes, I did mean that. You can set a detailed description in the markdown files, in this case: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/NodeLabel.md -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] 9uapaw commented on pull request #4060: YARN-11084. Introduce new config to specify AM default node-label whe…
9uapaw commented on pull request #4060: URL: https://github.com/apache/hadoop/pull/4060#issuecomment-1073954813 Also, could you address the checkstyle issues as well please? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] zuston commented on a change in pull request #4060: YARN-11084. Introduce new config to specify AM default node-label whe…
zuston commented on a change in pull request #4060: URL: https://github.com/apache/hadoop/pull/4060#discussion_r831164915 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml ## @@ -3440,6 +3440,15 @@ 180 + + +When AM of app is not specified any node-labels and this configuration will +attach the default node-label to AM, which the default of config is disabled. Review comment: Sorry, I know how to do it. Just need to remove the value. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] zuston commented on a change in pull request #4060: YARN-11084. Introduce new config to specify AM default node-label whe…
zuston commented on a change in pull request #4060: URL: https://github.com/apache/hadoop/pull/4060#discussion_r831163824 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml ## @@ -3440,6 +3440,15 @@ 180 + + +When AM of app is not specified any node-labels and this configuration will +attach the default node-label to AM, which the default of config is disabled. Review comment: Do you mean that it should not be specified in yarn-default.xml? If not, where should the detailed config description be added? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] 9uapaw commented on a change in pull request #4060: YARN-11084. Introduce new config to specify AM default node-label whe…
9uapaw commented on a change in pull request #4060: URL: https://github.com/apache/hadoop/pull/4060#discussion_r831157934 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml ## @@ -3440,6 +3440,15 @@ 180 + + +When AM of app is not specified any node-labels and this configuration will +attach the default node-label to AM, which the default of config is disabled. Review comment: I think it is superfluous to specify an empty value in yarn-default.xml. It is null by default if you do not set a default value for it. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] zuston edited a comment on pull request #4060: YARN-11084. Introduce new config to specify AM default node-label whe…
zuston edited a comment on pull request #4060: URL: https://github.com/apache/hadoop/pull/4060#issuecomment-1073944578 Gentle ping @9uapaw @szilard-nemeth. Thanks! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] zuston commented on pull request #4060: YARN-11084. Introduce new config to specify AM default node-label whe…
zuston commented on pull request #4060: URL: https://github.com/apache/hadoop/pull/4060#issuecomment-1073944578 Gentle ping @9uapaw. Thanks! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] zuston commented on pull request #4066: YARN-11087. Introduce the config to control the refresh interval in R…
zuston commented on pull request #4066: URL: https://github.com/apache/hadoop/pull/4066#issuecomment-1073942121 Done. Could you help check again? @9uapaw -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] zuston commented on a change in pull request #4066: YARN-11087. Introduce the config to control the refresh interval in R…
zuston commented on a change in pull request #4066: URL: https://github.com/apache/hadoop/pull/4066#discussion_r831147114 ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml ## @@ -3431,15 +3431,27 @@ When "yarn.node-labels.configuration-type" is configured with -"delegated-centralized", then periodically node labels are retrieved -from the node labels provider. This configuration is to define the -interval. If -1 is configured then node labels are retrieved from -provider only once for each node after it registers. Defaults to 30 mins. +"delegated-centralized", then periodically node labels of cluster +all nodes are retrieved from the node labels provider. This +configuration is to define the interval. If -1 is configured then +node labels are retrieved from provider only once for each node +after it registers. Defaults to 30 mins. yarn.resourcemanager.node-labels.provider.fetch-interval-ms 180 + + + When "yarn.node-labels.configuration-type" is configured with + "delegated-centralized", then periodically node labels from newly + registered nodes are retrieved from the node labels provider. + Defaults to 30 secs. Review comment: Done ## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml ## @@ -3431,15 +3431,27 @@ When "yarn.node-labels.configuration-type" is configured with -"delegated-centralized", then periodically node labels are retrieved -from the node labels provider. This configuration is to define the -interval. If -1 is configured then node labels are retrieved from -provider only once for each node after it registers. Defaults to 30 mins. +"delegated-centralized", then periodically node labels of cluster +all nodes are retrieved from the node labels provider. This Review comment: Done -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
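For reference, the existing interval key quoted above can be set like any other YARN property; a minimal sketch (the new per-registration interval key proposed in YARN-11087 is not shown because its exact name does not appear in this excerpt):

```java
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class NodeLabelsFetchIntervalSketch {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    conf.set("yarn.node-labels.configuration-type", "delegated-centralized");
    // Periodic refresh interval for all registered nodes; -1 means fetch
    // labels only once per node, at registration time.
    conf.setLong(
        "yarn.resourcemanager.node-labels.provider.fetch-interval-ms",
        30L * 60 * 1000); // 30 minutes, the documented default
    System.out.println(conf.get(
        "yarn.resourcemanager.node-labels.provider.fetch-interval-ms"));
  }
}
```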
[GitHub] [hadoop] 9uapaw commented on pull request #4021: YARN-10565. Refactor CS queue initialization to simplify weight mode …
9uapaw commented on pull request #4021: URL: https://github.com/apache/hadoop/pull/4021#issuecomment-1073929147 Thank you for the change @brumi1024, committed to trunk. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] 9uapaw closed pull request #4021: YARN-10565. Refactor CS queue initialization to simplify weight mode …
9uapaw closed pull request #4021: URL: https://github.com/apache/hadoop/pull/4021 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] szilard-nemeth closed pull request #4072: YARN-11089. Fix typo in rm audit log
szilard-nemeth closed pull request #4072: URL: https://github.com/apache/hadoop/pull/4072 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] szilard-nemeth closed pull request #4065: YARN-11086. Add space in debug log of ParentQueue
szilard-nemeth closed pull request #4065: URL: https://github.com/apache/hadoop/pull/4065
[GitHub] [hadoop] szilard-nemeth commented on pull request #4065: YARN-11086. Add space in debug log of ParentQueue
szilard-nemeth commented on pull request #4065: URL: https://github.com/apache/hadoop/pull/4065#issuecomment-1073907958 Thanks @zuston for working on this. Patch LGTM, committed to trunk. Thanks @9uapaw for the review.
[GitHub] [hadoop] 9uapaw commented on a change in pull request #4066: YARN-11087. Introduce the config to control the refresh interval in R…
9uapaw commented on a change in pull request #4066: URL: https://github.com/apache/hadoop/pull/4066#discussion_r831100127

## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml

```diff
@@ -3431,15 +3431,27 @@
     When "yarn.node-labels.configuration-type" is configured with
-    "delegated-centralized", then periodically node labels are retrieved
-    from the node labels provider. This configuration is to define the
-    interval. If -1 is configured then node labels are retrieved from
-    provider only once for each node after it registers. Defaults to 30 mins.
+    "delegated-centralized", then periodically node labels of cluster
+    all nodes are retrieved from the node labels provider. This
```

Review comment: Same here.

## File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml

```diff
@@ -3431,15 +3431,27 @@
     When "yarn.node-labels.configuration-type" is configured with
-    "delegated-centralized", then periodically node labels are retrieved
-    from the node labels provider. This configuration is to define the
-    interval. If -1 is configured then node labels are retrieved from
-    provider only once for each node after it registers. Defaults to 30 mins.
+    "delegated-centralized", then periodically node labels of cluster
+    all nodes are retrieved from the node labels provider. This
+    configuration is to define the interval. If -1 is configured then
+    node labels are retrieved from provider only once for each node
+    after it registers. Defaults to 30 mins.
     yarn.resourcemanager.node-labels.provider.fetch-interval-ms
     180
+
+
+    When "yarn.node-labels.configuration-type" is configured with
+    "delegated-centralized", then periodically node labels from newly
+    registered nodes are retrieved from the node labels provider.
+    Defaults to 30 secs.
```

Review comment: Could you rephrase it as: "When "yarn.node-labels.configuration-type" is configured with "delegated-centralized", then node labels of newly registered nodes are updated by periodically retrieving node labels from the provider. Defaults to 30 secs."
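For context, here is a minimal yarn-site.xml sketch of the two intervals under discussion. Only the first property name appears in the quoted hunks; the second key is a hypothetical placeholder, since the name actually added by YARN-11087 is cut off in the diff, and the values simply restate the documented defaults of 30 mins and 30 secs in milliseconds.

```xml
<!-- Sketch only. The first key is quoted in the diff above; the second
     key is a hypothetical stand-in for the one added by YARN-11087. -->
<property>
  <!-- "delegated-centralized" mode: refresh labels of all cluster nodes
       at this interval; -1 means fetch once per node at registration. -->
  <name>yarn.resourcemanager.node-labels.provider.fetch-interval-ms</name>
  <value>1800000</value> <!-- 30 mins -->
</property>
<property>
  <!-- Hypothetical key: refresh interval for newly registered nodes. -->
  <name>yarn.resourcemanager.node-labels.provider.newly-registered.fetch-interval-ms</name>
  <value>30000</value> <!-- 30 secs -->
</property>
```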
[GitHub] [hadoop] szilard-nemeth commented on pull request #4072: YARN-11089. Fix typo in rm audit log
szilard-nemeth commented on pull request #4072: URL: https://github.com/apache/hadoop/pull/4072#issuecomment-1073886387 Thanks @zuston for working on this. Patch LGTM, committed to trunk.
[GitHub] [hadoop] GuoPhilipse opened a new pull request #4090: HDFS-16516. Fix Fsshell wrong params
GuoPhilipse opened a new pull request #4090: URL: https://github.com/apache/hadoop/pull/4090 Fix wrong param name in FileSystemShell
[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #4086: HDFS-16471. Make HDFS ls tool cross platform
hadoop-yetus removed a comment on pull request #4086: URL: https://github.com/apache/hadoop/pull/4086#issuecomment-1073237489
[GitHub] [hadoop] GuoPhilipse opened a new pull request #4089: HDFS-16515. Improve ec exception message
GuoPhilipse opened a new pull request #4089: URL: https://github.com/apache/hadoop/pull/4089 Currently, if we set an erasure coding policy for a file, it only shows the following message, which is not that clear to the user; we can improve it. `RemoteException: Attempt to set an erasure coding policy for a file /ns-test1/test20211026/testec`
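As a purely illustrative sketch of the kind of improvement meant here (the class and method below are made up; this is not the HDFS-16515 patch), the rejection could name the path and state the rule:

```java
import java.io.IOException;

/** Illustrative only; not the actual FSNamesystem/HDFS-16515 code. */
final class EcPolicyCheck {
  /** Reject setting an EC policy on a file with a message that says why. */
  static void checkErasureCodingTarget(String src, boolean isDirectory)
      throws IOException {
    if (!isDirectory) {
      // Name the path and state the rule instead of only echoing the attempt.
      throw new IOException("Attempt to set an erasure coding policy for a "
          + "file: " + src + ". Erasure coding policies can only be set on "
          + "directories.");
    }
  }
}
```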
[jira] [Commented] (HADOOP-18161) [WASB] Retry not getting implemented when using wasb scheme in hadoop-azure 2.7.4
[ https://issues.apache.org/jira/browse/HADOOP-18161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17509811#comment-17509811 ]

Aryan commented on HADOOP-18161:
--------------------------------

[~ste...@apache.org] Do we have IO retry currently implemented for the WASB protocol, as we have for the ABFS protocol in hadoop-azure? I came across this doc: [https://hadoop.apache.org/docs/stable/hadoop-azure/abfs.html], which lists retry-related properties for the ABFS protocol, but I didn't find any such doc for the WASB protocol. Can you please confirm?

> [WASB] Retry not getting implemented when using wasb scheme in hadoop-azure 2.7.4
> ---------------------------------------------------------------------------------
>
>                 Key: HADOOP-18161
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18161
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/azure
>    Affects Versions: 2.7.4
>            Reporter: Aryan
>            Priority: Minor
>
> I am using prestodb to read data from blob.
> Presto is using the hadoop-azure-2.7.4 jar.
> I'm using the *wasb* scheme to query the data on blob. I'm afraid that for some reason the hadoop-azure library is not retrying when getting an IO exception.
> Attaching the stack trace below:
> {code:java}
> com.facebook.presto.spi.PrestoException: Error reading from wasb://oemdpv3prd...@oemdpv3prd.blob.core.windows.net/data/pipelines/hudi/kafka/telemetrics_v2/dp.hmi.quectel.bms.data.packet.v2/dt=2022-01-15/e576abc3-942a-434d-be02-6899798258eb-0_5-13327-290407_20220115211203.parquet at position 65924529
>     at com.facebook.presto.hive.parquet.HdfsParquetDataSource.readInternal(HdfsParquetDataSource.java:66)
>     at com.facebook.presto.parquet.AbstractParquetDataSource.readFully(AbstractParquetDataSource.java:60)
>     at com.facebook.presto.parquet.AbstractParquetDataSource.readFully(AbstractParquetDataSource.java:51)
>     at com.facebook.presto.parquet.reader.ParquetReader.readPrimitive(ParquetReader.java:247)
>     at com.facebook.presto.parquet.reader.ParquetReader.readColumnChunk(ParquetReader.java:330)
>     at com.facebook.presto.parquet.reader.ParquetReader.readBlock(ParquetReader.java:313)
>     at com.facebook.presto.hive.parquet.ParquetPageSource$ParquetBlockLoader.load(ParquetPageSource.java:182)
>     at com.facebook.presto.hive.parquet.ParquetPageSource$ParquetBlockLoader.load(ParquetPageSource.java:160)
>     at com.facebook.presto.common.block.LazyBlock.assureLoaded(LazyBlock.java:291)
>     at com.facebook.presto.common.block.LazyBlock.getLoadedBlock(LazyBlock.java:282)
>     at com.facebook.presto.operator.ScanFilterAndProjectOperator$RecordingLazyBlockLoader.load(ScanFilterAndProjectOperator.java:314)
>     at com.facebook.presto.operator.ScanFilterAndProjectOperator$RecordingLazyBlockLoader.load(ScanFilterAndProjectOperator.java:300)
>     at com.facebook.presto.common.block.LazyBlock.assureLoaded(LazyBlock.java:291)
>     at com.facebook.presto.common.block.LazyBlock.getLoadedBlock(LazyBlock.java:282)
>     at com.facebook.presto.operator.project.InputPageProjection.project(InputPageProjection.java:69)
>     at com.facebook.presto.operator.project.PageProjectionWithOutputs.project(PageProjectionWithOutputs.java:56)
>     at com.facebook.presto.operator.project.PageProcessor$ProjectSelectedPositions.processBatch(PageProcessor.java:323)
>     at com.facebook.presto.operator.project.PageProcessor$ProjectSelectedPositions.process(PageProcessor.java:197)
>     at com.facebook.presto.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:315)
>     at com.facebook.presto.operator.WorkProcessorUtils$YieldingIterator.computeNext(WorkProcessorUtils.java:79)
>     at com.facebook.presto.operator.WorkProcessorUtils$YieldingIterator.computeNext(WorkProcessorUtils.java:65)
>     at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:141)
>     at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:136)
>     at com.facebook.presto.operator.project.MergingPageOutput.getOutput(MergingPageOutput.java:113)
>     at com.facebook.presto.operator.ScanFilterAndProjectOperator.processPageSource(ScanFilterAndProjectOperator.java:295)
>     at com.facebook.presto.operator.ScanFilterAndProjectOperator.getOutput(ScanFilterAndProjectOperator.java:242)
>     at com.facebook.presto.operator.Driver.processInternal(Driver.java:418)
>     at com.facebook.presto.operator.Driver.lambda$processFor$9(Driver.java:301)
>     at com.facebook.presto.operator.Driver.tryWithLock(Driver.java:722)
>     at com.facebook.presto.operator.Driver.processFor(Driver.java:294)
>     at com.facebook.presto.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:1077)
>     at com.facebook.presto.execution.executor.P
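For contrast, the ABFS page referenced above documents client-side retry settings roughly like the following. The keys and values here are recalled from that documentation and should be verified against the hadoop-azure release in use; they apply to the abfs/abfss scheme, not to wasb:

```xml
<!-- ABFS retry tuning as described on the referenced hadoop-azure/abfs page.
     Keys and values recalled from memory; verify before relying on them. -->
<property>
  <name>fs.azure.io.retry.max.retries</name>
  <value>30</value>
</property>
<property>
  <name>fs.azure.io.retry.backoff.interval</name>
  <value>3000</value> <!-- milliseconds -->
</property>
```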
[GitHub] [hadoop] liubingxing opened a new pull request #4088: HDFS-16514. Reduce the failover sleep time if multiple namenode are c…
liubingxing opened a new pull request #4088: URL: https://github.com/apache/hadoop/pull/4088 JIRA: [HDFS-16514](https://issues.apache.org/jira/browse/HDFS-16514)

Recently, we used the [Standby Read] feature in our test cluster and deployed four namenodes as follows:
node1 -> active nn
node2 -> standby nn
node3 -> observer nn
node4 -> observer nn

If we set `dfs.client.failover.random.order=true`, the client may fail over twice and wait a long time before it can send an msync to the active namenode.
![image](https://user-images.githubusercontent.com/2844826/159257471-4398ae11-fad3-4aee-8f56-1b89bef2f611.png)
I think we can reduce the sleep time of the first several failovers based on the number of namenodes. For example, if four namenodes are configured, the sleep time of the first three failover operations is set to zero.
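A minimal sketch of the proposed behavior (names and structure are illustrative, not the actual HDFS-16514 patch): with N configured namenodes, the first N-1 failovers skip the backoff entirely, since the client is most likely just cycling past standby and observer nodes.

```java
/** Illustrative sketch of the HDFS-16514 proposal; not the actual patch. */
final class FailoverSleep {
  /**
   * Returns how long the client should sleep before the given failover
   * attempt. The first (numNamenodes - 1) failovers sleep 0 ms; after
   * that, the usual capped exponential backoff applies.
   */
  static long sleepMillis(int failovers, int numNamenodes,
                          long baseMillis, long capMillis) {
    int free = numNamenodes - 1;   // e.g. 3 "free" failovers for 4 NNs
    if (failovers < free) {
      return 0L;                   // still cycling through the namenodes
    }
    double backoff = baseMillis * Math.pow(2, failovers - free);
    return Math.min((long) backoff, capMillis);
  }
}
```

With four namenodes, `sleepMillis(0..2, 4, ...)` returns 0, matching the example in the description.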
[GitHub] [hadoop] hadoop-yetus commented on pull request #4078: HDFS-16510. Fix EC decommission when rack is not enough
hadoop-yetus commented on pull request #4078: URL: https://github.com/apache/hadoop/pull/4078#issuecomment-1073742232

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 42s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| +1 :green_heart: | test4tests | 0m 0s | | The patch appears to include 1 new or modified test files. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 33m 33s | | trunk passed |
| +1 :green_heart: | compile | 1m 28s | | trunk passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 22s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 1s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 29s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 4s | | trunk passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 32s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 15s | | trunk passed |
| +1 :green_heart: | shadedclient | 22m 30s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 20s | | the patch passed |
| +1 :green_heart: | compile | 1m 21s | | the patch passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 1m 21s | | the patch passed |
| +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 1m 15s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 53s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 20s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 51s | | the patch passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 25s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 14s | | the patch passed |
| +1 :green_heart: | shadedclient | 22m 24s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| -1 :x: | unit | 296m 52s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4078/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. |
| | | | 397m 23s | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.datanode.TestBlockScanner |
| | hadoop.hdfs.server.mover.TestStorageMover |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
| | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.server.datanode.TestLargeBlockReport |
| | hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
| | hadoop.hdfs.server.datanode.TestBlockRecovery2 |
| | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
| | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
| | hadoop.hdfs.server.mover.TestMover |
| | hadoop.hdfs.server.datanode.TestDataNodeReconfiguration |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4078/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4078 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 3a4bfccb7a6c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 5189b31c69764ff5f731fbfe8a1033594c83a427 |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Result
[GitHub] [hadoop] tomscut commented on a change in pull request #4057: HDFS-16498. Fix NPE for checkBlockReportLease
tomscut commented on a change in pull request #4057: URL: https://github.com/apache/hadoop/pull/4057#discussion_r830932014

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java

```diff
@@ -2751,6 +2751,11 @@ public boolean checkBlockReportLease(BlockReportContext context,
       return true;
     }
     DatanodeDescriptor node = datanodeManager.getDatanode(nodeID);
+    if (node == null) {
+      final UnregisteredNodeException e = new UnregisteredNodeException(nodeID, null);
+      NameNode.stateChangeLog.error("BLOCK* NameSystem.getDatanode: " + e.getLocalizedMessage());
+      throw e;
```

Review comment:
> Can you share the log message and the exception trace after this change? We have passed null here in the exception. I feel like it can lead to something like: `Node null is expected to serve this storage`
>
> Which doesn't make sense to me. Maybe a more appropriate message should be there.

@ayushtkn Yes, you are right. The log will be: `Node null is expected to serve this storage`. I passed `node` in the last commit, but there was a spotbugs warning: [link](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4057/2/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html). So I passed `null`, just to fit the UnregisteredNodeException constructor. Do you have any suggestions for this? Thank you very much.
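One illustrative direction for the message problem discussed in this thread. This is a sketch only: it returns a plain IOException via a made-up helper rather than using the real UnregisteredNodeException constructor, and it is not the committed fix.

```java
import java.io.IOException;

/** Sketch of a clearer rejection message; not the committed HDFS-16498 fix. */
final class BlockReportLeaseCheck {
  /**
   * Builds a message naming the offending datanode, instead of the
   * "Node null is expected to serve this storage" text produced by
   * new UnregisteredNodeException(nodeID, null).
   */
  static IOException unregisteredNode(String dnAddress, String dnUuid) {
    String msg = "BLOCK* checkBlockReportLease: datanode " + dnAddress
        + " (uuid " + dnUuid + ") is not registered; rejecting block report.";
    return new IOException(msg); // the real code keeps UnregisteredNodeException
  }
}
```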
[GitHub] [hadoop] hadoop-yetus commented on pull request #4087: HDFS-16513. [SBN read] Observer Namenode does not trigger the edits r…
hadoop-yetus commented on pull request #4087: URL: https://github.com/apache/hadoop/pull/4087#issuecomment-1073667887

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Logfile | Comment |
|:----:|----------:|:--------|:-------:|:-------:|
| +0 :ok: | reexec | 0m 51s | | Docker mode activated. |
| | | | | _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | | No case conflicting files found. |
| +0 :ok: | codespell | 0m 0s | | codespell was not available. |
| +1 :green_heart: | @author | 0m 0s | | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | | | | _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 35m 29s | | trunk passed |
| +1 :green_heart: | compile | 1m 31s | | trunk passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | compile | 1m 26s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | checkstyle | 1m 1s | | trunk passed |
| +1 :green_heart: | mvnsite | 1m 32s | | trunk passed |
| +1 :green_heart: | javadoc | 1m 9s | | trunk passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 34s | | trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 16s | | trunk passed |
| +1 :green_heart: | shadedclient | 23m 44s | | branch has no errors when building and testing our client artifacts. |
| | | | | _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 1m 22s | | the patch passed |
| +1 :green_heart: | compile | 1m 25s | | the patch passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javac | 1m 25s | | the patch passed |
| +1 :green_heart: | compile | 1m 15s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | javac | 1m 15s | | the patch passed |
| +1 :green_heart: | blanks | 0m 0s | | The patch has no blanks issues. |
| +1 :green_heart: | checkstyle | 0m 52s | | the patch passed |
| +1 :green_heart: | mvnsite | 1m 23s | | the patch passed |
| +1 :green_heart: | javadoc | 0m 55s | | the patch passed with JDK Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 |
| +1 :green_heart: | javadoc | 1m 21s | | the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| +1 :green_heart: | spotbugs | 3m 29s | | the patch passed |
| +1 :green_heart: | shadedclient | 24m 1s | | patch has no errors when building and testing our client artifacts. |
| | | | | _ Other Tests _ |
| +1 :green_heart: | unit | 237m 34s | | hadoop-hdfs in the patch passed. |
| +1 :green_heart: | asflicense | 0m 46s | | The patch does not generate ASF License warnings. |
| | | | 343m 30s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4087/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/4087 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
| uname | Linux 956ddb79f286 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/bin/hadoop.sh |
| git revision | trunk / 0cf9128bf5f6a2ce5fc9842c6966e374c7aae7fd |
| Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
| Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4087/1/testReport/ |
| Max. process+thread count | 2880 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4087/1/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hfutatzhanghb commented on pull request #3976: HDFS-16452. msync RPC should send to acitve namenode directly
hfutatzhanghb commented on pull request #3976: URL: https://github.com/apache/hadoop/pull/3976#issuecomment-1073612305 Hi @xkrogen, sorry for disturbing you. I have a new idea about what we have discussed. In practice, we usually plan some machines as Observer NameNodes before setting up the cluster. Can we add a configuration entry in hdfs-site.xml to specify the nnids of the Observer NameNodes? After doing so, when we initialize the failover proxy, we can avoid adding observer namenodes to the proxies list. I am looking forward to your reply. Thanks a lot.
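A sketch of what such an entry could look like. The property name is purely hypothetical (no such key exists in hdfs-site.xml today), assuming a nameservice ns1 whose namenodes are nn1 through nn4:

```xml
<!-- Hypothetical property sketched for the proposal above; it does not
     exist in Hadoop. Assumes dfs.ha.namenodes.ns1 = nn1,nn2,nn3,nn4. -->
<property>
  <name>dfs.ha.observer.namenodes.ns1</name>
  <value>nn3,nn4</value>
  <description>
    NameNode IDs under nameservice ns1 that are planned as Observers.
    A failover proxy provider could skip these when building the list
    of proxies used for write and msync calls.
  </description>
</property>
```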
[GitHub] [hadoop] tomscut commented on pull request #4057: HDFS-16498. Fix NPE for checkBlockReportLease
tomscut commented on pull request #4057: URL: https://github.com/apache/hadoop/pull/4057#issuecomment-1073611615
> Had a quick look; the prod change makes sense to me. The `datanodeManager.getDatanode(nodeID)` method shows it can return null if the node isn't found.
>
> The test is a little complex and isn't showing some actual scenario. If this issue is caused by some delay or race condition, will mocking something to create some delays and so on help?
>
> Try to get a test which shows the actual scenario; I found it really hard to follow this test. If nothing helps, the worst case would be to add some comments explaining things in detail in the test.

Thank you @ayushtkn very much for your review and detailed suggestions. I will update the code.
[GitHub] [hadoop] ayushtkn commented on a change in pull request #4057: HDFS-16498. Fix NPE for checkBlockReportLease
ayushtkn commented on a change in pull request #4057: URL: https://github.com/apache/hadoop/pull/4057#discussion_r830838309

## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockReportLease.java

```diff
@@ -136,6 +137,48 @@ public void testCheckBlockReportLease() throws Exception {
     }
   }
 
+  @Test
+  public void testCheckBlockReportLeaseWhenDnUnregister() throws Exception {
+    HdfsConfiguration conf = new HdfsConfiguration();
+    Random rand = new Random();
+
+    try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
+        .numDataNodes(1).build()) {
+      FSNamesystem fsn = cluster.getNamesystem();
+      BlockManager blockManager = fsn.getBlockManager();
+      String poolId = cluster.getNamesystem().getBlockPoolId();
+      NamenodeProtocols rpcServer = cluster.getNameNodeRpc();
+
+      // Remove the unique datanode to simulate the unregistered situation.
+      DataNode dn = cluster.getDataNodes().get(0);
+      blockManager.getDatanodeManager().getDatanodeMap().remove(dn.getDatanodeUuid());
+
+      // Trigger BlockReport.
+      DatanodeRegistration dnRegistration = dn.getDNRegistrationForBP(poolId);
+      StorageReport[] storages = dn.getFSDataset().getStorageReports(poolId);
+      ExecutorService pool = Executors.newFixedThreadPool(1);
+      BlockReportContext brContext = new BlockReportContext(1, 0,
+          rand.nextLong(), 1);
+      Future sendBRfuturea = pool.submit(() -> {
```

Review comment: The variable name is very confusing; I couldn't understand what the "a" at the end means.

## File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlockReportLease.java

```diff
@@ -136,6 +137,48 @@ public void testCheckBlockReportLease() throws Exception {
+    try (MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
+        .numDataNodes(1).build()) {
```

Review comment: By default, the number of datanodes is 1.

## File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java

```diff
@@ -2751,6 +2751,11 @@ public boolean checkBlockReportLease(BlockReportContext context,
       return true;
     }
     DatanodeDescriptor node = datanodeManager.getDatanode(nodeID);
+    if (node == null) {
+      final UnregisteredNodeException e = new UnregisteredNodeException(nodeID, null);
+      NameNode.stateChangeLog.error("BLOCK* NameSystem.getDatanode: " + e.getLocalizedMessage());
+      throw e;
```

Review comment: Can you share the log message and the exception trace after this change? We have passed null here in the exception. I feel like it can lead to something like: `Node null is expected to serve this storage`, which doesn't make sense to me. Maybe a more appropriate message should be there.
[GitHub] [hadoop] tomscut commented on pull request #4057: HDFS-16498. Fix NPE for checkBlockReportLease
tomscut commented on pull request #4057: URL: https://github.com/apache/hadoop/pull/4057#issuecomment-1073569715 Hi @ayushtkn @Hexiaoqiao @ferhui, please take a look at this. Thanks.
[GitHub] [hadoop] tomscut removed a comment on pull request #4057: HDFS-16498. Fix NPE for checkBlockReportLease
tomscut removed a comment on pull request #4057: URL: https://github.com/apache/hadoop/pull/4057#issuecomment-1069166429 Hi @ayushtkn, please take a look. Thank you.
[GitHub] [hadoop] tomscut commented on pull request #4067: HDFS-16503. Should verify whether the path name is valid in the WebHDFS
tomscut commented on pull request #4067: URL: https://github.com/apache/hadoop/pull/4067#issuecomment-1073547712 Thanks @ayushtkn.
[GitHub] [hadoop] ayushtkn merged pull request #4067: HDFS-16503. Should verify whether the path name is valid in the WebHDFS
ayushtkn merged pull request #4067: URL: https://github.com/apache/hadoop/pull/4067
[jira] [Commented] (HADOOP-18166) Jenkins jobs intermittently failing in the end
[ https://issues.apache.org/jira/browse/HADOOP-18166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17509647#comment-17509647 ]

Ayush Saxena commented on HADOOP-18166:
---------------------------------------

Created this ticket for tracking in case anyone has an idea about this. I checked with the Infra folks a couple of days back; they disowned this.
cc [~aajisaka], just in case you have any pointers or ideas on how we can solve this.

> Jenkins jobs intermittently failing in the end
> ----------------------------------------------
>
>                 Key: HADOOP-18166
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18166
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Ayush Saxena
>            Priority: Critical
>
> The PR results are sometimes not completed:
> {noformat}
> 13:52:57 Recording test results
> 13:53:18 Remote call on hadoop10 failed
> [Pipeline] echo
> 13:53:18 junit processing: java.io.IOException: Remote call on hadoop10 failed
> {noformat}
> Ref: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4081/8/consoleFull
>
> Daily builds also face similar issues:
> {noformat}
> java.io.IOException: Pipe closed after 0 cycles
>     at org.apache.sshd.common.channel.ChannelPipedInputStream.read(ChannelPipedInputStream.java:126)
>     at org.apache.sshd.common.channel.ChannelPipedInputStream.read(ChannelPipedInputStream.java:105)
>     at hudson.remoting.FlightRecorderInputStream.read(FlightRecorderInputStream.java:93)
>     at hudson.remoting.ChunkedInputStream.readHeader(ChunkedInputStream.java:74)
>     at hudson.remoting.ChunkedInputStream.readUntilBreak(ChunkedInputStream.java:104)
>     at hudson.remoting.ChunkedCommandTransport.readBlock(ChunkedCommandTransport.java:39)
>     at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
>     at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:61)
> Caused: java.io.IOException: Backing channel 'hadoop19' is disconnected.
> {noformat}
> Ref:
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/810/console
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/807/console
[jira] [Created] (HADOOP-18166) Jenkins jobs intermittently failing in the end
Ayush Saxena created HADOOP-18166:
-------------------------------------

             Summary: Jenkins jobs intermittently failing in the end
                 Key: HADOOP-18166
                 URL: https://issues.apache.org/jira/browse/HADOOP-18166
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Ayush Saxena

The PR results are sometimes not completed:
{noformat}
13:52:57 Recording test results
13:53:18 Remote call on hadoop10 failed
[Pipeline] echo
13:53:18 junit processing: java.io.IOException: Remote call on hadoop10 failed
{noformat}
Ref: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4081/8/consoleFull

Daily builds also face similar issues:
{noformat}
java.io.IOException: Pipe closed after 0 cycles
    at org.apache.sshd.common.channel.ChannelPipedInputStream.read(ChannelPipedInputStream.java:126)
    at org.apache.sshd.common.channel.ChannelPipedInputStream.read(ChannelPipedInputStream.java:105)
    at hudson.remoting.FlightRecorderInputStream.read(FlightRecorderInputStream.java:93)
    at hudson.remoting.ChunkedInputStream.readHeader(ChunkedInputStream.java:74)
    at hudson.remoting.ChunkedInputStream.readUntilBreak(ChunkedInputStream.java:104)
    at hudson.remoting.ChunkedCommandTransport.readBlock(ChunkedCommandTransport.java:39)
    at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
    at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:61)
Caused: java.io.IOException: Backing channel 'hadoop19' is disconnected.
{noformat}
Ref:
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/810/console
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/807/console
[GitHub] [hadoop] ayushtkn commented on pull request #4081: HDFS-13248: Namenode needs to use the actual client IP when going through RBF proxy.
ayushtkn commented on pull request #4081: URL: https://github.com/apache/hadoop/pull/4081#issuecomment-1073536111 +1, TestRouterDistCpProcedure isn't related; we should track that separately.